2309.08144
Two-Step Knowledge Distillation for Tiny Speech Enhancement
Tiny, causal models are crucial for embedded audio machine learning applications. Model compression can be achieved via distilling knowledge from a large teacher into a smaller student model. In this work, we propose a novel two-step approach for tiny speech enhancement model distillation. In contrast to the standard approach of a weighted mixture of distillation and supervised losses, we firstly pre-train the student using only the knowledge distillation (KD) objective, after which we switch to a fully supervised training regime. We also propose a novel fine-grained similarity-preserving KD loss, which aims to match the student's intra-activation Gram matrices to that of the teacher. Our method demonstrates broad improvements, but particularly shines in adverse conditions including high compression and low signal to noise ratios (SNR), yielding signal to distortion ratio gains of 0.9 dB and 1.1 dB, respectively, at -5 dB input SNR and 63x compression compared to baseline.
Rayan Daod Nathoo, Mikolaj Kegler, Marko Stamenovic
2023-09-15T04:19:38Z
http://arxiv.org/abs/2309.08144v1
# Two-Step Knowledge Distillation for Tiny Speech Enhancement

###### Abstract

Tiny, causal models are crucial for embedded audio machine learning applications. Model compression can be achieved via distilling knowledge from a large teacher into a smaller student model. In this work, we propose a novel two-step approach for tiny speech enhancement model distillation. In contrast to the standard approach of a weighted mixture of distillation and supervised losses, we firstly pre-train the student using only the knowledge distillation (KD) objective, after which we switch to a fully supervised training regime. We also propose a novel fine-grained similarity-preserving KD loss, which aims to match the student's intra-activation Gram matrices to that of the teacher. Our method demonstrates broad improvements, but particularly shines in adverse conditions including high compression and low signal to noise ratios (SNR), yielding signal to distortion ratio gains of 0.9 dB and 1.1 dB, respectively, at -5 dB input SNR and 63\(\times\) compression compared to baseline.

Rayan Daod Nathoo†, Mikolaj Kegler†, Marko Stamenovic
Bose Corporation, USA

_Keywords:_ speech enhancement, knowledge distillation, tinyML, model compression

† These authors contributed equally to this work.

## 1 Introduction

In recent years, deep neural network (DNN) models have become a common approach to the speech enhancement (SE) problem, due to their performance [1, 2, 3]. However, large, powerful models are often unsuitable for resource-constrained platforms, like hearing aids or wearables, because of their memory footprint, computational latency, and power consumption [2, 4, 5, 6]. To meet these constraints, audio TinyML research tends to focus on designing model architectures with small numbers of parameters, using model compression techniques to reduce the size of large models, or both [4, 5, 6, 7].

Pruning is a popular method for reducing the size of DNN models for SE [4, 5, 6, 8]. The goal of pruning is to remove the weights contributing least to model performance. In its simplest form, this can be performed post-training by removing weights with the lowest magnitudes. Online pruning, where the model is trained and pruned concurrently, builds on post-training pruning by exposing the model to pruning errors while training, allowing it to adapt to this form of compression noise [4]. Unstructured pruning of individual weights can yield impressive model size reduction with little performance sacrifice, but corresponding savings in computational throughput are not possible without hardware support for sparse inference, which is unusual in embedded hardware. Structured pruning of blocks of weights and/or neurons is often designed with broader hardware compatibility in mind, but the performance drop tends to be larger than for the unstructured case [6].

In contrast to pruning, knowledge distillation (KD) adopts a different framework. The goal of KD is to utilize a strong pre-trained _teacher_ model to guide the training of the smaller _student_ [9, 10, 11]. The underlying assumption is that the pre-trained teacher network offers additional useful context compared to the ground truth data by itself. Unlike pruning, this process does not involve modifying the student network from its original _dense_ form, which reduces the complexity of the deployment process. In this work, we focus on KD due to its above-outlined benefits over pruning. KD methods have been applied to various classification tasks in the audio domain [12, 13].
However, KD has not been extensively explored for causal low-latency SE, which often requires tiny networks (sub-100k parameters) optimized for low-resource wearable devices, such as hearing aids [5, 6]. So-called _response-based_ KD approaches use the pre-trained teacher model's outputs to train a student network [14, 15]. However, distillation can be further facilitated by taking advantage of intermediate representations of the two models, not just their outputs [10]. One common obstacle in such _feature-based_ KD is the dimensionality mismatch between teacher and student activations due to the model size difference. To alleviate this issue, [16] proposed aligning intermediate features, while [17] used attention maps to do so. The latter was applied in the context of SE in [18] using considerably large, non-causal student models intended for offline applications. In [19], the authors addressed the dimensionality mismatch problem for the causal SE models by using frame-level Similarity Preserving KD [20] (SPKD). SPKD captures the similarity patterns between network activations for different training examples and aims to match those patterns between the student and the frozen pre-trained teacher models. The authors of [19] also introduced fusion blocks, analogous to [21], to distill relationships between consecutive layers. Here, we show that the efficacy of conventional KD methods is limited for tiny, causal SE models. To improve distillation efficacy, we first extend the method from [19] by computing SPKD for each bin of the latent representations, corresponding to the time frame (as in [19]) but also the frequency bin of the input, thus providing more resolution for KD loss optimization. The proposed extension outperforms other similarity-based KD methods which we also explore. Second, we hypothesize that matching a large teacher model might be challenging for small student models and thus may lead to sub-optimal performance. Inspired by [22], we propose a novel two-step framework for distilling tiny SE models. In the first step, the student is pre-trained using only the KD criterion to match the activation patterns of the teacher, with no additional ground truth supervision. The goal of this unsupervised KD pre-training is to make the student similar to the teacher prior to the main training. Then, the pre-trained student model is further optimized in a supervised fashion and/or using KD routines. We find that pre-training using the proposed SPKD method at the level of the individual bin of the latent representation, followed by fully supervised training yields superior performance compared to other distillation approaches utilizing weighted mixtures of KD and supervised losses. We report the performance of our method across various student model sizes, input mixture signal-to-noise ratios (SNRs), and finally, assess the similarity between the activation patterns of the teacher and distilled student. ## 2 Methods ### Model architecture Our backbone architecture for the exploration of tiny SE KD is the Convolutional Recurrent U-Net for SE (CRUSE) topology [7]. However, note that the methodology developed here can, in principle, be applied to any other architecture. The CRUSE model operates in the time-frequency domain and takes power-law compressed log-mel spectrograms (LMS) as input. 
The LMS is obtained by taking the magnitude of the complex short-time Fourier transform (STFT, 512/256 samples frame/hop size), processing it through a Mel-space filterbank (80 bins, covering 50-8k Hz range) and finally compressing the result by raising it to the power of \(0.3\). The model output is a real-valued time-frequency mask bounded within the range \((0,1)\) through the sigmoid activation of the final block. The mask is applied to the noisy model input and reconstituted into the time domain using the inverse STFT and the noisy input phase. The model comprises four encoder/decoder blocks and a bottleneck with grouped GRU units (4 groups), reducing the computational complexity compared to a conventional GRU layer with the same number of units [23]. The encoder/decoder blocks are composed of 2D convolution/transpose convolution layers with (2, 3) kernels (time, frequency) and (1, 2) strides, followed by cumulative layer normalization [24] and leaky ReLU activation (\(\alpha\) = 0.2). To further reduce the model complexity, skip connections between the encoder and decoder used in classic U-Net are replaced with 1x1 convolutions, whose outputs are summed into the decoder inputs. We enforce the model's frame-level causality by using causal convolutions and causal cumulative layer norms. The total algorithmic latency of the model is 32 ms (single STFT frame) [2]. In our experiments, both teacher and student are CRUSE models and their sizes are adjusted by changing the number of units in each block. In particular, the teacher uses {32, 64, 128, 192} encoder/decoder channels and 960 bottleneck GRU units, resulting in 1.9M parameters, and 13.34 MOps/frame (i.e. the number of operations required to process a single STFT frame). Our default student uses {8, 16, 32, 32} encoder/decoder channels and 160 bottleneck GRU units resulting in 62k parameters (3.3% of the teacher), and 0.84 MOps/frame (6.3% of the teacher). ### Self-similarity local knowledge distillation Inspired by previous work [19, 20], we address the issue of dimensionality mismatch between teacher and student models by computing similarity-based distillation losses. The method captures and compares the relationship between batch items at each layer output, between teacher and student (Fig. 1a, \(\mathcal{L}_{KD}^{local}\)). We refer to this relationship as the self-similarity Gram matrix \(\mathbf{G}_{x}\). Self-similarity matrices (Fig. 1b) can be computed for an example network latent activation \(\mathbf{X}\) of shape [\(b\), \(c\), \(t\), \(f\)], where \(b\) - batch size, \(c\) - channel, \(t\) - activation _width_ (corresponding to the input time dimension), \(f\) - activation _height_ (corresponding to the input frequency dimension), as shown in Fig. 1b. The original implementation from [20] involves reshaping \(\mathbf{X}\) to [\(b\), \(ctf\)] and matrix multiplying it by its transpose \(\mathbf{X}^{T}\) to obtain the [\(b\), \(b\)] symmetric self-similarity matrix \(\mathbf{G}\). Analogously, this operation can be performed for each \(t\) or \(f\) dimension independently with resulting \(\mathbf{G}_{t/f}\) matrices of size [\(t/f\), \(b\), \(b\)]. Such an increase in granularity improved the KD performance in [19]. Here, we obtain even more detailed intra-activation Gram matrices by considering each \((t,f)\) bin separately, resulting in the \(\mathbf{G}_{tf}\) self-similarity matrix with shape [\(t\), \(f\), \(b\), \(b\)]. 
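As an illustration of the construction just described, the following is a minimal sketch (not the authors' code) of the three kinds of self-similarity matrices, assuming PyTorch activations laid out as [\(b\), \(c\), \(t\), \(f\)]; any additional normalization used in SPKD implementations is omitted.

```python
import torch

def gram_global(act):
    # G: [b, b]; original SPKD formulation, activations flattened to [b, c*t*f].
    b = act.shape[0]
    flat = act.reshape(b, -1)
    return flat @ flat.T

def gram_per_frame(act):
    # G_t: [t, b, b]; one self-similarity matrix per time frame (as in [19]).
    t = act.shape[2]
    x = act.permute(2, 0, 1, 3).reshape(t, act.shape[0], -1)  # [t, b, c*f]
    return x @ x.transpose(1, 2)

def gram_per_bin(act):
    # G_tf: [t, f, b, b]; proposed fine-grained variant, one matrix per (t, f) bin.
    x = act.permute(2, 3, 0, 1)                               # [t, f, b, c]
    return x @ x.transpose(2, 3)

# Toy activation: batch of 8, 16 channels, 100 time frames, 40 frequency bins.
a = torch.randn(8, 16, 100, 40)
print(gram_global(a).shape, gram_per_frame(a).shape, gram_per_bin(a).shape)
# torch.Size([8, 8]) torch.Size([100, 8, 8]) torch.Size([100, 40, 8, 8])
```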
Finally, the local KD loss is computed using self-similarity matrices \(\mathbf{G}_{x}\) of any kind \(x\) obtained from teacher \(T\) and student \(S\) as:

\[\mathcal{L}_{KD}^{local}=\frac{1}{b^{2}}\sum_{i}\left\|\mathbf{G}_{x}^{T_{i}}-\mathbf{G}_{x}^{S_{i}}\right\|_{F}^{2}, \tag{1}\]

where \(i\) is the layer index and \(\left\|\cdot\right\|_{F}\) is the Frobenius norm.

### Information flow knowledge distillation

The above-outlined local similarity losses can be extended to capture relationships between activations of subsequent layers of the teacher and student models (Fig. 1a, \(\mathcal{L}_{KD}^{flow}\)). The method is inspired by the Flow of Solution Procedure (FSP) matrices introduced in [22] and aims to not only match local similarity between the teacher and student in the corresponding layers but also global inter-layer relations. We propose two versions of flow matrices between layers \(i\) and \(j\) in our model (Fig. 1c). The first one, \(\mathbf{G}_{t}^{i\to j}\), leverages \(\mathbf{G}_{t}\) self-similarity matrices. Since each self-similarity block shares the \(t\) dimension, the interaction between the layers' self-similarity can be captured by performing matrix multiplication of \(\mathbf{G}_{t}^{i}\) and transposed \(\mathbf{G}_{t}^{j}\) (both sized [\(t\), \(b\), \(b\)]) for each time frame \(t\). The second version leverages \(\mathbf{G}_{tf}\) self-similarity matrices. However, the \(f\) dimension in our model changes for each block due to the strided convolutions. To quantify the relationship between layers \(i\) and \(j\) of different dimensions, we reshape \(\mathbf{G}_{tf}^{i/j}\) to the size of [\(t\), \(b\), \(f_{i/j}\), \(b\)]. Then, for each time-batch-item pair (\(t\), \(b\)), we obtain a [\(f_{i/j}\), \(b\)] sub-matrix, which can be matrix multiplied with its transpose to obtain the flow matrix \(\mathbf{G}_{tf}^{i\to j}\) of size [\(t\), \(b\), \(f_{i}\), \(f_{j}\)]. We define the loss similarly to Eq. 1 by comparing the teacher \(\mathbf{G}_{x}^{T_{i\to j}}\) flow matrix with the student \(\mathbf{G}_{x}^{S_{i\to j}}\) flow matrix, of the same kind \(x\), for every 2-layer combination (\(i\), \(j\)):

\[\mathcal{L}_{KD}^{flow}=\frac{1}{b^{2}}\sum_{i}\sum_{j>i}\left\|\mathbf{G}_{x}^{T_{i\to j}}-\mathbf{G}_{x}^{S_{i\to j}}\right\|_{F}^{2} \tag{2}\]

Figure 1: (a) Distillation process overview. (b) Self-similarity Gram matrices computation. (c) Flow matrices computation (\(\bigotimes\) denotes matrix multiplication). Note that, for clarity, transpositions and matrix multiplications are applied only to the last two dimensions of each tensor.

### Training objective and two-step KD

We use phase-sensitive spectrum approximation (PSA) [25], with clean speech as a target, as the supervised portion \(\mathcal{L}_{PSA}\) of the total loss. The KD portion \(\mathcal{L}_{KD}\) of the total loss does not use the ground truth objective but instead uses features obtained from the pre-trained, frozen teacher model. In particular, \(\mathcal{L}_{KD}\) can match model outputs (i.e. response distillation, analogous to [15]), \(\mathbf{G}_{x}\) (\(\mathcal{L}_{KD}^{local}\)), or \(\mathbf{G}_{x}^{i\to j}\) (\(\mathcal{L}_{KD}^{flow}\)) matrices introduced in Sections 2.2 and 2.3, respectively.
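A minimal sketch of the two distillation losses, Eqs. (1) and (2), restricted here to the time-wise \(\mathbf{G}_{t}\) case (all layers share the time dimension since the convolutions stride only along frequency); the lists of per-layer Gram matrices are assumed to be built as in the previous snippet, with the teacher's matrices detached from the graph.

```python
import torch

def local_kd_loss(teacher_grams, student_grams):
    # Eq. (1): squared Frobenius distance between matching Gram matrices, summed
    # over layers and scaled by 1/b^2. Teacher matrices come from the frozen teacher.
    b = teacher_grams[0].shape[-1]
    loss = sum(((g_t - g_s) ** 2).sum() for g_t, g_s in zip(teacher_grams, student_grams))
    return loss / b ** 2

def flow_matrix_t(g_i, g_j):
    # G_t^{i->j}: per-time-frame product of two [t, b, b] self-similarity matrices.
    return g_i @ g_j.transpose(1, 2)

def flow_kd_loss(teacher_grams, student_grams):
    # Eq. (2): compare flow matrices for every layer pair (i, j) with j > i.
    b = teacher_grams[0].shape[-1]
    loss, n = 0.0, len(teacher_grams)
    for i in range(n):
        for j in range(i + 1, n):
            f_t = flow_matrix_t(teacher_grams[i], teacher_grams[j])
            f_s = flow_matrix_t(student_grams[i], student_grams[j])
            loss = loss + ((f_t - f_s) ** 2).sum()
    return loss / b ** 2
```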
\(\mathcal{L}_{PSA}\) and \(\mathcal{L}_{KD}\) are mixed using \(\gamma\) coefficient to form the total loss: \[\mathcal{L}=\gamma\mathcal{L}_{KD}+(1-\gamma)\mathcal{L}_{PSA} \tag{3}\] Inspired by [22], we propose a two-step KD approach by separating the student distillation process into two distinct parts. **Step 1**: In the first step, \(\gamma=1\) for a fixed set of epochs to solely minimize \(\mathcal{L}_{KD}\). While excluding the supervised \(\mathcal{L}_{PSA}\) does not contribute to the optimal objective performance, this step provides strong initial weights for further student model training. **Step 2**: After this pre-training step, the student is further optimized to maximize its objective performance using a fully supervised loss by setting \(\gamma=0\), (\(\mathcal{L}_{PSA}\) only) or a weighted \(\mathcal{L}_{KD}\)/\(\mathcal{L}_{PSA}\) loss obtained by setting \(\gamma=0.5\). For one-step KD using a weighted \(\mathcal{L}_{KD}\)/\(\mathcal{L}_{PSA}\) loss, we set \(\gamma=0.5\). ### Training setup During training, each epoch consists of 5,000 training steps, with each step being a 32-item batch of 2-second-long audio clips. We used the Adam optimizer with \(6\cdot 10^{-5}\) learning rate. The teacher model is trained till convergence to ensure the best performance for subsequent distillations. We train each student model for a total of 400 epochs (2M steps, 35.5k+ hours of audio). For student model pre-training (Step 1, Section 2.4), we use 100 epochs with \(\gamma=1\), thus excluding supervised term \(\mathcal{L}_{PSA}\) (Eq. 3). ## 3 Results We use the dataset from the Interspeech 2020 Deep Noise Suppression (MS-DNS) Challenge [1] for experimentation, which consists of 500+ hours of clean speech and 100+ hours of noise, all mono clips sampled at 16 kHz. For model training, we mix the speech and noise at various SNR levels, sampled from a uniform distribution between -5 and 15 dB. We employ a LUFS-based SNR calculation for more perceptually relevant mixtures and to de-emphasize the effects of impulsive noises [26]. To evaluate the trained models we use the non-reverberant evaluation set consisting of 150 clips of noisy speech samples and their respective clean references. We quantify the performance of each model via Signal-to-Distortion ratio (SDR) [27], wide-band Perceptual Evaluation of Speech Quality (PESQ) [28], and extended Short-Time Objective Intelligibility (eSTOI) [29]. We also report scores obtained from DNS-MOS P.835 [30], a DNN mean opinion score (MOS) estimator showing a good correlation with subjective ratings. All of our results are reported as improvements over unprocessed noisy inputs (\(\Delta\)). ### Self-similarity local KD Table 1 shows the efficacy of local similarity-based one-step KD approaches when training student models from scratch. Using teacher output as \(\mathcal{L}_{KD}\) in Eq. 3 or **G** similarity [20] does not improve, or even deteriorates the student performance. \(\mathbf{G}_{t}\) similarity proposed in [19] provides 0.16 dB SDR improvement over the student alone, alongside the best PESQ score. Our proposed time-frequency similarity calculation method \(\mathbf{G}_{tf}\) outperforms \(\mathbf{G}_{t}\) by doubling its SDR improvement (+0.34 dB, w.r.t. the student alone) and increasing all other scores. This suggests that increasing the granularity of the similarity matrix in the \(\mathcal{L}_{KD}^{local}\) calculation facilitates the KD process and overall improves the performance of the distilled student model. 
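Before turning to the two-step results, the loss mixing of Eq. (3) and the \(\gamma\) schedule of Section 2.4 can be summarized in a minimal sketch (assumed callables; the epoch split follows Section 2.5, and this is not the authors' training code):

```python
# Two-step KD schedule: 100 KD-only pre-training epochs out of 400 total (Sec. 2.5).
KD_PRETRAIN_EPOCHS = 100
TOTAL_EPOCHS = 400

def total_loss(kd_loss, psa_loss, gamma):
    # Eq. (3): L = gamma * L_KD + (1 - gamma) * L_PSA
    return gamma * kd_loss + (1.0 - gamma) * psa_loss

def gamma_for_epoch(epoch, step2_supervised_only=True):
    if epoch < KD_PRETRAIN_EPOCHS:          # Step 1: KD objective only
        return 1.0
    # Step 2: fully supervised (gamma = 0) or a weighted KD/PSA mixture (gamma = 0.5)
    return 0.0 if step2_supervised_only else 0.5
```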
\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
\multirow{2}{*}{**Model**} & \(\Delta\)**SDR** & \(\Delta\)**PESQ** & \(\Delta\)**eSTOI** & \multicolumn{3}{c}{\(\Delta\)**DNS-MOS**} \\
 & **(dB)** & **(MOS)** & **(\%)** & **BAK** & **OVRL** & **SIG** \\
\hline
Teacher & 8.65 & 1.25 & 10.07 & 1.44 & 0.69 & 0.06 \\
Student & 6.34 & 0.75 & 5.82 & 1.27 & 0.55 & -0.02 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: One-step KD for tiny SE. _Output_: \(\mathcal{L}_{KD}\) comparing teacher and student outputs (similar to [15]). \(\mathbf{G}_{x}\): feature-based \(\mathcal{L}_{KD}\) using the self-similarity matrix of type \(x\) (Fig. 1b). All models are initialized with the same random weights and use \(\gamma=0.5\) (Eq. 3).

### Two-step KD

Table 2 presents the results of the proposed two-step distillation process described in Section 2.4. We find that using time-preserving flow matrices \(\mathbf{G}_{t}^{i\to j}\) as the \(\mathcal{L}_{KD}\) pre-training objective (Step 1) yields comparable or worse performance than using local similarity \(\mathbf{G}_{tf}\) with no pre-training. However, changing the pre-training objective to \(\mathbf{G}_{tf}^{i\to j}\), which captures interactions between latent representations in greater detail, yields improvement across nearly all the metrics when paired with \(\mathbf{G}_{tf}\)-based KD as Step 2.
Most interestingly, pre-training the student with \(\mathbf{G}_{tf}\) criterion and continuing with only the supervised loss \(\mathcal{L}_{PSA}\) provides substantial improvements across all the metrics, especially SDR (+0.44 dB, w.r.t. the student alone) and eSTOI (+0.56%), suggesting improved intelligibility. We further investigate our best two-stage KD approach by performing Central Kernel Alignment (CKA) [31] analysis. In principle, CKA allows us to compare the similarity between activation patterns across different models in response to a set of inputs. We use the entire evaluation dataset to probe the models and compute CKA similarities for each pair of layers (averaged over all the audio clips). Fig. 2-left presents CKA similarity between the teacher and student trained independently. Fig. 2-middle compares the teacher to the student pre-trained with \(\mathbf{G}_{tf}\) criterion (only Step 1). As expected, the first step alone increases the similarity between the corresponding teacher and student layers (diagonal). Finally, Fig. 2-right shows the best student from Table 2, namely Step 1: \(\mathbf{G}_{tf}\)\(\mathcal{L}_{KD}\)-only pre-training, and Step 2: fully-supervised \(\mathcal{L}_{PSA}\). The overall similarity to the teacher decreases but remains much higher than for the student trained independently. This suggests that a brief pre-training distillation (\(\gamma=1\), no \(\mathcal{L}_{PSA}\)) allows the student to develop its unique solution starting from strong prior knowledge inherited from the teacher. ### Impact of the student model size and mixture SNR The MS-DNS evaluation dataset consists of relatively high SNRs between 0 and 19 dB (mean 9.07 dB). To assess the SNR-dependent benefit of the proposed two-step KD approach, we remix the entire evaluation set to obtain mixtures of the same speech and noise clips but at fixed SNRs of -5, 0 and 5 dB. In Table. 3, we observe the inverse relationship between the benefit of our approach and SNR of the noisy mixtures. In particular, for -5 dB SNR mixtures, our KD approach improves student performance by approximately 1 dB SDR, 1.5% eSTOI, and 0.1 DNS-MOS BAK. This is an important observation as tiny SE models (here, 3.3% teacher size) tend to exhibit the most significant performance drop in the low-SNR cases, compared to their larger counterparts [5]. In Table 4 we showcase the efficacy of the proposed KD framework for various student sizes using the same teacher. We observe that the smaller the downstream model, the larger benefit our KD method provides over the student trained alone. In particular, for the 30k-parameter student (\(\sim\)1.5% teacher size), the improvements are the largest with over 1 dB SDR, 0.1 PESQ, and nearly 2% eSTOI. For model sizes above 200k (\(\sim\)15% teacher size), the improvements start to plateau. These findings indicate that our method provides the largest performance boost for the most resource-constrained cases, usually deemed as the most challenging [4, 5, 6]. ## 4 Conclusions This work proposes a novel two-step KD protocol for distilling tiny, causal SE models. No previous KD work has investigated this class of embedded-scale SE. Our framework consists of two distinct steps: 1. Distilling the student model using only KD objective using only our proposed fine-grained self-similarity matrix \(\mathbf{G}_{tf}\) for computing distillation loss; 2. Training the model obtained in Step 1 via supervised loss. 
Our results show that tiny SE models distilled in this fashion perform better than KD methods utilizing weighted loss between supervised and distillation objectives. Our experimental evaluation shows that the proposed two-step KD provides the largest benefits for low-SNR mixtures and smaller student models. Future work should explore integrating the proposed two-step KD with pruning and/or quantization to achieve SE models of even lower complexity and apply the method to other audio-to-audio problems such as source separation, bandwidth extension, or signal improvement. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & **Params / OPS** & \(\Delta\)**SDR** & \(\Delta\)**PESQ** & \(\Delta\)**eSTOI** & \multicolumn{2}{c}{\(\Delta\)**DNS-MOS**} & \multirow{2}{*}{**SIG**} \\ & **(M)** & **(dB)** & **(MOS)** & **(\%)** & **BAK** & **OVRL** & **SIG** \\ \hline Teacher & 1.9 / 13.34 & 8.65 & 1.25 & 10.07 & 1.44 & 0.69 & 0.06 \\ \hline \hline Student & \multirow{2}{*}{0.03 / 0.42} & 4.42 & 0.50 & 2.59 & **1.21** & 0.47 & -0.07 \\ Proposed & & **5.52** & **0.61** & **4.55** & 1.18 & 0.47 & **-0.05** \\ \hline Student & \multirow{2}{*}{0.06 / 0.84} & 6.34 & 0.75 & 5.82 & 1.27 & 0.55 & -0.02 \\ Proposed & & **6.77** & **0.81** & **6.38** & **1.34** & **0.59** & **-0.01** \\ \hline Student & \multirow{2}{*}{0.24 / 2.48} & 7.24 & 0.93 & 7.53 & 1.38 & 0.62 & 0.00 \\ Proposed & & **7.60** & **0.97** & **7.71** & **1.41** & **0.64** & **0.01** \\ \hline Student & \multirow{2}{*}{0.35 / 3.08} & 7.51 & 0.99 & 7.97 & **1.39** & 0.63 & 0.01 \\ Proposed & & **7.54** & **1.01** & **8.22** & 1.38 & **0.64** & **0.02** \\ \hline \hline \end{tabular} \end{table} Table 4: Impact of the student model size on the two-step KD performance. OPS: number of operations per frame at inference time. _Proposed_: two-step KD using \(\mathbf{G}_{tf}\) pre-training followed by fully-supervised training (best in Table 2). \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{**SNR (dB)**} & \multirow{2}{*}{**Model**} & \(\Delta\)**SDR** & \(\Delta\)**PESQ** & \(\Delta\)**eSTOI** & \multicolumn{2}{c}{\(\Delta\)**DNS-MOS**} & \multirow{2}{*}{**SIG**} \\ & & **(dB)** & **(MOS)** & **(\%)** & **BAK** & **OVRL** & **SIG** \\ \hline \multirow{4}{*}{-5} & Teacher & 14.05 & 0.62 & 19.12 & 2.16 & 1.02 & 0.64 \\ & Student & 10.82 & 0.30 & 10.07 & 1.86 & 0.79 & **0.51** \\ & Proposed & **11.73** & **0.35** & **11.61** & **1.98** & **0.81** & 0.47 \\ \hline \multirow{4}{*}{0} & Teacher & 12.30 & 0.92 & 17.83 & 1.99 & 0.98 & 0.40 \\ & Student & 9.65 & 0.49 & 10.56 & 1.75 & 0.75 & **0.26** \\ & Proposed & **10.23** & **0.56** & **11.51** & **1.84** & **0.79** & 0.25 \\ \hline \multirow{4}{*}{5} & Teacher & 10.27 & 1.21 & 13.98 & 1.65 & 0.78 & 0.02 \\ & Student & 7.97 & 0.69 & 8.58 & 1.44 & 0.59 & -0.10 \\ \cline{1-1} & Proposed & **8.43** & **0.76** & **9.32** & **1.51** & **0.62** & **-0.09** \\ \hline \hline \end{tabular} \end{table} Table 3: Evaluation of the two-step KD approach on the MS-DNS dataset removed at fixed SNRs. _Proposed_: two-step KD using \(\mathbf{G}_{tf}\) pre-training followed by fully-supervised training (best in Table 2). Figure 2: Block-wise CKA similarity between students and teacher networks, averaged over the MS-DNS test set. _Mean(diag)_ and _Mean(all)_ denote the average similarity for the corresponding blocks (diagonal) or all the block combinations, respectively.
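For reference, linear CKA between two sets of activations can be computed as in the rough sketch below; how activations are flattened and averaged for the block-wise analysis in Fig. 2 is not specified here, so the flattening to [examples, features] is an assumption.

```python
import torch

def linear_cka(x, y):
    # x: [n, d1], y: [n, d2] activations of two network blocks for the same n inputs.
    x = x - x.mean(dim=0, keepdim=True)      # center each feature over the examples
    y = y - y.mean(dim=0, keepdim=True)
    cross = (y.T @ x).norm(p='fro') ** 2     # ||Y^T X||_F^2
    self_x = (x.T @ x).norm(p='fro')
    self_y = (y.T @ y).norm(p='fro')
    return (cross / (self_x * self_y)).item()
```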
2309.12134
Self-Supervised Contrastive Learning for Robust Audio-Sheet Music Retrieval Systems
Linking sheet music images to audio recordings remains a key problem for the development of efficient cross-modal music retrieval systems. One of the fundamental approaches toward this task is to learn a cross-modal embedding space via deep neural networks that is able to connect short snippets of audio and sheet music. However, the scarcity of annotated data from real musical content affects the capability of such methods to generalize to real retrieval scenarios. In this work, we investigate whether we can mitigate this limitation with self-supervised contrastive learning, by exposing a network to a large amount of real music data as a pre-training step, by contrasting randomly augmented views of snippets of both modalities, namely audio and sheet images. Through a number of experiments on synthetic and real piano data, we show that pre-trained models are able to retrieve snippets with better precision in all scenarios and pre-training configurations. Encouraged by these results, we employ the snippet embeddings in the higher-level task of cross-modal piece identification and conduct more experiments on several retrieval configurations. In this task, we observe that the retrieval quality improves from 30% up to 100% when real music data is present. We then conclude by arguing for the potential of self-supervised contrastive learning for alleviating the annotated data scarcity in multi-modal music retrieval models.
Luis Carvalho, Tobias Washüttl, Gerhard Widmer
2023-09-21T14:54:48Z
http://arxiv.org/abs/2309.12134v1
# Self-Supervised Contrastive Learning for Robust Audio-Sheet Music Retrieval Systems

###### Abstract.

Linking sheet music images to audio recordings remains a key problem for the development of efficient cross-modal music retrieval systems. One of the fundamental approaches toward this task is to learn a cross-modal embedding space via deep neural networks that is able to connect short snippets of audio and sheet music. However, the scarcity of annotated data from real musical content affects the capability of such methods to generalize to real retrieval scenarios. In this work, we investigate whether we can mitigate this limitation with self-supervised contrastive learning, by exposing a network to a large amount of real music data as a pre-training step, by contrasting randomly augmented views of snippets of both modalities, namely audio and sheet images. Through a number of experiments on synthetic and real piano data, we show that pre-trained models are able to retrieve snippets with better precision in all scenarios and pre-training configurations. Encouraged by these results, we employ the snippet embeddings in the higher-level task of cross-modal piece identification and conduct more experiments on several retrieval configurations. In this task, we observe that the retrieval quality improves from 30% up to 100% when real music data is present. We then conclude by arguing for the potential of self-supervised contrastive learning for alleviating the annotated data scarcity in multi-modal music retrieval models. Code and trained models are accessible at [https://github.com/luisfvc/ucasr](https://github.com/luisfvc/ucasr).

_Keywords:_ multi-modal embedding spaces; audio-sheet music retrieval

## 1. Introduction

[...] music libraries, such technologies play an essential role in the indexing, navigation, browsing, and synchronization of multi-modal databases (Kang et al., 2017). One example of such applications is the Piano Music Companion (Piano, 2010), a system that first tries to identify a piano piece that is being played, followed by synchronizing it within the corresponding score in real time. However, a critical limitation of these systems is that they require the score to be available in a symbolic, machine-readable form - e.g., MIDI or MusicXML - which is a serious problem in practical applications. Recent approaches for snippet-level audio-sheet music retrieval attempt to overcome this limitation by learning low-dimensional embeddings directly from the multi-modal data - audio and scans or photographs of scores (Kang et al., 2016; Li et al., 2017; Li et al., 2018).
This is done by training a cross-modal deep convolution neural network (CNN) to project audio and score image snippets onto a shared space where semantically similar items of the two modalities will end up close together, whereas dissimilar ones far apart. Being of a fully supervised nature, this approach has a number of limitations. First, it requires a large amount of labeled training data in order for a model to generalize to unseen data. Second, such annotated data is of complex and expensive nature: it requires fine-grained alignments between time stamps on the audio signal and pixel coordinates in sheet music images in order to obtain matching cross-modal snippets. The annotation process, besides being labor- and time-consuming, requires annotators with specialized musical training who are able to correctly identify and interpret music notation in sheet music images and match them to note onsets in audio recordings. For that reason current approaches rely solely on synthetic datasets, where both the scores and the audios - and the corresponding annotations - are generated from a symbolic score representation; this results in poor generalization to real data, as we will demonstrate in our experiments (see Section 5). In this paper, we explore _self-supervised contrastive learning_ as way to mitigate the data scarcity problem in audio-sheet music snippet retrieval. We propose to contrast differently augmented versions of short fragments of audio recordings and sheet music images, as a pre-training step. The data for this task needs no labels or annotations, so we have an almost infinite supply of this. The key idea is that by trying to solve the pretext problem, the model can learn useful low-level representations, which can then be used for the audio-sheet music snippet retrieval task, where only few annotated data are available. We conduct several experiments in datasets of different natures to demonstrate that the pre-training stage effectively alleviates the performance gap between synthetic and real data. We then use the learned snippet embeddings for the downstream task of cross-modal _piece identification_ and observe improved retrieval performance in all models that were pre-trained. We summarize our contributions as follows. * We design a method for multi-modal self-supervised contrastive learning of audio-sheet music representations with publicly available music data, where the network responsible for each modality can be independently pre-trained and enabled for fine-tuning. * We show through detailed experiments on diverse data-sets that our models outperform the current state-of-the-art method by a significant margin in the task of snippet retrieval. * As a proof of concept, we aggregate snippet embeddings to perform cross-modal piece identification and demonstrate the effectiveness of our improved models, which significantly outperform fully supervised methods. ## 2. Related Work One of the key challenges in audio-sheet music retrieval refers to its multi-modality nature: finding some shared representation that allows for an easy comparison between items from different modalities. 
The traditional methods for connecting printed scores to their relative audio recordings are based on common mid-level representations (Kang et al., 2017; Kang et al., 2017), such as chroma-based features (Kang et al., 2017; Li et al., 2018), symbolic representations (Beng et al., 2017), or the bootleg score (Kang et al., 2018; Li et al., 2018), the latter being defined as a coarse codification of sequences of the main note-heads in a printed score. However generating these mid-level representations involves pre-processing stages which are prone to error, such as optical music recognition (Kang et al., 2017; Li et al., 2018; Li et al., 2018) and automatic music transcription (Li et al., 2018; Li et al., 2018). In order to avoid such unreliable pre-processing components, an alternative approach was proposed in (Li et al., 2018; Li et al., 2018), by designing a two-modal network that is able to learn a shared space between short fragments of score scans and their corresponding audio excerpts. This is done by training the network to minimize the cosine distance between pairs of low-dimensional embeddings from snippets of audio and sheet music, and promising results on synthetic music data indicate the potential of replacing manually-designed common representations with learned spaces. ## 3. The Proposed Method In this section we first briefly describe how current approaches employ deep CNNs to learn a cross-modal embedding space from pairs of matching audio and sheet music snippets. Then we explain Figure 1. Illustration of the audio-sheet music snippet retrieval problem for both search directions. First, one wishes to query an audio excerpt (on the right), represented by its magnitude spectrogram, and retrieve the corresponding sheet music snippet from an image database (on the left). Analogously, one may wish to invert the search direction and retrieve items from an audio database given a sheet music snippet input. All music visualizations were extracted from the MSMD dataset (Li et al., 2018). our proposed method, followed by describing the augmentation strategies for both sheet music and audio samples. ### Learning Audio-Sheet Music Embeddings The fundamental approach to learn correspondences between short snippets of music recordings and sheet music images was first proposed in (Sutskever et al., 2016; Sutskever et al., 2017). This task is formulated as a cross-modal embedding learning problem, where a network is trained to optimize a shared space between the two modalities, by minimizing the cosine distance between musically similar snippets whereas maximizing the distance between non-corresponding items. The network, which is illustrated in Figure 2, consists of two independent pathways, each responsible for one modality. Each pathway is composed of a VGG-style encoder (Vaswani et al., 2017), followed by a multi-layer perceptron layer (MLP) that learns higher-level abstractions from the encoder output. At the top of the network a canonically correlated (CCA) embedding layer (Sutskever et al., 2017) is placed, forcing the two pathways to learn representations that can be projected onto a 32-dimensional shared space. Then a pairwise ranking loss (Sutskever et al., 2017) is employed to minimize the distance between embeddings from matching snippets of different modalities. Let \((\mathbf{x},\mathbf{y})\) represent a pair of corresponding sheet music and audio snippets (positive pairs), as displayed in Figure 2. 
The sheet music pathway is represented by the function \(f\), while \(g\) denotes the audio embedding function. The functions \(f\) and \(g\) map \(\mathbf{x}\) and \(\mathbf{y}\) to the shared low-dimensional space. Then the similarity function \(\text{sim}(\cdot)\), defined as the cosine similarity, is used to compute the final ranking loss : \[\mathcal{L}_{\text{rank}}=\sum_{(\mathbf{x},\mathbf{y})}\sum_{k=1}^{K}\max \left\{0,\alpha-\text{sim}\left(f(\mathbf{x}),g(\mathbf{y})\right)+\text{sim} \left(f(\mathbf{x}),g(\mathbf{y}_{k})\right)\right\}, \tag{1}\] where \(\mathbf{y}_{k}\) for \(k\in 1,2,\ldots,K\) represent additional contrastive (negative) examples from \(K\) non-matching snippets within the same training mini-batch. This ranking loss is applied on all \((\mathbf{x},\mathbf{y})\) pairs of each mini-batch iteration, and the margin parameter \(\alpha\in\mathbb{R}_{+}\) in combination with the \(\max\left\{\cdot\right\}\) function penalize matching snippets that were poorly embedded. After the training is done, the snippet retrieval task illustrated in Figure 1 can then be easily and efficiently performed via nearest-neighbor search in the shared space. ### Self-Supervised Contrastive Learning In this work we build on the SimCLR framework (Sutskever et al., 2016), a self-supervised contrastive method for image representations. The goal is to learn useful representations from unlabeled data using self-supervision. The idea is to train a network encoder to be as invariant as possible concerning a set of given augmentation transforms (Sutskever et al., 2017). In order to do that, different augmentations are applied to a training sample so two distinct views thereof are generated (which constitute a "positive pair" that represent the same item). Then a Siamese network (Sutskever et al., 2016) encodes both views into embeddings, and a contrastive loss function is applied in order to bring together latent representations from the same sample, while pushing away embeddings of negative pairs. This approach is sketched in Figure 3 for the case of sheet image snippets, however we stress the procedure is analogous for the audio case. More precisely, the following steps are performed: * Given a sample \(\mathbf{x}\) from the training mini-batch, two stochastic sets of data augmentation transforms are applied to \(\mathbf{x}\) to render two different augmented views of the same sample (a "positive pair"), namely \(\tilde{\mathbf{x}}_{i}\) and \(\tilde{\mathbf{x}}_{j}\). (Our specific data augmentation pipeline for each modality is described in Section 4 below). * Then a CNN encoder \(e(\cdot)\) is used to compute a latent representation \(\mathbf{h}_{i}=e(\tilde{\mathbf{x}}_{i})\) for each view. * An MLP projection head \(p(\cdot)\) maps the encoder latent embedding \(\mathbf{h}_{i}\) to a final space \(\mathbf{z}_{i}=p(\mathbf{h}_{i})\) where the contrastive loss is used. Figure 3. Sketch of our proposed self-supervised contrastive learning architecture, adapted from (Sutskever et al., 2016), for an example of sheet music snippet. Two different views are rendered using augmentation transforms (contrast and rotation, for example), which are fed to a CNN encoder followed by a MLP head, generating a positive pair of embeddings \((\mathbf{z}_{i},\mathbf{z}_{j})\), which should be projected close together. Figure 2. Illustration of audio–sheet retrieval model, adapted from (Sutskever et al., 2017). 
The left and right independent pathways encode sheet music and audio snippets, respectively, by projecting together corresponding cross-modal pairs while maximizing the distance between non-corresponding ones. * Then the normalized-temperature cross-entropy (\(NT-Xent\)) loss function (Nakamoto and Takayama, 2015) is applied and summed over all positive augmented pairs \((\tilde{\mathbf{x}}_{i},\tilde{\mathbf{x}}_{j})\) within the mini-batch: \[\mathcal{L}=\sum_{(i,j)}\log\frac{\exp(\sin(z_{i},z_{j})/\tau)}{\sum_{o=1}^{2N }\mathds{1}_{\lfloor\varphi t\nmid}\exp(\sin(z_{i},z_{o})/\tau)}.\] (2) During training, two different augmented views are rendered from each sample within the mini-batch of size \(N\), yielding a pool of \(2N\) augmented views per mini-batch, over which the above loss function is applied. For each positive pair, all remaining \(2N-2\) samples within the mini-batch are considered negative samples, as indicated by the summation in the denominator of Equation (2). The temperature parameter \(\tau\in\mathbb{R}_{+}\) works similarly to the margin parameter \(\alpha\) in Equation (1), prioritizing poorly embedded samples. In this architecture the MLP projection layer \(p\) is employed only during self-supervised learning. After the model is trained, this layer is discarded and only the encoder \(\varepsilon\) is used as a pre-trained model for a given downstream task, which in our case is audio-sheet music retrieval. As discussed in (Kang et al., 2019), the reason is that empirical results show that applying the contrastive loss over a function \(p\) of the encoder embeddings \(\mathbf{z}_{i}=p(\mathbf{h}_{i})\) during training is beneficial because it improves the quality of learned representations. An important difference between our approach and the method described in (Kang et al., 2019) is that in our setup we have two separate convolutional pathways, one responsible for encoding each modality (see Figure 2). We perform self-supervised contrastive learning separately in each of the modalities, in order to obtain two separate and independent pre-trained encoders. Since the pathways for audio and sheet music are independent, we can simply select the modality we wish to pre-train, and obtain a pre-trained encoder for the given modality. The encoder is then placed in the multi-modal network in Figure 2 and fine-tuned for the audio-sheet music retrieval task. Our CNN encoder follow the setup in (Kang et al., 2019). The encoder architecture is the same in each modality, and consists of a VGG-style network (Vaswani et al., 2017) with eight convolutional layers, each of them followed by a batch normalization layer (Krizhevsky et al., 2014) and exponential linear unit (ELU) (Krizhevsky et al., 2014) activation. A max pooling layer is applied every two consecutive convolutional layers in order to halve the dimensions of the hidden representations. Our projection head \(p\) consists of an MLP with one hidden layer followed by batch normalization and rectified linear unit activation (ReLU) (Bengio et al., 2016), from which the output embedding is L2-normalized and mapped to a 32-dimensional final representation, on which the contrastive loss is calculated. ## 4. Data augmentations In self-supervised learning, one wishes to optimize a model so it can be highly invariant in regards to a set of augmentation transforms. Therefore a proper composition of data augmentation operations is crucial for learning good representations (Kang et al., 2019). 
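As a rough illustration of this principle, the stochastic composition used throughout this section (each transform applied with its own probability and with hyper-parameters re-sampled on every call, as detailed in the following paragraphs) can be sketched as follows; the transforms and probabilities shown are placeholders, not the paper's settings.

```python
import random

def make_pipeline(transforms):
    """transforms: list of (fn, prob); each fn maps a snippet to an augmented one."""
    def augment(x):
        for fn, prob in transforms:
            if random.random() < prob:
                x = fn(x)  # a real transform would re-sample its hyper-parameters here
        return x
    return augment

# Toy usage with placeholder transforms acting on a dummy "snippet" (a single float):
pipeline = make_pipeline([
    (lambda x: x + random.gauss(0.0, 0.1), 0.7),    # stands in for additive noise
    (lambda x: x * random.uniform(0.9, 1.1), 0.5),  # stands in for a resize/gain change
])
snippet = 1.0
view_i, view_j = pipeline(snippet), pipeline(snippet)  # two views forming a positive pair
```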
In our system, an augmented view \(\tilde{\mathbf{x}}_{i}\) is generated by applying a pipeline of \(M\) augmentation transforms on the original sample \(\mathbf{x}\). Each augmentation transform \(t_{m}()\) has an independent probability \(p_{m}\) to be applied to \(\mathbf{x}\). Each time the transform \(t_{m}()\) is selected, its hyper-parameters are stochastically sampled from a pre-defined distribution, which is particular for each transform. In the following we provide details of the augmentations we employed during the self-supervised training of each modality, as well as information about the used datasets. ### Sheet Music Augmentation Transforms Augmentation strategies have proven to be powerful techniques to help machine learning models generalize to unseen data in image tasks (Krizhevsky et al., 2014; Vaswani et al., 2017). In sheet music analysis, augmentation transforms are chosen so that they can emulate document variations and degradations of various types (Kang et al., 2019; Krizhevsky et al., 2014; Krizhevsky et al., 2014). We build on these works and define a set of nine transforms that are applied to the sheet music snippets, which are described as follows. * We shift the snippet horizontally (1) and vertically (2) in relation to its positive pair. The horizontal shift is calculated in a way that positive pairs will share at least 80% of their content, and 75% for the vertical shift. * The snippet is resized (3) to have between 90 and 110% of its original size. * The snippet is rotated (4) to a maximum angle of 8 degrees, counter- or clockwise. * We apply Additive White Gaussian Noise (AWGN) (5) and Gaussian Blur (GB) (6), to simulate noisy documents and poor resolution, respectively. * Additive Perlin Noise (APN) (7) (Krizhevsky et al., 2014) is added to the sample. This transform generates big darker and lighter areas in the image, mimicking quality differences in the image snippet. * Then random small (8) and large (9) elastic deformations (ED) (Vaswani et al., 2017) are applied, generating wave-like distortions to the image, whose strength and smoothing can be tuned. Small EDs are applied on small scales, with the effect of deforming the shapes of smaller symbols and lines. When large EDs are applied, the structure and orientation of bigger music symbols are modified, for example by skewing or bending bar lines and note symbols, and squeezing or elongating note heads. The augmentations are applied in the presented order and we tune the hyper-parameters of each individual transform in a way that a snippet is highly degraded, but still legible. Figure 4 shows four examples of augmented snippet pairs when all nine transforms are stochastically applied to four sheet music snippets. ### Audio Augmentation Transforms Several works have successfully explored data augmentation for several audio and music learning tasks (Krizhevsky et al., 2014; Vaswani et al., 2017; Vaswani et al., 2017; Vaswani et al., 2017). We build on them and in the following define the sequence of eight audio transforms used to augment audio excerpts. * We apply a time shift (1) between the two excerpts of a positive pair. The shift is calculated in a way that corresponding snippets will share at least 70% of their content. * Polarity inversion (2) is applied to the audio excerpt by multiplying its amplitude by \(-1\). * Additive White Gaussian Noise (3) with a signal-to-noise ratio between 5 and 20 dB is added. * A gain change (4) between \(-12\) and \(12\) dB is applied to the signal. 
* We apply a seven-band parametric equalizer (5) in order to adjust the volume of seven different randomly-chosen frequency bands. 1 Footnote 1: [https://iver56.github.io/audiomentations/waveform_transforms/seven_band_parametric_eq/](https://iver56.github.io/audiomentations/waveform_transforms/seven_band_parametric_eq/) * The audio excerpt is stretched in time (6) without modifying its pitch by changing the tempo with a random factor between 0.5 and 1.8. * Time (7) and frequency (8) masks are applying to the audio snippet _a la_ SpecAugment (Zhou et al., 2017). Both time and frequency largest masks correspond to 20% of the snippet duration and frequency range, respectively. The augmentations are applied in the order they were declared above. The transforms 1-5 are applied directly on the waveform snippets, while transforms 6-8 are applied in the frequency domain due to computational benefits. ## 5. Experiments and Results In this section, we report on the experiments conducted to validate our proposed method. We first briefly elaborate on the pre-processing steps, dedicated datasets and training setup. Then we carry out experiments on cross-modal snippet retrieval and piece identification. ### Snippet Preparation In the following, we describe how the snippets are extracted, pre-processed and prepared for training. #### 5.1.1. Sheet Music Snippets Our sheet music images are first re-scaled to a \(1181\times 835\) resolution (pixels) per page. Then \(160\times 200\) snippets are selected in such a way that they comprise musical content, _i.e._ within the systems of the document (groups of two staves, for piano sheet music). When no annotation is available concerning pixel coordinates of note heads and/or system locations (_i.e._, in the raw data for self-supervised learning), we use the Audiveris engine 2 to automatically detect the staff lines as a pre-processing stage. Manual inspections showed that Audiveris is able to properly identify system coordinates in printed piano scores with accuracy of over 99%, therefore it is unlikely that snippets will not exhibit musical content. Examples of sheet music snippets are depicted in Figures 2, 3 and 4 Footnote 2: [https://audiveris.github.io/audiveris/](https://audiveris.github.io/audiveris/) #### 5.1.2. Audio Snippets Our music datasets consist of audio signals with a sampling rate of \(22.05\) kHz. The log-frequency spectrogram of each signal is computed with a resolution of 20 frames per second and minimum and maximum frequencies of 30 Hz and 6 kHz respectively, generating 92 frequency bins per frame. We then cut out 84 frames of audio (approximately 4.2 seconds) to generate a snippet, which has a final shape of \(92\times 84\) (bins \(\times\) frames). Examples of audio log-spectrograms and snippets are shown in Figures 1 and 2. ### Datasets To pre-train the sheet music encoder, we scrape data from the International Music Score Library Project (IMSLP) 3, an online plataform that hosts public domain music scores. We collect 3,485 scanned piano scores relating to 842 music pieces by several composers, which amounts to approximately 7,000 pages of sheet music. From these documents we extract over 700k snippets as described in 5.1 for training and validation. We will provide the IMSLP links to all music scores of our collection in the paper repository 4. Footnote 3: [https://imsl.org/wiki/Main_Page](https://imsl.org/wiki/Main_Page) Footnote 4: [https://github.com/blinded_for_review](https://github.com/blinded_for_review). 
For self-supervised learning of the audio encoder, we use the recordings from MAESTRO (Mae et al., 2017), a public dataset with 1,276 piano recordings comprising around 200 hours of piano music. Since there is no test stage at pre-training, we merge the pre-defined MAESTRO test split into the train set, and generate around 840k audio snippets as described in 5.1 to train and validate the audio encoder. To train the final audio-sheet music network, we use the Multi-Modal Sheet Music Dataset (MSMD) (Mae et al., 2017), which is a database of polyphonic piano music and scores. With over 400 pieces covering over 15 hours of audio, this dataset has fine-grained cross-modal Figure 4. Examples of four pairs of augmented sheet music snippets after all nine transforms were randomly applied. One should note that, even though the excerpts were greatly corrupted, they are still readable. These examples were obtained from the MSMD dataset (Mae et al., 2017). alignments between audio note onsets and sheet music note-head coordinates, which makes it suitable for generating matching audio-sheet music snippets. This dataset is of fully artificial nature: audio recordings are synthesized from MIDI files using FluidSynth 5 and the scores are engraved with LilyPond 6. The matching snippets are extracted in a way that they are centred around the same note event, being the note onset for the audio side and the note-head pixel coordinate for the sheet music side. Footnote 5: [https://www.fluidsynth.org/](https://www.fluidsynth.org/) Footnote 6: [http://lilypond.org/](http://lilypond.org/) In our experiments, we wish to investigate how well pre-training helps to generalize from synthetic to real data. To this end, we evaluate on three datasets: on a (1) fully artificial one, and on datasets consisting (2) partially and (3) entirely of real data. For (1) we use the test split of MSMD and for (2) and (3) we combine the Zeilinger and Magaloff Corpora (Magaloff, 2017) with a collection of commercial recordings and scanned scores that we have access. These data account for more than a thousand pages of sheet music scans with fine-grained mappings to both MIDI files and over 20 hours of classical piano recordings. We then define the following evaluation sets. (2) _RealScores_Synth_: a partially real set, with _scanned_ (real) scores of around 300 pieces aligned to notes of _synthesized_ MIDI recordings. And (3) _RealScores_Rec_: an entirely real set, with _scanned_ (real) scores of around 200 pieces with fine-grained alignments to _real audio_ recordings. ### Training Setup Our learning pipeline is split into two stages: (i) self-supervised learning on each individual modality with a batch size of 256, followed by (ii) cross-modal training on pre-loaded encoders from either or both modalities, with a batch size of 128 pairs, where audio and sheet music snippets are project onto a 32-dimensional space. In both stages we use the Adam optimizer (Kingmaa et al., 2014) and He initialization (Kingmaa et al., 2014) in all convolutional layers. The temperature parameter \(\tau\) and triplet margin \(\alpha\) are set to 0.5 and 0.6, respectively. We set the initial learning rates of (i) and (ii) to 0.001 and 0.0004 respectively. We observe the validation loss during training and halve the learning rate if there are no improvements over 10 consecutive epochs, apply early stopping when halving happens five times, and select the best model among all epochs for testing. 
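For reference, a minimal sketch of the two objectives optimized in stages (i) and (ii), namely the NT-Xent loss of Eq. (2) and the pairwise ranking loss of Eq. (1); embeddings are assumed to be L2-normalized PyTorch tensors, and the batch reduction is simplified relative to the paper.

```python
import torch
import torch.nn.functional as F

def ranking_loss(sheet_emb, audio_emb, margin=0.6):
    # sheet_emb, audio_emb: [N, d], L2-normalized; row n of each modality is a matching pair.
    sim = sheet_emb @ audio_emb.T                    # pairwise cosine similarities [N, N]
    pos = sim.diag().unsqueeze(1)                    # sim(f(x), g(y)) for each query
    hinge = F.relu(margin - pos + sim)               # Eq. (1) hinge against in-batch negatives
    hinge = hinge * (1.0 - torch.eye(sim.shape[0]))  # exclude the matching pair itself
    return hinge.sum()

def nt_xent_loss(z_i, z_j, tau=0.5):
    # z_i, z_j: [N, d], L2-normalized embeddings of two augmented views of the same N samples.
    n = z_i.shape[0]
    z = torch.cat([z_i, z_j], dim=0)                 # [2N, d]
    sim = (z @ z.T) / tau                            # temperature-scaled similarities [2N, 2N]
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float('-inf'))
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])  # index of each view's positive
    return F.cross_entropy(sim, targets)             # Eq. (2), averaged over the 2N views
```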
For the sake of simplicity, we leave the remaining details concerning the architecture of the networks, further learning hyper-parameters, and augmentation probabilities and hyper-parameters, to our repository. 4

### 5.4. Snippet Retrieval Experiments

In this section we evaluate a two-way snippet retrieval task: given a query excerpt, retrieve the corresponding snippet in the other modality. This is done by first embedding the query excerpt and all snippets of the target modality, and then selecting the query's nearest neighbor in the embedding space as the best match, based on their pairwise cosine distance. For each of the three evaluation datasets introduced in Section 5.2, we select a pool of 10,000 audio-sheet music snippet pairs for evaluation. We perform the retrieval task in both search directions: audio-to-sheet music (A2S) and sheet music-to-audio (S2A).

As evaluation metrics we compute the _Recall@k_ (R@k), _Mean Reciprocal Rank_ (MRR) and the _Median Rank_ (MR) for each experiment. The R@k measures the ratio of queries for which the correct match is retrieved within the top \(k\) results. The MRR is defined as the average value of the reciprocal rank over all queries, with the rank being the position of the correct match in the distance-ordered ranked list of candidates. MR is the median position of the correct match in the ranked list.

We perform snippet retrieval with the state-of-the-art method (Krizhevsky et al., 2014), which will be denoted as the baseline _BL_, and compare it with all possible combinations of the self-supervised pre-training we propose. Since in the cross-modal network the two convolutional pathways responsible for encoding each modality are independent, we can load either or both encoders with parameters that were pre-learned before training. We then define the following models:

* _BL+A_: the audio encoder is pre-trained
* _BL+S_: the sheet music encoder is pre-trained
* _BL+A+S_: both audio and sheet music encoders are pre-trained

All three are modified versions of the baseline. Table 1 presents the snippet retrieval results of the four models defined above, evaluated on both search directions A2S and S2A. In the first section (I) we examine the completely synthetic set defined as the MSMD test split. Then in sections (II) and (III) we consider the partially and completely real scenarios, where audio excerpts are extracted from synthetic and real recordings, respectively, and sheet music snippets are derived from scans of real scores in both setups. We first observe the performance of the current state-of-the-art model (_BL_) dropping sharply when moving from artificial to real data.
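For concreteness, the following is a minimal NumPy sketch of how R@k, MRR and MR can be computed from a pool of paired query/target snippet embeddings. The function name and interface are ours and are not part of the original evaluation code.

```python
import numpy as np

def retrieval_metrics(query_emb, target_emb, ks=(1, 25)):
    """R@k, MRR and median rank; row i of query_emb and target_emb form a matching pair."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    t = target_emb / np.linalg.norm(target_emb, axis=1, keepdims=True)
    dist = 1.0 - q @ t.T                                  # pairwise cosine distances
    order = np.argsort(dist, axis=1)                      # candidates sorted by distance per query
    true_idx = np.arange(len(q))[:, None]
    ranks = np.argmax(order == true_idx, axis=1) + 1      # 1-based rank of the correct match
    metrics = {f"R@{k}": float(np.mean(ranks <= k)) for k in ks}
    metrics["MRR"] = float(np.mean(1.0 / ranks))
    metrics["MR"] = float(np.median(ranks))
    return metrics

rng = np.random.default_rng(0)
emb_a = rng.standard_normal((1000, 32))
emb_b = emb_a + 0.1 * rng.standard_normal((1000, 32))     # noisy "matching" embeddings
print(retrieval_metrics(emb_a, emb_b))
```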
In the fully synthetic set (I), the baseline _BL_ achieves MRRs of 0.653 and 0.704 in the directions A2S and S2A, respectively, correctly retrieving approximately 60% of the snippets as the best match in the S2A task.

\begin{table}
\begin{tabular}{l c c c c|c c c c}
 & \multicolumn{4}{c|}{**Audio-to-Score (A2S)**} & \multicolumn{4}{c}{**Score-to-Audio (S2A)**} \\ \cline{2-9}
 & **R@1** & **R@25** & **MRR** & **MR** & **R@1** & **R@25** & **MRR** & **MR** \\ \hline \hline
\multicolumn{9}{l}{I MSMD (Fully synthetic)} \\ \hline
BL & 0.54 & 0.91 & 0.653 & 1 & 0.60 & 0.94 & 0.704 & 1 \\
BL+A & **0.59** & **0.93** & **0.699** & **1** & **0.61** & **0.95** & **0.723** & **1** \\
BL+S & 0.56 & 0.92 & 0.676 & 1 & 0.61 & 0.94 & 0.717 & 1 \\
BL+A+S & 0.57 & 0.93 & 0.687 & 1 & 0.60 & 0.94 & 0.718 & 1 \\ \hline \hline
\multicolumn{9}{l}{II RealScores\_Synth (Sheet music scans and synthetic recordings)} \\ \hline
BL & 0.28 & 0.67 & 0.375 & 7 & 0.36 & 0.77 & 0.467 & 3 \\
BL+A & 0.37 & 0.78 & 0.478 & 3 & 0.43 & 0.82 & 0.537 & 2 \\
BL+S & 0.34 & 0.75 & 0.447 & 4 & 0.43 & 0.84 & 0.544 & 2 \\
BL+A+S & **0.37** & **0.79** & **0.481** & **3** & **0.44** & **0.84** & **0.548** & **2** \\ \hline \hline
\multicolumn{9}{l}{III RealScores\_Rec (Sheet music scans and real recordings)} \\ \hline
BL & 0.10 & 0.36 & 0.156 & 76 & 0.14 & 0.47 & 0.216 & 33 \\
BL+A & 0.13 & 0.44 & 0.200 & 41 & 0.17 & 0.55 & 0.261 & 18 \\
BL+S & 0.12 & 0.42 & 0.192 & 47 & 0.18 & 0.54 & 0.259 & 18 \\
BL+A+S & **0.15** & **0.48** & **0.226** & **29** & **0.18** & **0.54** & **0.266** & **18** \\
\end{tabular}
\end{table}
Table 1. Comparison of snippet retrieval results in both query directions on three types of datasets: (I) fully synthetic, (II) partially real and (III) entirely real. Boldfaced rows represent the best performing model per dataset.

The MRR drops by at least 23 percentage points in either direction as we move to (II), and by at least 48 percentage points at (III). The most extreme drop occurs at (III) in the A2S task: only 10% of the score snippets are on rank 1 (R@1 = 0.10). We additionally note that the retrieval quality of the S2A search direction is better than that of A2S for all evaluation metrics.

Our proposed models outperform the baseline in all scenarios for all evaluation metrics, indicating that self-supervised pre-training of either modality is beneficial for the problem we attempt to solve. We derive the following observations and discussions:

* The most significant improvements were observed in configurations with real music data, namely (II) and (III). We attribute the modest improvements on (I) to the fact that the synthesized data of MSMD do not exhibit the degradations simulated by the augmentation transforms described in Section 4, for either scores or recordings. It was therefore not expected that our pre-training strategy would considerably benefit retrieval on artificial data.
* Pre-training both audio and score encoders (_BL+A+S_) generated the best retrieval metrics in scenarios with real data, with the largest improvements being observed in (II), where the MRR of the A2S and S2A tasks increased by roughly 10 and 8 percentage points, respectively. Moreover, we did not observe a substantial compound effect of pre-training both encoders (_BL+A+S_) compared to pre-training the individual encoders (_BL+A_ and _BL+S_): the additional improvements were marginal.
* In addition to the absolute improvements, the performance drop between evaluations on synthesized and real datasets was reduced: the MRR gap when moving from (I) to (II) and to (III) shrank by 7.2 and 2.6 percentage points for the A2S direction, respectively; in the S2A direction these drops were reduced by 6.7 and 3.6 percentage points, respectively.
* The best models also reduced the overall performance gap between the retrieval directions A2S and S2A, in all dataset configurations.

In an additional experiment, we take a closer look at how well the shared space maps matching snippets close together. Figure 5 depicts the distribution of pairwise cosine distances between 2,000 snippet pairs across each test dataset. Analyzing these distributions jointly with Table 1, we observe that models which produce smaller distances between matching fragments achieve better snippet retrieval quality. Moreover, we see that snippet pairs from real data are on average mapped farther apart than those from synthesized music data. In all experimental scenarios, our pre-trained models were able to project corresponding snippets in the embedding space closer together than the state-of-the-art method. This points to the potential of self-supervised pre-training as a key component towards more powerful joint embedding spaces.

### 5.5. Cross-modal Piece Identification Experiments

In this set of experiments, we aggregate snippet embeddings generated by our models to perform cross-modal piece identification: given a full recording as a query, retrieve the corresponding score from a collection; or given a printed score, find an appropriate music recording within a database. We evaluate this task as a proof-of-concept, to validate our proposed methods in a higher-level realistic retrieval scenario. As underlined in Section 1, piece identification is a key component of many audio-score retrieval systems, so we believe this evaluation can give us insights towards more robust systems.

The piece identification is done as in (Han et al., 2017), with an approach that we will denote as _vote-based_: a matching procedure purely based on snippet retrieval and indexing. Let \(\mathcal{D}\) be a collection of \(L\) documents and \(Q\) be a document query in the target and search modalities, respectively. Each document \(D_{i}\in\mathcal{D}\) is sequentially cut into snippets, which are embedded via their respective pathway network of Figure 2, generating a set of embeddings \(\{d_{1}^{i},d_{2}^{i},\ldots,d_{M_{i}}^{i}\}\), where each embedding \(d_{j}^{i}\) is indexed to its originating document \(D_{i}\). We define hop sizes of 50 pixels and 10 frames (roughly 0.5 sec) between consecutive sheet music and audio snippets, respectively. The document query is segmented into 100 equally-spaced excerpts, which are passed through the counterpart pathway of the model, producing a set of query embeddings \(\{q_{1},q_{2},\ldots,q_{100}\}\). Then snippet retrieval is carried out as in Section 5.4 for each query embedding \(q_{j}\), with the difference that now the top 25 nearest neighbors are retrieved per query embedding among all embeddings from the collection \(\mathcal{D}\). Each nearest neighbor votes for its originating document, and a vote-based ranked list is created by aggregating all nearest neighbors from all 100 query embeddings. The document achieving the highest count among all 2,500 votes is selected as the best match.
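The vote-based aggregation itself is simple to express in code; below is a minimal NumPy sketch of it, assuming the snippet embeddings have already been computed. Function and variable names are ours.

```python
import numpy as np
from collections import Counter

def vote_based_ranking(query_embs, db_embs, db_doc_ids, top_k=25):
    """Rank documents by aggregating nearest-neighbour votes over all query excerpts."""
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    d = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    dist = 1.0 - q @ d.T                          # cosine distances: (n_queries, n_db_snippets)
    nn = np.argsort(dist, axis=1)[:, :top_k]      # top-k database snippet indices per excerpt
    votes = Counter(db_doc_ids[i] for i in nn.ravel())
    return votes.most_common()                    # [(doc_id, n_votes), ...], best match first

# With 100 query excerpts and top_k=25, the ranking aggregates 2,500 votes in total.
```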
In our piece identification experiments we evaluate on pieces of the same datasets as in Section 5.4: (I) the MSMD test split has 100 pairs of synthesized scores and their respective recordings; (II) has 314 pieces with their corresponding scanned sheet music and synthesized recordings; and (III) has 198 pairs of scanned sheet music and real recordings.

Figure 5. Distributions of pairwise cosine distances between corresponding pairs of audio-sheet music snippets, with 2,000 pairs randomly drawn from each evaluation set. Outliers are not directly visualized in order not to distort the plots. The vertical lines highlight the medians of the distribution of the baseline model _BL_ for each dataset.

The cross-modal piece identification results are summarized in Table 2. We evaluate the same scenarios and models as for the two-way snippet retrieval task, in both search directions A2S and S2A. Moreover, we include in the table (in parentheses) the absolute number of pieces retrieved for each recall value.

The piece identification results exhibit a similar trend as the previous experiments on snippet retrieval. The performance of the baseline model _BL_ again declines abruptly as more realistic scenarios are evaluated. The mean reciprocal rank drops by around 59 and 34 percentage points when moving from (I) to the most realistic case (III), for the retrieval directions A2S and S2A, respectively. The worst case occurs at (III) for the A2S direction, where only approximately 11% of the scores (22 items among 198) are correctly retrieved as the best match. We derive the following discussions and observations concerning the performance of our proposed methods:

* In configurations with real data, our methods outperformed the baseline _BL_ in all evaluation metrics by a significant margin, with _BL+A+S_ being the best model among them. For example, in the fully real scenario (III) the MRR of _BL+A+S_ in the A2S direction increased from 0.256 to 0.535, a jump of more than 100% of the former value; in the S2A direction, only 6% of the recordings are now not correctly retrieved among the best ten matches.
* The compound effect of pre-training both encoders (_BL+A+S_) compared to the individual encoders (_BL+A_ and _BL+S_) was stronger than in the two-way snippet retrieval. In the (III)-(_BL+A+S_)-(A2S) configuration the MRR improvement accounted for more than the sum of the individual improvements observed for models _BL+A_ and _BL+S_.
* In addition to the dataset-wise improvements, the performance gaps between synthesized and real datasets, and between the A2S and S2A directions, were significantly reduced.
* Overall, the boost in retrieval quality produced by our proposed models is significantly higher for cross-modal piece identification than for snippet retrieval. This indicates that a moderate performance gain in short fragment-level music retrieval has great potential to escalate to larger improvements in higher-level retrieval problems if a proper post-processing method aggregating those fragments is employed.

To get a better understanding of the matching quality of our models in piece identification scenarios, we discuss the _separation indicator_ introduced in [42]. This factor measures how distinct the relevant document is from the other items during the retrieval process. Given the vote-based ranked list created during the identification procedure of query \(Q\), its counterpart document is retrieved at rank \(r\).
Defining \(\delta_{D_{i}}\) as the number of votes received by the document ranked at the \(i\)-th position, the separation indicator \(\rho\in\mathbb{R}_{+}\) is defined as:

\[\rho=\begin{cases}\delta_{D_{2}}/\delta_{D_{1}}&\text{if rank }r=1,\\ \delta_{D_{1}}/\delta_{D_{r}}&\text{otherwise.}\end{cases}\tag{3}\]

In this metric, indicators below 1 correspond to a correct match, with lower values indicating better matching quality. A \(\rho>1\) implies a wrong detection; the larger its value, the smaller the number of votes received by the correct document in comparison with the top match.

\begin{table}
\begin{tabular}{l c c c c c|c c c c}
 & & \multicolumn{4}{c|}{**Audio-to-Score (A2S)**} & \multicolumn{4}{c}{**Score-to-Audio (S2A)**} \\ \cline{3-10}
 & \# & **R@1** & **R@10** & **\textgreater{}R@10** & **MRR** & **R@1** & **R@10** & **\textgreater{}R@10** & **MRR** \\ \hline \hline
\multicolumn{10}{l}{I MSMD (Fully synthetic)} \\ \hline
BL & 100 & 0.76 (76) & 0.98 (98) & 0.02 (2) & 0.846 & 0.87 (87) & 1.00 (100) & 0.00 (0) & 0.927 \\
BL+A & 100 & 0.85 (85) & 0.99 (99) & 0.01 (1) & 0.910 & 0.81 (81) & 1.00 (100) & 0.00 (0) & 0.896 \\
BL+S & 100 & 0.84 (84) & 1.00 (100) & 0.00 (0) & 0.898 & 0.87 (87) & 1.00 (100) & 0.00 (0) & 0.928 \\
BL+A+S & 100 & **0.87 (87)** & **1.00 (100)** & **0.00 (0)** & **0.918** & **0.93 (93)** & **1.00 (100)** & **0.00 (0)** & **0.961** \\ \hline \hline
\multicolumn{10}{l}{II RealScores\_Synth (Sheet music scans and synthetic recordings)} \\ \hline
BL & 314 & 0.49 (154) & 0.84 (265) & 0.16 (49) & 0.609 & 0.65 (203) & 0.90 (282) & 0.10 (32) & 0.734 \\
BL+A & 314 & 0.71 (223) & 0.94 (294) & 0.06 (20) & 0.792 & 0.82 (256) & 0.98 (307) & 0.02 (7) & 0.874 \\
BL+S & 314 & 0.70 (219) & 0.93 (291) & 0.07 (23) & 0.781 & 0.82 (256) & 0.97 (306) & 0.03 (8) & 0.871 \\
BL+A+S & 314 & **0.80 (250)** & **0.96 (302)** & **0.04 (12)** & **0.857** & **0.88 (277)** & **0.98 (308)** & **0.02 (6)** & **0.919** \\ \hline \hline
\multicolumn{10}{l}{III RealScores\_Rec (Sheet music scans and real recordings)} \\ \hline
BL & 198 & 0.11 (22) & 0.57 (113) & 0.43 (85) & 0.256 & 0.48 (95) & 0.79 (156) & 0.21 (42) & 0.587 \\
BL+A & 198 & 0.21 (42) & 0.69 (136) & 0.31 (62) & 0.361 & 0.62 (122) & 0.87 (173) & 0.13 (25) & 0.714 \\
BL+S & 198 & 0.22 (44) & 0.69 (137) & 0.31 (61) & 0.375 & 0.63 (125) & 0.88 (175) & 0.12 (23) & 0.721 \\
BL+A+S & 198 & **0.39 (78)** & **0.81 (161)** & **0.19 (37)** & **0.535** & **0.72 (143)** & **0.94 (187)** & **0.06 (11)** & **0.795** \\
\end{tabular}
\end{table}
Table 2. Comparison of audio–sheet music piece identification results in both query directions on three types of datasets: (I) fully synthetic, (II) partially real and (III) entirely real. Boldfaced rows represent the best performing model per dataset.

Figure 6 visualizes the distribution of the separation indicators obtained when performing cross-modal piece identification on the datasets with real music data. In this experiment we reduce the number of documents of each dataset to 100 pairs of audio recordings and scanned scores. A joint analysis with Table 2 reveals that, overall, the models with better piece identification results also exhibit better matching quality statistics. Noteworthy is the poor matching quality of the (III)-A2S setup, the most realistic case in the audio-score search direction: the distributions of all models are strongly concentrated above \(\rho=1\).
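For reference, the following is a minimal sketch of how \(\rho\) can be computed from the vote-based ranked list of Section 5.5, represented as (document, vote count) pairs sorted by decreasing vote count. The function name is ours, and the correct document is assumed to appear somewhere in the list.

```python
def separation_indicator(ranked_votes, correct_doc):
    """ranked_votes: [(doc_id, n_votes), ...] sorted by decreasing vote count."""
    docs = [doc for doc, _ in ranked_votes]
    votes = [v for _, v in ranked_votes]
    r = docs.index(correct_doc)              # 0-based rank of the correct document
    if r == 0:
        return votes[1] / votes[0]           # below 1: correct match, lower is better
    return votes[0] / votes[r]               # above 1: wrong detection, higher is worse

# Example: the correct piece is ranked first with 1,400 of the 2,500 votes.
print(separation_indicator([("piece_A", 1400), ("piece_B", 600), ("piece_C", 500)], "piece_A"))  # ~0.43
```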
Our proposed methods generated overall smaller separation indicators for all audio-sheet music identification setups, indicating that self-supervised learning is a promising direction for reliable audio-score retrieval systems.

## 6. Conclusion

In this work we designed a learning framework to alleviate labeled data scarcity when training networks to solve audio-score retrieval tasks. We proposed multi-modal self-supervised contrastive learning on short excerpts of sheet music images and audio recordings as a first pre-training step. In this framework, the network responsible for encoding each modality can be independently pre-trained and fine-tuned, having the potential to adapt to different tasks that require different fine-tuning configurations. For that we defined a pipeline of augmentation transforms specifically for audio and sheet music snippets, and employed publicly available music data to pre-train our networks. Experiments on two-way snippet retrieval and subsequently on cross-modal piece identification over diverse datasets showed that our proposed framework outperforms current state-of-the-art methods, especially in scenarios composed partially or entirely of real music data. Moreover, the self-supervised approach helped reduce the performance gap between synthetic and real data, which is one of the main challenges of audio-score retrieval problems. Given the improved retrieval performance in realistic configurations, in addition to the large amounts of easily accessible, publicly available music data, we believe this is a promising research direction for the design of robust multi-modal music search and retrieval systems.

###### Acknowledgements. This work is supported by the European Research Council (ERC) under the EU's Horizon 2020 research and innovation programme, grant agreement No. 101019375 ("Whither Music?"), and the Federal State of Upper Austria (LIT AI Lab).
2306.17811
Safe Edges: A Study of Triangulation in Fill-in and Tree-Width Problems
This paper considers two well-studied problems, \textsc{Minimum Fill-In} (\textsc{Min Fill-In}) and \textsc{Treewidth}. Since both problems are \textsf{NP}-hard, various reduction rules simplifying an input graph have been intensively studied to better understand the structural properties relevant to these problems. Bodlaender et al. introduced the concept of a safe edge that is included in a solution of the \textsc{Minimum Fill-In} problem and showed some initial results. In this paper, we extend their result and prove a new condition for an edge set to be safe. This in turn helps us to construct a novel reduction tool for \textsc{Min Fill-In} that we use to answer other questions related to the problem. In this paper, we also study another interesting research question: whether there exists a triangulation that answers both problems \textsc{Min Fill-In} and \textsc{Treewidth}. To formalise our study, we introduce a new parameter reflecting the distance between triangulations optimising the two problems. We present some initial results regarding this parameter and study graph classes where both problems can be solved with one triangulation.
Mani Ghahremani, Janka Chlebikova
2023-06-30T17:21:51Z
http://arxiv.org/abs/2306.17811v1
# Safe Edges: A Study of Triangulation in Fill-in and Tree-Width Problems

###### Abstract

This paper considers two well-studied problems Minimum Fill-In (Min Fill-In) and Treewidth. Since both problems are \(\mathsf{NP}\)-hard, various reduction rules simplifying an input graph have been intensively studied to better understand the structural properties relevant to these problems. Bodlaender et al. [1] introduced the concept of a safe edge that is included in a solution of the Minimum Fill-In problem and showed some initial results. In this paper, we extend their result and prove a new condition for an edge set to be safe. This in turn helps us to construct a novel reduction tool for Min Fill-In that we use to answer other questions related to the problem. In this paper, we also study another interesting research question: whether there exists a triangulation that answers both problems Min Fill-In and Treewidth. To formalise our study, we introduce a new parameter reflecting the distance between triangulations optimising both problems. We present some initial results regarding this parameter and study graph classes where both problems can be solved with one triangulation.

keywords: Fill-in, Chordal Triangulation, Treewidth, Minimum Triangulation, Elimination Ordering

MSC: [2020] 05C75, 05C85

## 1 Introduction

The minimum fill-in and treewidth of graphs are well-studied graph parameters with many practical applications [2]. The minimum fill-in of a graph \(G\), \(\textit{mfi}(G)\), is the minimum number of edges whose addition triangulates it. The treewidth parameter, \(\textit{tw}(G)\), is equal to the minimum clique number over all triangulations of the input graph, minus one. This naturally leads to the interesting question of whether there exists a triangulation that minimizes both parameters at the same time. Clearly, one such triangulation answers both NP-hard problems Minimum Fill-In and Treewidth simultaneously. To formalise our study, we introduce a new parameter \(\tau\).

**Definition 1**.: _Let \(G\) be a graph and \(H\) be a minimum triangulation of \(G\) chosen so that its clique size is minimized. Define \(\tau(G)=\textit{tw}(H)-\textit{tw}(G)\)._

Knowing that \(\tau(G)=0\) for a graph \(G\) implies that there exists a minimum triangulation of \(G\) among those determining the treewidth and vice versa. Conversely, distinct graph classes have been constructed where one triangulation cannot solve both problems [3; 4; 5]. In such graphs the \(\tau\) parameter can be arbitrarily large; see, for example, the construction from [6] that explicitly uses the parameter \(\tau\).

To study this parameter, we design methods that can be useful in determining the value of \(\tau\) as well as the minimum fill-in of graphs. In Section 2 we provide the necessary definitions and some properties of elimination orderings, as well as a reduction rule that can be useful for the Minimum Fill-In problem. Next, in Section 3, we extend the notion of safe edges while constructing a minimum triangulation of a graph, a concept that was introduced in [1]. In Section 4, we present an initial study of the \(\tau\) parameter on graphs with low treewidth and graphs with the same value of treewidth and vertex connectivity.

## 2 Preliminaries

The terminology used is consistent with textbooks such as [7; 8]. We assume all graphs \(G=(V,E)\) to be finite, simple, connected, and undirected. For a vertex \(v\in V\), the neighbourhood of \(v\), \(N(v)\), refers to the set of vertices adjacent to \(v\).
The closed neighbourhood of \(v\) is \(N[v]=N(v)\cup\{v\}\) and the degree of \(v\) is \(\textit{deg}(v)=|N(v)|\). For a vertex set \(X\subseteq V\), we use \(G[X]\) to denote the graph induced on \(X\). A vertex \(v\in V\) is _universal_ in \(G\) if \(N(v)=V\setminus\{v\}\). Given a graph \(G\) and a vertex set \(X\subseteq V\), \(G-X\) refers to the graph obtained from \(G\) after the _removal_ of \(X\), and we use \(G-v\) as a shorthand for \(G-\{v\}\) for a vertex \(v\in V\). The set of _missing edges_, or _fill edges_, of a vertex set \(X\subseteq V\) is defined as \(\textit{fill}(X)=\{uv|u,v\in X,uv\notin E\}\). For a set of missing edges \(F\) we refer to the graph obtained by _addition_ of \(F\) to \(G\) as \(G\oplus F\), which is a shorthand for the graph \((V,E\cup F)\). Given a vertex \(v\in V\), _elimination_ of \(v\) equates to the addition of the missing edges in the neighbourhood of \(v\), \(\textit{fill}(N(v))\), to \(G\) and then the removal of \(v\) in the resulting graph, formally \((G-v)\oplus\textit{fill}(N(v))\).

A vertex set \(Y\subseteq V\) is an _almost clique_ in \(G\) if there exists a vertex \(x\in Y\) such that \(Y\setminus\{x\}\) is a clique. A vertex is called _(almost) simplicial_ if its neighbourhood is an (almost) clique. A _chord_ is an edge connecting two non-consecutive vertices of a cycle, and a graph is _chordal_ iff every cycle with at least four vertices has a chord. Given a graph \(G\), a chordal supergraph \(H=(V,E\cup F)\) is called a _triangulation_ of \(G\), where \(F\) is a set of added chords. A triangulation \(H^{\prime}=(V,E\cup F^{\prime})\) is a _minimal triangulation_ of \(G\) if for every proper subset of edges \(F^{\prime\prime}\subset F^{\prime}\), the graph \(H^{\prime\prime}=(V,E\cup F^{\prime\prime})\) is not chordal. Similarly, a triangulation \(H^{*}=(V,E\cup F^{*})\) is a _minimum triangulation_ if \(|F^{*}|\) is the minimum number of fill edges that triangulate \(G\). The _minimum fill-in_ parameter of \(G\) is defined as \(\textit{mfi}(G)=|F^{*}|\). Given the graph \(G\), the Minimum Fill-In problem is answered by determining the value of \(\textit{mfi}(G)\). The Treewidth problem can be answered by finding a triangulation \(H\) of the input graph \(G\) whose clique size, \(\omega(H)\), is minimized. Then the treewidth of the graph, \(\textit{tw}(G)\), is defined as \(\textit{tw}(G)=\omega(H)-1\). However, the treewidth parameter can be defined in many ways; we recommend [9] for a complete introduction.

The _vertex-connectivity_ (or simply connectivity) of \(G\), \(\kappa(G)\), is the smallest number of vertices whose removal disconnects \(G\). As an exception, the connectivity of a complete graph on \(k\) vertices is defined as \(\kappa(K_{k})=k-1\), for \(k\geq 1\). Given two non-adjacent vertices \(a,b\in V\), a vertex set \(S\subseteq V\setminus\{a,b\}\) is an \(a,b\)-separator in \(G\), or a separator for short, if \(a\) and \(b\) are contained in different components of \(G-S\). For a separator \(S\subset V\), we use \(\mathcal{C}(S)\) to refer to the set of connected components of \(G-S\). An \(a,b\)-separator \(S\) is a _minimal_ separator in \(G\) if no proper subset \(S^{\prime}\subset S\) is also an \(a,b\)-separator. A (minimal) separator \(S\) in \(G\) is a clique (minimal) separator if \(S\) is a clique in \(G\).
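Several of these notions are easy to experiment with on small instances. Below is a minimal NetworkX sketch, with helper names that are ours, of the fill edges of a vertex set, the elimination of a vertex, and the (almost) simplicial tests used throughout this paper.

```python
import itertools
import networkx as nx

def fill_edges(G, X):
    """fill(X): the edges missing among the vertex set X."""
    return [(a, b) for a, b in itertools.combinations(X, 2) if not G.has_edge(a, b)]

def eliminate(G, v):
    """Eliminate v: make N(v) a clique and remove v, i.e. (G - v) + fill(N(v))."""
    H = G.copy()
    H.add_edges_from(fill_edges(H, list(H[v])))
    H.remove_node(v)
    return H

def is_simplicial(G, v):
    return not fill_edges(G, list(G[v]))

def is_almost_simplicial(G, v):
    N = list(G[v])
    return any(not fill_edges(G, [u for u in N if u != x]) for x in N)

# Example: in the 4-cycle 0-1-2-3-0 no vertex is simplicial, but every vertex is almost
# simplicial; eliminating vertex 0 adds the chord {1, 3} and the remaining graph is chordal.
C4 = nx.cycle_graph(4)
assert not is_simplicial(C4, 0) and is_almost_simplicial(C4, 0)
assert nx.is_chordal(eliminate(C4, 0))
```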
The following is a well-known property of clique minimal separators (discussed in [10]) that is essential to prove some of our later findings in this paper.

**Theorem 1**.: _Let \(G=(V,E)\) be a graph and \(S\) a clique minimal separator in \(G\). Then:_

\[\text{mfi}(G)=\sum\nolimits_{C\in\mathcal{C}(S)}\text{mfi}(G[S\cup V_{C}])\]

An _elimination ordering_ \(\alpha\) over a graph \(G\) is a bijection \(\alpha:\{1,\ldots,|V|\}\to V\). It is shown in [11] that any given elimination ordering \(\alpha\) over a graph \(G\) can be used to define a triangulation \(G_{\alpha}^{+}\) as outlined below:

**Definition 2**.: _Given an elimination ordering \(\alpha\) over the graph \(G=(V,E)\) we define the supergraph \(G_{\alpha}^{+}=(V,E\cup F)\) using the sequence of graphs \(G_{0},\ldots,G_{|V|}\):_

* _Let_ \(G_{0}=G\)_._
* _For each step_ \(i\in\{1,\ldots,|V|\}\)_: let_ \(\alpha(i)\) _be a vertex of_ \(G_{i-1}\) _and denote_ \(F_{i}=\text{fill}_{G_{i-1}}\left(\,N_{G_{i-1}}\left(\alpha(i)\right)\,\right)\)_. Then define the graph obtained from_ \(G_{i-1}\) _after eliminating_ \(\alpha(i)\) _as_ \(G_{i}=(G_{i-1}\oplus F_{i})-\alpha(i)\)_._

_Finally, define \(F=\bigcup_{i\in\{1,\ldots,|V|\}}F_{i}\)._

Given a graph \(G=(V,E)\) and an elimination ordering \(\alpha\) over \(G\), we write \(G_{i}^{\alpha}\) to refer to the graph obtained at the step \(i\in\{1,\ldots,|V|\}\) of \(\alpha\). For a vertex \(v\in V\), we define \(\textit{madj}_{\alpha}^{+}(v)=\{u|u\in N_{G_{\alpha}^{+}}(v),\alpha^{-1}(u)>\alpha^{-1}(v)\}\). Following Definition 2, we note that \(\textit{madj}_{\alpha}^{+}\left(\alpha(i)\right)=N_{G_{i-1}}\left(\alpha(i)\right)\) for any step \(i\in\{1,\ldots,|V|\}\).

An elimination ordering \(\alpha\) over the graph \(G\) is a _perfect elimination ordering_, peo for short, if for every step \(i\in\{1,\ldots,|V|\}\), \(\alpha(i)\) is a simplicial vertex in \(G_{i-1}\), in which case \(G_{\alpha}^{+}=G\). An elimination ordering \(\alpha\) over \(G\) is _minimal_ if \(G_{\alpha}^{+}\) is a minimal triangulation of \(G\). In [12] it has been shown that for every minimal triangulation \(H\) of the graph \(G\), there exists a minimal elimination ordering \(\alpha\) over \(G\) where \(G_{\alpha}^{+}=H\). Since every minimum triangulation is also a minimal triangulation, the Minimum Fill-In problem over the graph \(G\) can be answered using an elimination ordering \(\alpha\) which constructs a minimum triangulation \(G_{\alpha}^{+}\). Therefore, if \(\alpha\) is a _minimum elimination ordering_, then \(\sum_{v\in V}|\textit{madj}_{\alpha}^{+}(v)|=\textit{mfi}(G)+|E|\). Similarly, there must exist an elimination ordering \(\beta\) such that \(\max_{v\in V}|\textit{madj}_{\beta}^{+}(v)|=\textit{tw}(G_{\beta}^{+})=\textit{tw}(G)\) solving the Treewidth problem. Furthermore, if there exists a minimum elimination ordering \(\gamma\) over the graph \(G\) such that \(\max_{v\in V}|\textit{madj}_{\gamma}^{+}(v)|=\textit{tw}(G)\) we have that \(\tau(G)=0\).

The following lemma demonstrates an easy-to-prove property of elimination orderings:

**Lemma 1**.: _Let \(G=(V,E)\) be a graph, \(\alpha\) an elimination ordering and let \(v\in V\) be a vertex eliminated at step \(i\in\{1,\ldots,|V|\}\). Then \(N_{G_{i-1}}(v)=N_{G}(v)\) if \(\alpha(j)\notin N_{G}(v)\) for every step \(j\), \(j<i\)._

Next, we present a lemma discussing the vertex connectivity of graphs in regard to the operation of vertex elimination:

**Lemma 2**.: _Let \(G=(V,E)\) be a connected graph with vertex connectivity \(k\) and \(\alpha\) any elimination ordering over \(G\)._
_Then the following statements are true (where \(G_{0}=G\) and \(G_{i}\) is the graph obtained at step \(i\) of the elimination ordering \(\alpha\), as defined in Definition 2):_

(i) _For every step_ \(i\in\{1,\ldots,|V|-k-1\}\) _of the elimination ordering_ \(\alpha\)_,_ \(|\textit{madj}_{\alpha}^{+}(\alpha(i))|\geq\kappa(G_{i-1})\geq k\)_._

(ii) \(G_{|V|-k-1}\) _is a complete graph, hence_ \(\sum_{i\in\{|V|-k,\ldots,|V|\}}|\textit{madj}_{\alpha}^{+}(\alpha(i))|=\frac{k(k+1)}{2}\)_. Consequently,_ \(|E|+\textit{mfi}(G)\geq k(|V|-k)+\frac{k(k-1)}{2}\)_._

Proof.: In order to prove statement (i), it suffices to show that \(\kappa(G_{i})\geq\kappa(G_{i-1})\) for any step \(i\in\{1,\ldots,|V|-k-1\}\). Since \(G\) is connected, we have \(k=\kappa(G_{0})\geq 1\). Suppose for a contradiction that there exists a step \(i\in\{1,\ldots,|V|-k-1\}\) in which \(\kappa(G_{i})<\kappa(G_{i-1})\), and let \(v=\alpha(i)\) be the vertex eliminated at that step. Then we can fix a subset \(S\subset V_{G_{i}}\) where \(\kappa(G_{i})\leq|S|<\kappa(G_{i-1})\) and \(G_{i}-S\) is not connected. Consequently, we can fix two vertices \(u_{1},u_{2}\in V_{G_{i}}\) such that there are no \(u_{1},u_{2}\)-paths in \(G_{i}-S\). Since \(|S|<\kappa(G_{i-1})\), we can let \(P\) be the shortest \(u_{1},u_{2}\)-path in \(G_{i-1}-S\). Clearly, \(P\) must contain the vertex \(v\) as an internal vertex, otherwise \(P\) is a \(u_{1},u_{2}\)-path in \(G_{i}-S\). Then \((y,v,w)\) is a subpath of \(P\) for two vertices \(y,w\in N_{G_{i-1}}(v)\) (where \(u_{1},u_{2}\) are not necessarily different from \(y,w\)). Since \(N_{G_{i-1}}(v)\) becomes a clique in \(G_{i}\) after the elimination of \(v=\alpha(i)\), we can define \(P^{\prime}\) as the path obtained from \(P\) by replacing the subpath \((y,v,w)\) with the edge \(yw\). Then \(P^{\prime}\) is by definition a \(u_{1},u_{2}\)-path in \(G_{i}-S\), which results in a contradiction proving statement (i). Combining this with the fact that \(G_{|V|-k-1}\) is a graph with \(k+1\) vertices and connectivity \(k\) proves statement (ii).

## 3 Edges That Can Be Safely Added

When constructing a minimum triangulation of a graph in [1], Bodlaender et al. provided a criterion for a set of missing edges to be safely added. In this section, we extend their results by providing another characteristic of such edges. Our result will be used in Theorem 4 to define a reduction rule for the Minimum Fill-In problem. This theorem outlines a set of vertices of the input graph that can be safely eliminated at the start.

**Definition 3**.: _Given a graph \(G=(V,E)\), a set of missing edges \(F^{\prime}\subseteq\text{fill}(V)\) is safe to add in \(G\) if there exists a minimum triangulation \(H=(V,E\cup F)\) of \(G\) where \(F^{\prime}\subseteq F\)._

The following lemmas are easy-to-prove properties of edges that are safe to add (formal proofs can be found in [6]):

**Lemma 3**.: _Let \(G\) be a graph, and \(F^{\prime}\) a set of safe edges to add in \(G\). Then every minimum triangulation \(H^{\prime}\) of \(G\oplus F^{\prime}\) is also a minimum triangulation of \(G\)._

**Lemma 4**.: _Let \(F^{\prime}\) be a set of safe edges to add in \(G\) and \(F^{\prime\prime}\) be a set of safe edges to add in \(G\oplus F^{\prime}\). Then the edge set \(F^{\prime}\cup F^{\prime\prime}\) is safe to add in \(G\)._

Combining Lemma 3 and Lemma 4, we can conclude that a minimum triangulation of a graph can be constructed by a repeated addition of a set of safe edges and subsequently constructing a minimum triangulation of the resulting graph. The following theorem by Bodlaender et al.
([1]) presents a condition for an edge set to be safe to add. Notice that the sets of edges it covers must have only one missing edge. **Theorem 2** ([1]).: _Let \(S\) be a minimal separator of the graph \(G=(V,E)\) where \(|\text{fill}(S)|=1\) and \(S\subseteq N(v)\) for a vertex \(v\in V\setminus S\). Then \(\text{fill}(S)\) is safe to add in \(G\)._ To the best of our knowledge, this remains the only known criterion for edges that are safe to add. In the following theorem, we prove a new condition for a set of edges to be safe to add. We note that, an almost simplicial vertex has a degree of at least two. Therefore, the following theorem is only applicable to graphs with connectivity of at least two. **Theorem 3**.: _Let \(G=(V,E)\) be a graph, and \(v\in V\) be an almost simplicial vertex where \(\text{deg}(v)=\kappa(G)\). Then \(\text{fill}(N_{G}(v))\) is safe to add._ Proof.: Let \(k=\text{deg}(v)\) and \(N_{G}(v)=\{w_{1},\ldots,w_{k}\}\). As noted above, the fact that \(v\) is almost simplicial implies that \(k\geq 2\) and we can assume that \(N_{G}(v)\setminus\{w_{1}\}\) is a clique in \(G\). To prove the statement of the theorem, suppose for a contradiction that in every minimum triangulation \(H=(V,E\cup F)\) of \(G\), \(N_{G}(v)\) is not a clique. Fix one such minimum triangulation \(H\) and observe that because \(H\) is a supergraph of \(G\) (obtained by only adding edges), we have \(\kappa(H)\geq k=\kappa(G)\) and \(N_{G}(v)\subseteq N_{H}(v)\). Define the set of edges \(I_{v}\subseteq F\) as \(I_{v}=\{vu\mid u\in N_{H}(v)\setminus N_{G}(v)\}\). In the first stage of the proof we establish that for every missing edge \(w_{1}w_{i}\in\mathit{fill}_{H}\left(N_{G}(v)\right)\), for \(i\in\{2,\ldots,k\}\), there exists a unique vertex \(u_{i}\) such that \(vu_{i}\in I_{v}\) and \(w_{1}u_{i}\in E\cup F\). We prove this by induction on the size of \(\mathit{fill}_{H}\left(N_{G}(v)\right)\) starting with the base case: \(\left|\mathit{fill}_{H}\left(N_{G}(v)\right)\right|=1\). Assume that \(\mathit{fill}_{H}\left(N_{G}(v)\right)=\{w_{1}w_{2}\}\). As discussed before, \(\kappa(H)\geq k\) and \(\left|N_{G}[v]\right|=k+1\) hence the graph \(H-\left(N_{G}[v]\setminus\{w_{1},w_{2}\}\right)\) remains connected. Let \(P_{w_{1},w_{2}}=\left(w_{1},u_{2}^{1},\ldots,u_{2}^{l},w_{2}\right)\) for \(l\geq 1\) be the shortest \(w_{1},w_{2}\)-path in the graph \(H-\left(N_{G}[v]\setminus\{w_{1},w_{2}\}\right)\), where obviously \(u_{2}^{j}\notin N_{G}[v]\), for all \(j\in\{1,\ldots,l\}\). Since \(P_{w_{1},w_{2}}\) is the shortest path between \(w_{1},w_{2}\) in \(H-\left(N_{G}[v]\setminus\{w_{1},w_{2}\}\right)\) and \(w_{1}w_{2}\notin E\cup F\), the cycle \(C_{w_{1},w_{2}}=\left(v,w_{1},u_{2}^{1},\ldots,u_{2}^{l},w_{2},v\right)\) in \(H\) does not have any chords with both vertices of the vertex set \(\{w_{1},u_{2}^{1},\ldots,u_{2}^{l},w_{2}\}\). Then we necessarily have that \(vu_{2}^{j}\in F\) for every \(j\in\{1,\ldots,l\}\) otherwise \(C_{w_{1},w_{2}}\) is a chordless cycle in \(H\) contradicting the assumption that \(H\) is a triangulation of \(G\). Then in this case, select \(u_{2}=u_{2}^{1}\in N_{H}(v)\setminus N_{G}(v)\) be the unique vertex corresponding to the missing edge \(w_{1}w_{2}\in\mathit{fill}_{H}\left(N_{G}(v)\right)\). By selection, we have \(vu_{2}\in I_{v}\) and \(w_{1}u_{2}\in E\cup F\) as required. Next, suppose that \(\left|\mathit{fill}_{H}\left(N_{G}(v)\right)\right|\geq 2\). 
Since \(\left|\mathit{fill}_{G}\left(N_{G}(v)\right)\right|\geq\left|\mathit{fill}_{ H}\left(N_{G}(v)\right)\right|\) and \(\left|\mathit{fill}_{G}\left(N_{G}(v)\right)\right|\leq k-1\), we have \(\left|\mathit{fill}_{H}\left(N_{G}(v)\right)\right|\leq k-1\). Assume that our claim holds for any \(c=\left|\mathit{fill}_{H}\left(N_{G}(v)\right)\right|\), where \(1\leq c\leq k-2\), and now we extend it to the case where \(\left|\mathit{fill}_{H}\left(N_{G}(v)\right)\right|=c+1\). Let \(\mathit{fill}_{H}\left(N_{G}(v)\right)=\{w_{1}w_{2},\ldots,w_{1}w_{c+2}\}\). Therefore, we are supposing that for every missing edge in \(\{w_{1}w_{2},\ldots,w_{1}w_{c+1}\}\subset\mathit{fill}_{H}\left(N_{G}(v)\right)\) there exists a unique vertex in \(\{u_{2},\ldots,u_{c+1}\}\) such that \(\{vu_{2},\ldots,vu_{c+1}\}\subset I_{v}\) and \(\{w_{1}u_{2},\ldots,w_{1}u_{c+1}\}\subset E\cup F\). Now we need to prove that there exists a unique vertex \(u_{c+2}\in N_{H}(v)\setminus N_{G}(v)\) associated with the missing edge \(w_{1}w_{c+2}\in\mathit{fill}_{H}\left(N_{G}(v)\right)\) where \(w_{1}u_{c+2}\in E\cup F\). Since \(\kappa(G)\geq k\), the graph \(H-\{u_{2},\ldots,u_{c+1},v,w_{c+3},\ldots,w_{k}\}\) remains connected. Note that in the case where \(c=k-2\), we refer to the graph \(H-\{u_{2},\ldots,u_{c+1},v\}\). Let \(P_{w_{1},w_{c+2}}\) be the shortest \(w_{1},w_{c+2}\)-path in this graph. By selection and the fact that \(\{w_{2},\ldots,w_{c+2}\}\) is a clique in \(H\), \(P_{w_{1},w_{c+2}}\) can contain at most one internal vertex from \(\{w_{2},\ldots,w_{c+1}\}\). Thus, \(P_{w_{1},w_{c+2}}\) is either \((w_{1},u_{c+2}^{1},\ldots,u_{c+2}^{l},w_{c+2})\) or \((w_{1},u_{c+2}^{1},\ldots,u_{c+2}^{l},w_{i},w_{c+2})\) where \(w_{i}\in\{w_{2},\ldots,w_{c+1}\}\) and \(l\geq 1\). Also following the selection of \(P_{w_{1},w_{c+2}}\), for every \(j\in\{1,\ldots,l\}\) we have \(u_{c+2}^{j}\notin N_{G}[v]\). Define \(C_{w_{1},w_{c+2}}\) to be the cycle in \(H\) composed of the paths \(P_{w_{1},w_{c+2}}\) and \((w_{1},v,w_{c+2})\). Since \(P_{w_{1},w_{c+2}}\) is the shortest path between \(w_{1},w_{c+2}\) in \(H-\{u_{2},\ldots,u_{c+2},v,w_{c+3},\ldots,w_{k}\}\) and \(w_{1}w_{c+2}\notin E\cup F\), the cycle \(C_{w_{1},w_{c+2}}\) does not have chords with both endpoints in the path \(P_{w_{1},w_{c+2}}\). Therefore, we must have \(vu_{c+2}^{j}\in I_{v}\) for every \(j\in\{1,\ldots,l\}\) otherwise \(C_{w_{1},w_{c+2}}\) is a chordless cycle in \(H\), a contradiction. By selection of the path \(P_{w_{1},w_{c+2}}\) we have \(u_{c+2}^{1}\notin\{u_{2},\ldots,u_{c+1}\}\), and \(w_{1}u_{c+2}^{1}\in E\cup F\) therefore, we let \(u_{c+2}=u_{c+2}^{1}\) be the unique vertex in \(N_{H}(v)\setminus N_{G}(v)\) associated with the missing edge \(w_{1}w_{c+2}\in\mbox{\it fill}_{H}\left(N_{G}(v)\right)\). This proves our claim that for every missing edge \(w_{1}w_{i}\in\mbox{\it fill}_{H}\left(N_{G}(v)\right)\), \(i\in\{2,\ldots,k\}\), there exists a unique vertex \(u_{i}\) such that \(vu_{i}\in I_{v}\) and \(w_{1}u_{i}\in E\cup F\). This will be needed shortly. For the next stage of the proof, we define the set of chords \(F^{\prime}=\left(F\setminus I_{v}\right)\cup\{w_{1}u|vu\in I_{v}\}\cup\mbox{ \it fill}_{H}\left(N_{G}(v)\right)\). We know from the first part of the proof that \(|I_{v}|\geq|\mbox{\it fill}_{H}\left(N_{G}(v)\right)|\). 
By definition, every missing edge \(w_{1}w_{i}\in\mbox{\it fill}_{H}\left(N_{G}(v)\right)\) added in \(F^{\prime}\) replaces an edge \(vu\in I_{v}\) that is removed from \(F\) and the corresponding edge \(w_{1}u\) already exists in \(E\cup F\) (hence \(w_{1}u\notin\left(F^{\prime}\setminus F\right)\)). We then replace all the remaining edges \(vu^{\prime}\in I_{v}\) with an edge \(w_{1}u^{\prime}\) in \(F^{\prime}\) (unless \(w_{1}u^{\prime}\) already exists). As a result, we have that \(|F^{\prime}|\leq|F|\). Next, define the supergraph \(H^{\prime}=\left(V,E\cup F^{\prime}\right)\). Note that by the construction of \(F^{\prime}\), \(N_{H^{\prime}}(v)=N_{G}(v)\) and \(N_{H^{\prime}}(v)\) is a clique in \(H^{\prime}\). Now, let us prove that \(H^{\prime}\) is chordal. Suppose for a contradiction that there exists a chordless cycle in \(H^{\prime}\). Let \(C^{\prime}\) be the shortest chordless cycle in \(H^{\prime}\). Since \(N_{G}(v)=\{w_{1},\ldots,w_{k}\}\) is a clique minimal separator in \(H^{\prime}\), separating \(v\) from the rest of the graph, we know that \(C^{\prime}\) must be entirely contained in the graph \(H^{\prime}-v\) and can contain at most two vertices from \(\{w_{1},\ldots,w_{k}\}\). In the following, we will consider all possibilities and show that every case proves the existence of a chordless cycle in the triangulation \(H\), a contradiction. **Case A.** Suppose that \(w_{1}\) is not a vertex of the cycle \(C^{\prime}\). Hence \(C^{\prime}\) cannot contain any edges from \(\mbox{\it fill}_{H}\left(N_{G}(v)\right)\) that are added in \(H^{\prime}\). This implies that \(C^{\prime}\) is a cycle in \(H\) and since it does not contain the vertex \(v\), none of the removed edges \(I_{v}\) are chords in \(C^{\prime}\). This concludes that \(C^{\prime}\) is a chordless cycle in \(H\). **Case B.** Suppose that \(C^{\prime}\) only contains the vertex \(w_{1}\) from \(\{w_{1},\ldots,w_{k}\}\). Let \(C^{\prime}=(w_{1},y_{1},\ldots,y_{l},w_{1})\) where \(l\geq 3\). As \(C^{\prime}\) is a chordless cycle, it can contain at most two edges from the set of edges \(\{w_{1}u|vu\in I_{v}\}\) that are added in \(H^{\prime}\). Clearly, these edges may only be \(w_{1}y_{1}\) or \(w_{1}y_{l}\). Thus, we need to consider the following complementary subcases and prove that each leads to the existence of a chordless cycle in \(H\): _Subcase (i):_ Suppose that none of the edges \(w_{1}y_{1},w_{1}y_{l}\) are added in \(H^{\prime}\). Consequently, \(C^{\prime}\) is a cycle in \(H\) and because it does not contain the vertex \(v\), the removed \(I_{v}\) are not chords in \(C^{\prime}\). As a result, \(C^{\prime}\) is a chordless cycle in \(H\). _Subcase (ii):_ Suppose that \(w_{1}y_{1}\) is the only edge between \(w_{1}y_{1},w_{1}y_{l}\) that is added in \(H^{\prime}\) (the other case is symmetrical). By construction, \(w_{1}y_{1}\) in \(H^{\prime}\) replaces \(vy_{1}\) in \(H\). Consider the cycle \(C=(v,y_{1},\ldots,y_{l},w_{1},v)\) in \(H\). Since \(C^{\prime}\) is chordless in \(H^{\prime}\) and only the edges from \(I_{v}\) are removed from \(H\), the cycle \(C\) can not have any chords between the vertices \(y_{1},\ldots,y_{l},w_{1}\) in \(H\). Furthermore, for every \(i\in\{2,\ldots,l\}\), \(vy_{i}\notin I_{v}\) otherwise, by our construction \(C^{\prime}\) would have the chord \(w_{1}y_{i}\) in \(H^{\prime}\), contradicting the assumption that \(C^{\prime}\) is chordless. This concludes that \(C\) is a chordless cycle in \(H\), as desired. 
_Subcase (iii):_ Suppose that both edges \(w_{1}y_{1},w_{1}y_{l}\) are added in \(H^{\prime}\). By construction, \(w_{1}y_{1},w_{1}y_{l}\) replace the edges \(vy_{1},vy_{l}\in I_{v}\) in \(H\). Then define the cycle \(C=(v,y_{1},\ldots,y_{l},v)\) in \(H\). Similar to the previous subcase, the cycle \(C\) cannot have chords between the vertices \(y_{1},\ldots,y_{l}\) in \(H\). Additionally, for every \(i\in\{2,\ldots,l-1\}\) we have \(vy_{i}\notin I_{v}\), otherwise \(w_{1}y_{i}\) is a chord in \(C^{\prime}\). This proves that \(C\) is a chordless cycle in \(H\).

**Case C.** Suppose that \(C^{\prime}\) contains the vertex \(w_{1}\) and a vertex \(w_{i}\in\{w_{2},\ldots,w_{k}\}\). Since \(\{w_{1},\ldots,w_{k}\}\) is a clique in \(H^{\prime}\), \(w_{1}w_{i}\) must be an edge in the cycle \(C^{\prime}\). Let \(C^{\prime}=(w_{1},w_{i},y_{1},\ldots,y_{l},w_{1})\) where \(l\geq 2\). If \(w_{1}w_{i}\) was already an edge in \(H\), then \(C^{\prime}\) is a cycle in \(H\). Additionally, \(C^{\prime}\) would be a chordless cycle in \(H\) as none of the removed edges \(I_{v}\) would belong to \(C^{\prime}\). Therefore we can further assume that \(w_{1}w_{i}\in\mathit{fill}_{H}\left(N_{G}(v)\right)\). \(w_{1}y_{l}\) can potentially be one of the edges that are added in \(H^{\prime}\), so let us consider two complementary cases and define the cycle \(C\) in \(H\) accordingly: if \(vy_{l}\in I_{v}\) we define \(C=(v,w_{i},y_{1},\ldots,y_{l},v)\), otherwise if \(vy_{l}\notin I_{v}\) let \(C=(w_{1},v,w_{i},y_{1},\ldots,y_{l},w_{1})\). In either case, \(C\) cannot have \(vy_{j}\) as a chord in \(H\) for any \(j\in\{1,\ldots,l-1\}\) since this would imply that the cycle \(C^{\prime}\) has the chord \(w_{1}y_{j}\) in \(H^{\prime}\). Furthermore, going by the assumption that \(C^{\prime}\) is a chordless cycle in \(H^{\prime}\), \(C\) cannot have any chords between vertices in \(\{w_{1},w_{i},y_{1},\ldots,y_{l}\}\). This concludes that \(C\) is a chordless cycle in \(H\).

This proves that \(H^{\prime}\) is a chordal supergraph of \(G\). Combined with the fact that \(|F^{\prime}|\leq|F|\), we find that \(H^{\prime}\) is also a minimum triangulation of \(G\) and by construction \(\mathit{fill}_{G}(N_{G}(v))\subseteq F^{\prime}\). Consequently, by Definition 3 we can state that \(\mathit{fill}_{G}(N_{G}(v))\) is safe to add in \(G\).

By construction of the minimum triangulation \(H^{\prime}=(V,E\cup F^{\prime})\) (in the proof of Theorem 3), \(N_{H^{\prime}}(v)=N_{G}(v)\) in addition to \(\mathit{fill}_{G}(N_{G}(v))\subseteq F^{\prime}\). In other words, no edges are added in \(H^{\prime}\) with an endpoint in \(v\). In what follows, given a graph \(G=(V,E)\) and a vertex \(v\in V\), we use Algorithm 1 to define a set of edges \(F^{v}\), \(F^{v}\subseteq\mathit{fill}_{G}\left(N_{G}(v)\right)\), used later in Theorem 4 in a reduction rule for the Minimum Fill-In problem.

```
Input: A graph \(G=(V,E)\) and a vertex \(v\in V\)
Output: The set of edges \(F^{v}\subseteq\mathit{fill}\left(N(v)\right)\)

Initially let \(F^{v}=\emptyset\);
while there exists a minimal separator \(S\) in the graph \(G\oplus F^{v}\) s.t. \(S\subseteq N_{G}(v)\) and \(|\mathit{fill}_{G\oplus F^{v}}(S)|=1\) do
    Fix one such minimal separator \(S\in\mathcal{S}_{G\oplus F^{v}}\);
    Add \(\mathit{fill}_{G\oplus F^{v}}\left(S\right)\) to \(F^{v}\);
end while
```
**Algorithm 1** Definition of the set \(F^{v}\)
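For illustration, the following is a brute-force Python/NetworkX sketch of Algorithm 1 that is practical only for small graphs. It relies on the standard characterisation that a vertex set \(S\) is a minimal separator iff \(G-S\) has at least two components whose neighbourhood is exactly \(S\); all function names are ours, and the enumeration of candidate subsets is exhaustive rather than efficient.

```python
import itertools
import networkx as nx

def fill_edges(G, S):
    """fill(S): edges missing among the vertex set S."""
    return [(a, b) for a, b in itertools.combinations(S, 2) if not G.has_edge(a, b)]

def is_minimal_separator(G, S):
    """S is a minimal separator iff G - S has >= 2 'full' components (components C with S ⊆ N(C))."""
    S = set(S)
    H = G.copy()
    H.remove_nodes_from(S)
    full = 0
    for comp in nx.connected_components(H):
        neighbours = set().union(*(set(G[u]) for u in comp))
        if S <= neighbours:
            full += 1
    return full >= 2

def compute_Fv(G, v):
    """Brute-force version of Algorithm 1: repeatedly add the missing edge of a minimal
    separator S contained in N_G(v) with exactly one missing edge, until none exists."""
    work, Fv = G.copy(), []
    N = list(G[v])                                   # S must lie inside N_G(v)
    changed = True
    while changed:
        changed = False
        for size in range(2, len(N) + 1):
            for S in itertools.combinations(N, size):
                missing = fill_edges(work, S)
                if len(missing) == 1 and is_minimal_separator(work, S):
                    work.add_edges_from(missing)
                    Fv.extend(missing)
                    changed = True
                    break
            if changed:
                break
    return Fv
```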
Then to prove the statement of the theorem, we need to show that \(|\bigcup_{i\in\{1,\ldots,|V|\}}R_{i}|=\sum_{i\in\{1,\ldots,|V|\}}|R_{i}|= \textit{mfi}(G)\). Let \(k\) be a fixed step. Since \(\alpha\) is a minimum elimination ordering over the graph \(G_{k}\), we have \(\textit{mfi}(G_{k})=\sum_{i\in\{k+1,\ldots,|V|\}}|R_{i}|\). Then it only remains to prove the following: \[\textit{mfi}(G)-\sum\nolimits_{i\in\{1,\ldots,k\}}|R_{i}|=\textit{mfi}(G_{k}) \tag{1}\] We begin by showing that in the first step of \(\alpha\), \(R_{1}\) is safe to add in \(G\). By selection, \(\alpha(1)\) satisfies one of the conditions (A) or (B) stated in the theorem. Let us consider the following complementary cases in regard to \(F^{\alpha(1)}\): **Case 1.** Suppose that \(F^{\alpha(1)}=\emptyset\). Then \(\alpha(1)\) is simplicial in \(G\), and hence \(R_{1}=\emptyset\) and obviously safe to add, or \(\alpha(1)\) is an almost simplicial vertex in \(G\) and \(\textit{deg}_{G}(\alpha(1))=\kappa(G)\). Then, by Theorem 3, \(R_{1}\) is safe to add in \(G\). **Case 2.** Suppose that \(F^{\alpha(1)}\neq\emptyset\). Then either \(\alpha(1)\) is simplicial in \(G\oplus F^{\alpha(1)}\) implying that \(\textit{fill}_{G}(N(\alpha(1)))=F^{\alpha(1)}\) which in turn means that \(R_{1}\) is safe to add in \(G\) or \(\alpha(1)\) is almost simplicial in \(G\oplus F^{\alpha(1)}\) and \(\textit{deg}_{G}(\alpha(1))=\kappa(G)\). In the latter case, we know from Observation 1 that \(F^{\alpha(1)}\) is safe to add in \(G\) and Theorem 3 shows that \(\textit{fill}_{G\oplus F^{\alpha(1)}}(N(\alpha(1)))\) is also safe to add in \(G\oplus F^{\alpha(1)}\) consequently, the union of these edge sets, \(R_{1}\), is also safe to add by Lemma 4. As a result of Lemma 3 we can state that: \[\textit{mfi}(G)-|R_{1}|=\textit{mfi}(G\oplus R_{1}) \tag{2}\] Since \(R_{1}=\textit{fill}_{G}\left(N(\alpha(1))\right)\), \(\alpha(1)\) is simplicial in the graph \(G\oplus R_{1}\). Consequently, \(N_{G\oplus R_{1}}\left(\alpha(1)\right)\) is a clique minimal separator in \(G\oplus R_{1}\) with components \(G\oplus R_{1}[N[\alpha(1)]]\) and \((G\oplus R_{1})-\alpha(1)\). \(G\oplus R_{1}[N[\alpha(1)]]\) is a complete graph (thus has minimum fill-in of zero) so by Theorem 1 we can ignore this component: \[\mathit{mfi}(G\oplus R_{1})=\mathit{mfi}((G\oplus R_{1})-\alpha(1))=\mathit{mfi}(G _{1}) \tag{3}\] Then by combining Equation (2) and Equation (3) we can state the following: \[\mathit{mfi}(G)-|R_{1}|=\mathit{mfi}(G_{1}) \tag{4}\] In the case where \(k=1\), the theorem follows as this proves Equation (1), so assume that \(k\geq 2\). In the following we argue that the following equation holds for any value of \(l\in\{2,\ldots,k\}\) thus proving Equation (1). \[\mathit{mfi}(G_{l})-|R_{l}|=\mathit{mfi}(G_{l}) \tag{5}\] By applying the same argument as above for step 2 of \(\alpha\), we can state that: \[\mathit{mfi}(G_{1})-|R_{2}|=\mathit{mfi}(G_{2}) \tag{6}\] This combined with Equation (4) implies that: \[\mathit{mfi}(G)-(|R_{1}|+|R_{2}|)=\mathit{mfi}(G_{2}) \tag{7}\] Surely, by repeating this argument for every \(l\), until step \(k\), we get Equation (1) as required. The theorem thus follows. ## 4 \(\tau\) Parameter in Relation With Treewidth and Connectivity In this section we prove that \(\tau\) equals 0 for all graphs with treewidth of at most two and for graphs, in which treewidth and vertex connectivity have the same value. Halin graphs are an example of such graphs which are not chordal and have treewidth and vertex connectivity of three. 
**Lemma 5**.: _Let \(G\) be a graph where \(\mathit{tw}(G)=\kappa(G)\). Then \(\tau(G)=0\)._ Proof.: Let \(\mathit{tw}(G)=k\). Then, there exists an elimination ordering \(\beta\) over \(G\) where \(\max_{v\in V}|\mathit{madj}^{+}_{\beta}(v)|\leq k\). This together with statement (i) from Lemma 2 implies that \(|\mathit{madj}^{+}_{\beta}(\beta(i))|=k\) for every step \(i\in\{1,\ldots,|V|-k-1\}\). Combining this with statement (ii) in the same lemma, we have that \(\sum_{v\in V}|\textit{madj}^{+}_{\beta}(v)|=k(|V|-k-1)+\frac{k(k+1)}{2}\). Also, by Lemma 2, \(|E|+\textit{mfi}(G)\geq k(|V|-k-1)+\frac{k(k+1)}{2}=|E_{G^{+}_{\beta}}|\). This shows that \(G^{+}_{\beta}\) is a minimum triangulation over \(G\). Since \(\textit{tw}(G^{+}_{\beta})=\max_{v\in V}|\textit{madj}^{+}_{\beta}(v)|=k\) we have that \(\textit{tw}(G^{+}_{\beta})=\textit{tw}(G)\), proving that \(\tau(G)=0\) following Definition 1. Lemma 5 actually demonstrates a stronger property of graphs where treewidth and connectivity have the same value. This property is stated below: **Corollary 1**.: _Let \(G\) be a graph with \(\kappa(G)=\textit{tw}(G)\). Then a minimal triangulation \(H\) of \(G\) is a minimum triangulation iff \(\textit{tw}(H)=\textit{tw}(G)\)._ Proof.: Let \(k=\kappa(G)=\textit{tw}(G)\). To prove the statement of the corollary in the '_if_' direction, let \(H\) be a minimum triangulation of \(G\) and \(\alpha\) a perfect elimination ordering over \(H\). Notice that by selection, \(\alpha\) is a minimum elimination ordering over \(G\). As shown in the proof of Lemma 5, we have \(\sum_{v\in V}|\textit{madj}^{+}_{\alpha}(v)|=k(|V|-k-1)+\frac{k(k+1)}{2}\). This, combined with the statements from Lemma 2, implies that \(|\textit{madj}^{+}_{\alpha}(\alpha(i))|=k\) for every step \(i\in\{1,\ldots,|V|-k-1\}\) and \(G^{\alpha}_{|V|-k-1}\) is a \(K_{k+1}\), therefore \(\max_{v\in V}|\textit{madj}^{+}_{\alpha}(v)|=k=\textit{tw}(H)\). Now, let us prove the corollary in the '_only if_' direction. To do so, let \(H\) be a fixed minimal triangulation where \(\textit{tw}(H)=\textit{tw}(G)\) and let \(\beta\) be a perfect elimination ordering over \(H\). Once again, notice that \(\beta\) is a minimal elimination ordering over \(G\) where \(\max_{v\in V}|\textit{madj}^{+}_{\beta}(v)|=\textit{tw}(H)=k\). By the selection of \(\beta\) and statement (i) from Lemma 2, \(|\textit{madj}^{+}_{\beta}(\beta(i))|=k\) for every step \(i\in\{1,\ldots,|V|-k-1\}\). This combined with statement (ii) from Lemma 2 implies that \(\sum_{v\in V}|\textit{madj}^{+}_{\beta}(v)|=k(|V|-k-1)+\frac{k(k+1)}{2}=|E|+\textit{mfi}(G)\) (the last part follows from the proof of Lemma 5). In conclusion, we have that \(H\) is a minimum triangulation as desired. **Lemma 6**.: _Let \(G\) be a graph where \(\textit{tw}(G)\leq 2\). Then \(\tau(G)=0\)._ Proof.: Clearly, \(\tau(G)=0\) if \(\textit{tw}(G)=1\) as trees are chordal. Additionally, if \(\textit{tw}(G)=\kappa(G)=2\) then, following Lemma 5, \(\tau(G)=0\). So, it remains to discuss the case where \(\textit{tw}(G)=2\) and \(\kappa(G)\leq 1\) (note that vertex connectivity of a graph cannot be greater than its treewidth). Let \(\mathcal{Q}\) be the set of all biconnected components of \(G\). We also point out that by definition, \(\mathcal{Q}\) is the set of all maximal connected subgraphs of \(G\) where every two subgraphs share at most one vertex. For every component \(Q\in\mathcal{Q}\), either (a) \(Q\) is a complete graph on two vertices, or (b) \(\kappa(Q)=2\).
We point out that every cycle of the graph \(G\) is entirely contained within the biconnected components \(Q\) of type (b) from \(\mathcal{Q}\). Therefore, given any minimum triangulation \(H=(V,E\cup F)\) of \(G\), for every edge \(uv\in F\) both vertices \(u,v\) are contained in some biconnected component \(Q\) of type (b). Now we define a minimum triangulation \(H=(V,E\cup F)\) as follows. Let \(Q=(V_{Q},E_{Q})\) be a component of type (b) from \(\mathcal{Q}\). Since \(Q\) is a subgraph of \(G\), \(\textit{tw}(Q)\leq 2\), and since \(Q\) is of type (b), \(\kappa(Q)=2\leq\textit{tw}(Q)\), so \(\textit{tw}(Q)=\kappa(Q)=2\). Following the proof of Lemma 5, there exists a minimum triangulation \(Q^{*}=(V_{Q},E_{Q}\cup F_{Q})\) of \(Q\) where \(\textit{tw}(Q^{*})=2\). We add \(F_{Q}\) to \(F\) and repeat the same process for every other component \(Q\) of type (b) in \(\mathcal{Q}\). By construction, \(H=(V,E\cup F)\) is a minimum triangulation of \(G\), as \(F\) is the set of all added edges that construct minimum triangulations over all components of type (b) of \(\mathcal{Q}\). Additionally, because every maximal clique of the triangulation \(H=(V,E\cup F)\) is contained within one of the triangulations \(Q^{*}\) defined above, we have \(\textit{tw}(H)=\textit{tw}(G)=2\). This concludes that \(\tau(G)=0\) as required. ## 5 Future Work Future work will be carried out to characterise graph classes with \(\tau\) of \(0\), e.g., the extension of Lemma 6 to graphs with treewidth of at most \(3\) or to other well-studied graph classes. These methods could also potentially be used to determine lower/upper bounds for the minimum fill-in of some graph classes. Additionally, we will be looking for other conditions under which an edge set in a graph is safe to add, as well as generalisations of Theorem 3, e.g., whether the requirement on the degree and connectivity can be relaxed.
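As a companion to Algorithm 1 above, the following sketch (an illustrative prototype using networkx, not the authors' implementation) computes \(F^{v}\) for one vertex by exhaustively scanning the subsets of \(N_{G}(v)\); a subset \(S\) is accepted when the current graph minus \(S\) has at least two components whose neighbourhood is all of \(S\) (i.e., \(S\) is a minimal separator) and \(S\) is missing exactly one edge. This is only practical for small graphs.

```python
from itertools import combinations
import networkx as nx

def fill_edges(graph, S):
    """Non-adjacent pairs inside the vertex set S (the fill of S)."""
    return [(a, b) for a, b in combinations(S, 2) if not graph.has_edge(a, b)]

def is_minimal_separator(graph, S):
    """S is a minimal separator iff graph - S has >= 2 full components."""
    rest = graph.copy()
    rest.remove_nodes_from(S)
    full = 0
    for comp in nx.connected_components(rest):
        neighbourhood = set()
        for u in comp:
            neighbourhood.update(graph.neighbors(u))
        if set(S) <= neighbourhood:
            full += 1
    return full >= 2

def F_v(G, v):
    """Greedy construction of F^v in the spirit of Algorithm 1."""
    H = G.copy()                      # H plays the role of G (+) F^v
    Fv = []
    neighbours = list(G.neighbors(v))
    changed = True
    while changed:
        changed = False
        for k in range(2, len(neighbours) + 1):
            for S in combinations(neighbours, k):
                missing = fill_edges(H, S)
                if len(missing) == 1 and is_minimal_separator(H, S):
                    H.add_edge(*missing[0])
                    Fv.append(missing[0])
                    changed = True
                    break
            if changed:
                break
    return Fv
```

When no neighbourhood subset is a minimal separator with a single missing edge, the routine simply returns an empty list, matching the remark above that \(F^{v}\) may be empty.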
2309.10723
Temporal Evolution of the Light Emitted by a Thin, Laser-ionized Plasma Source
We present an experimental and simulation-based investigation of the temporal evolution of light emission from a thin, laser-ionized Helium plasma source. We demonstrate an analytic model to calculate the approximate scaling of the time-integrated, on-axis light emission with the initial plasma density and temperature, supported by the experiment, which enhances the understanding of plasma light measurement for plasma wakefield accelerator (PWFA) plasma sources. Our model simulates the plasma density and temperature using a split-step Fourier code and a particle-in-cell (PIC) code. A fluid simulation is then used to model the plasma and neutral density, and the electron temperature as a function of time and position. We then show the numerical results of the space-and-time-resolved light emission and that collisional excitation is the dominant source of light emission. We validate our model by measuring the light emitted by a laser-ionized plasma using a novel statistical method capable of resolving the nanosecond-scale temporal dynamics of the plasma light using a cost-effective camera with microsecond-scale timing jitter. This method is ideal for deployment in the high radiation environment of a particle accelerator that precludes the use of expensive nanosecond-gated cameras. Our results show that our models can effectively simulate the dynamics of a thin, laser-ionized plasma source and this work is useful to understand the plasma light measurement, which plays an important role in the PWFA.
Valentina Lee, Robert Ariniello, Christopher Doss, Kathryn Wolfinger, Peter Stoltz, Claire Hansel, Spencer Gessner, John Cary, Michael Litos
2023-09-16T21:16:41Z
http://arxiv.org/abs/2309.10723v1
# Temporal Evolution of the Light Emitted by a Thin, Laser-ionized Plasma Source ###### Abstract We present an experimental and simulation-based investigation of the temporal evolution of light emission from a thin, laser-ionized Helium plasma source. We demonstrate an analytic model to calculate the approximate scaling of the time-integrated, on-axis light emission with the initial plasma density and temperature, supported by the experiment, which enhances the understanding of plasma light measurement for plasma wakefield accelerator (PWFA) plasma sources. Our model simulates the plasma density and temperature using a split-step Fourier code and a particle-in-cell (PIC) code. A fluid simulation is then used to model the plasma and neutral density, and the electron temperature as a function of time and position. We then show the numerical results of the space-and-time-resolved light emission and that collisional excitation is the dominant source of light emission. We validate our model by measuring the light emitted by a laser-ionized plasma using a novel statistical method capable of resolving the nanosecond-scale temporal dynamics of the plasma light using a cost-effective camera with microsecond-scale timing jitter. This method is ideal for deployment in the high radiation environment of a particle accelerator that precludes the use of expensive nanosecond-gated cameras. Our results show that our models can effectively simulate the dynamics of a thin, laser-ionized plasma source and this work is useful to understand the plasma light measurement, which plays an important role in the PWFA. + Footnote †: preprint: ## I Introduction Plasma-based accelerators have demonstrated accelerating gradients two to three orders of magnitude larger than conventional radio-frequency accelerators, making them an enticing alternative for future high-energy particle accelerator applications [1; 2; 3]. In a beam-driven plasma wakefield accelerator (PWFA), an electron drive beam generates a wake as it propagates through a plasma. A second electron beam follows the drive beam at a distance on the order of the plasma skin depth. The strong longitudinal electric field of the plasma wake accelerates this "witness" beam. As the performance of a PWFA depends strongly on the plasma density profile [4; 5; 6; 7], control and understanding of the plasma source are essential to interpret the outcome of experiments and optimize the properties of the plasma source. A typical PWFA plasma source is a long, narrow filament, 10's to 100's of micrometers in diameter and 10's of centimeters in length, with a core density of \(10^{16-18}\) cm\({}^{-3}\). One technique to form a suitable plasma is to laser-ionize a gas using an optic with a long depth of focus, such as an axicon lens or diffractive optic [8; 9; 10; 11]. In such a plasma source, an \(\mathcal{O}(10\text{TW})\), ultrashort laser pulse is focused into a volume of gas, such as Hydrogen, Helium, or Lithium, ionizing a thin filament prior to the arrival of the electron beams. Characterization of the plasma source is often performed in the absence of the electron beams. One common technique is to look at the light emitted by the plasma. Multiple mechanisms with varying time scales contribute to the plasma light emission process of these thin laser-ionized plasma sources. The plasma is locally formed on the time scale of the ionizing laser pulse duration, which is around 100 fs. The plasma electrons thermalize within 10 ps of formation. 
Then, the plasma expands quickly outward within a few nanoseconds while the neutral gas diffuses inward, followed by a slower diffusion and thermalization phase between the plasma and the neutral gas that lasts 10's to 100's of nanoseconds. During this time, plasma electrons collide with neutral atoms, exciting them, while also colliding with plasma ions and recombining. Both of these mechanisms contribute to plasma light emission. The latter will also lead to the eventual neutralization of the plasma over a few hundred microseconds time scale [12]. The dynamics of plasma expansion have been studied through simulations and experiments [13; 14; 15], and the electron thermalization time scale has been demonstrated in Ref. [16] and [17]. Experimental and theoretical studies of plasma neutralization through recombination are presented in Ref. [18] and [19]. However, there remains a gap in understanding the light emission process in thin, laser-ionized plasmas, such as PWFA plasma filaments. The amount of light emitted depends on the electron temperature and the number densities of the plasma and neutral gas. Determining the relative dominance of excitation versus recombination is a complex question influenced by the plasma's geometry. In this work, we show an analytical and computational model of the plasma light emission process of a thin laser-ionized Helium plasma and confirmed by the experiment. Our measurements of plasma light emission are taken using an inexpensive camera. These cameras are often used to verify plasma formation in PWFA experiments. For example, many of the experiments at the Facility for Advanced Accelerator Experimental Tests-II (FACET-II) at SLAC National Accelerator Laboratory rely on cameras to observe the laser-ionized plasma source [20; 21; 22; 23; 24; 25; 26]. Cost-effective cameras, such as Gigabit Ethernet (GigE) machine vision cameras, are typically used in PWFA experiments because the high radiation environment in the accelerator housing leads to rapid cycles of camera failure and replacement. Unfortunately, these cameras have a large trigger timing jitter (\(\sim 10\)\(\mu\)s) and long exposure time (\(\sim 10\)\(\mu\)s) compared to the short time scales (10's-100's ns) of plasma light emission, making images from these kinds of cameras integrate over various dynamics described previously. In order to verify our model using these cameras, we demonstrate a novel technique to measure the time-resolved light signal using these low-cost cameras. Our three step light emission model: plasma formation, plasma expansion, and plasma light emission is presented in section II. In section III, we introduce our experimental setup where a laser-ionized Helium plasma is viewed by a GigE machine vision camera. In section IV, we present a novel technique for studying plasma light emission with 1 ns resolution using a low-cost GigE camera. In section V, we demonstrate that the experimental data confirms our Helium plasma light emission model. Furthermore, we show the broad applicability of this diagnostic tool in PWFA experiments by demonstrating that a simple theoretical model can describe the intensity of the plasma light in a time-integrated image, permitting estimation of the initial plasma parameters with a single image of the time-integrated plasma light. ## II Modeling This section outlines a workflow for modeling the laser-ionized plasma formation, expansion, and light emission. 
We demonstrate the entire workflow using a single set of parameters (laser pulse duration: \(\tau\)= 50 fs, focusing optic: \(1^{\circ}\) axicon lens, laser energy: \(E\)= 160.33 mJ, gas pressure: \(P\)= 118 mbar) and compare the time-resolved simulated plasma light emission pattern with experimental data. We also conducted multiple plasma formation simulations for a range of parameters (\(E\)= 140-175 mJ and \(P\)= 20-80 mbar). The simulated electron temperatures were used to calculate plasma light emission, which is compared to experimental measurements in Section V. ### Plasma Formation In a laser-ionized PWFA plasma source, ionization occurs through field ionization, typically in the tunneling regime where the electric field from the laser distorts the atomic potential, allowing an electron to tunnel out of the atomic barrier. This process is well described by the ADK model [27]. Particle-in-cell (PIC) simulations are able to capture this process; however, simulating the ionization process over a 5 ns window (axicon focus of 1.5 m) while resolving the laser period would require an impractical amount of computational resources. As an alternative, we use an in-house code to simulate the 3D plasma profile produced by the laser. The ADK model is used to calculate the ionization rate, while the split-step Fourier (SSF) algorithm [28] takes into account refraction of the rear of the laser pulse due to the presence of plasma ionized by the front of the laser pulse. Figure 1 shows an example of the simulated laser intensity and plasma density profiles formed by an axicon lens, as computed by our SSF code. This code, however, cannot provide information about the initial plasma electron temperature. To accommodate this, we use a PIC code, VSim [29] provided by Tech-X, to simulate ionization of a typical localized region along the laser ionization path, importing the local laser pulse profile retrieved from the SSF code. We then extract the plasma electron temperature from the resulting kinetic energy distribution of the electrons in the PIC simulation. For all PIC simulations in this work, 1 particle per cell is used with a time step of \(dt=3.25\times 10^{-17}\) s and a spatial increment of \(dx=7\times 10^{-8}\) m. The transverse size of the simulation box is approximately 14 times the full width at half maximum (FWHM) of the laser's spatial profile, and the temporal dimension is roughly 4 times the FWHM duration of the laser pulse. Modeling ionization through a combination of the SSF and PIC simulations allows us to acquire the plasma density and plasma electron temperature with reasonable computational resources. Figure 2 shows a histogram of the electron kinetic energy distribution as a function of radius. It also demonstrates that the intensity of the ionization laser profoundly influences the azimuthally averaged electron energy. As we will show, our experimental observations confirm that the method described above is a reliable and relatively inexpensive way to model a laser-ionized PWFA plasma source. Figure 1: Simulated formation of an axicon plasma using a split-step Fourier code, with an initial laser pulse energy of 175 mJ in a Helium gas pressure of 28 mbar, corresponding to a fully ionized plasma of \(n_{e}=6.9\times 10^{17}\) cm\({}^{-3}\). The axicon lens (\(\alpha=1^{\circ}\)) is positioned at \(z=0\). (a) illustrates the laser intensity at the middle of the pulse on the \(y=0\) plane. (b) shows the plasma density on the \(y=0\) plane.
Because of the long Bessel focus from the axicon lens, the full ionization regime extends to up to 1 meter. ### Plasma Expansion The plasma electrons thermalize on the order of the Spitzer electron self-collision time, given by \(\tau\approx(1.40/(8\pi r_{e}^{2}c^{4}n_{e}\ln\Lambda))(3k_{B}T_{e}/m_{e})^{3/2}\) where \(r_{e}\) is the classical electron radius, \(c\) is the speed of light, \(n_{e}\) is the plasma electron density, \(\ln\Lambda\) is the Coulomb logarithm, \(k_{B}\) is the Boltzmann constant, \(T_{e}\) is the average electron temperature, and \(m_{e}\) is the mass of electron [30]. For example, for a plasma with \(k_{B}T_{e}=10\,\mathrm{eV}\) and \(n_{e}=1\times 10^{17}\,\mathrm{cm}^{-3}\), \(\tau\approx 9\,\mathrm{ps}\). This fast thermalization process has been measured in Ref. [16] and [17]. The thermalized electron temperature of the plasma is \(k_{B}T_{e}=2/3<E_{k}>\), where \(<E_{k}>\) is the average electron kinetic energy immediately after ionization. The expansion of a laser-ionized plasma source can be modeled using fluid simulations, as demonstrated in Ref. [13]. Using the plasma density profile and the plasma electron temperature acquired from previous steps, we model the plasma expansion using Tech-X's fluid simulation software, USim. USim is an Eulerian computational fluid dynamics code optimized for plasma fluid simulations by solving the Magnetohydrodynamic (MHD) equations [31]. We simulate the first \(3\,\mathrm{ns}\) of the expansion using a two-temperature, single-fluid MHD diffusion code, where the ion temperature is \(0.025\,\mathrm{eV}\) (\(300\,\mathrm{K}\)) and the average electron temperature is taken from the PIC simulation, \(<T_{e}>\sim 13\,\mathrm{eV}\). The simulated expansion is shown in figure 3(a). For the subsequent expansion from \(3\,\mathrm{ns}\) until \(t=200\,\mathrm{ns}\), plasma-neutral collisions are accounted for using a two-fluid model that calculates the mass diffusion, energy exchange, and temperature exchange between two species (plasma and neutral) [32; 33]. The subsequent expansion is shown in Fig. 3(b). It has been tested that the results of the second simulation period are relatively insensitive to the exact choice of the transition time, so long as the change in density is relatively small. As shown by the red curve (density lineout) in Fig. 3(a), the expansion (decay) reaches \(1/e^{2}\) before \(3\,\mathrm{ns}\). Simulating the expansion in two steps simplifies the complexity of the model and allows for a higher temporal resolution during the initial rapid expansion. ### Plasma Light Emission Light emission from plasma can be generated by two primary processes: electron-neutral collisional excitation and plasma electron-ion recombination. Both processes are intricate and require Monte Carlo atomic electron tracking models [34] to simulate exactly, which is both theoretically and computationally demanding. To simplify the problem, we focus on the most probable transition process that emits visible/NIR light in either process, since that is what Complementary Metal-oxide Semiconductor (CMOS) GigE camera can detect. In the collisional excitation model, the electrons have a significantly higher temperature (\(T_{e}\sim 15\,\mathrm{eV}\)) compared to the ions and neutrals (\(T_{i}\approx T_{n}\sim 0.025\,\mathrm{eV}\)). Therefore, electron-neutral collisions dominate while ion-neutral and neutral-neutral collisions are negligible. We also assume that all neutral atoms are in their ground state before the collision. 
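Stepping back briefly to the thermalization estimate of Section II.B: the Spitzer self-collision time quoted there is easy to evaluate numerically. In the sketch below the Coulomb logarithm is an assumed representative value (the text does not quote one), so the result should be read only as an order-of-magnitude check.

```python
import math

r_e = 2.8179403262e-15   # classical electron radius [m]
c = 2.99792458e8         # speed of light [m/s]
m_e = 9.1093837015e-31   # electron mass [kg]
eV = 1.602176634e-19     # 1 eV in joules

def spitzer_self_collision_time(n_e_cm3, kT_e_eV, ln_lambda=10.0):
    """Electron self-collision (thermalization) time from Sec. II.B.

    n_e_cm3   : plasma electron density [cm^-3]
    kT_e_eV   : electron temperature [eV]
    ln_lambda : Coulomb logarithm (assumed value)
    """
    n_e = n_e_cm3 * 1e6                        # cm^-3 -> m^-3
    kT = kT_e_eV * eV                          # eV -> J
    prefactor = 1.40 / (8 * math.pi * r_e**2 * c**4 * n_e * ln_lambda)
    return prefactor * (3 * kT / m_e) ** 1.5   # seconds

# Example from the text: kT_e = 10 eV, n_e = 1e17 cm^-3
# gives a time on the order of 10 ps, consistent with the ~9 ps quoted above.
print(spitzer_self_collision_time(1e17, 10.0))
```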
In a Helium atom, \(3d-2p\) de-excitation has the strongest persistent lines. The dipole-allowed transitions from the ground state occur from the \(1s\) (\(l=0\)) state to the \(p\) (\(l=1\)) states. Of these, excitation to the \(4p\) state has the largest cross-section among allowed excited states that can generate a visible/NIR photon from a subsequent \(3d-2p\) de-excitation. Therefore, we choose this \(1s-4p-3d-2p\) excitation/de-excitation process to calculate the lower bound of the visible photon emission rate via a single excitation path. Appendix A presents a more detailed explanation of this model. Figure 3: Simulated plasma expansion using fluid codes. (a) shows the plasma expansion on the \(y=0\) plane within the first three nanoseconds simulated by a single-fluid MHD model. (b) shows the plasma expansion on the \(y=0\) plane from 3 ns to 200 ns, including plasma-neutral collision effects. In both figures, the red and green curves show the plasma density (\(n_{e}\)) and plasma temperature (\(T_{e}\)) at \(x=0\), respectively. Note that the first \(5\,\mathrm{ns}\) (numerical thermalization time) of the green curve (\(T_{e}\)) in (b) is excluded from the plot. Figure 2: The plot illustrates the plasma electron energy histogram as a function of the radius, ionized by a \(175\) mJ pulse in Helium gas with a density of \(6.9\times 10^{17}\) cm\({}^{-3}\), simulated by a PIC simulation. The azimuthally averaged radial electron energy is represented by the red curve and the ionizing laser intensity is represented by the green curve. The azimuthally averaged radial electron energy follows the intensity of the ionization laser. To calculate the local excitation rate per unit volume, we start with the collision rate for one electron with a population of neutral particles [35; 36; 37], \[\upsilon_{en}=n_{n}\sigma(v)_{1s-4p}\ v, \tag{1}\] where \(n_{n}\) is the local neutral density, \(\sigma(v)_{1s-4p}\) is the electron-impact excitation cross-section from the 1s state to the 4p state, and \(v\) is the relative velocity between the electron and the neutrals, which is approximated as the electron velocity (\(v_{e}>>v_{n}\)). The probability density function for a given speed (\(v=|\vec{v}|\)) for a Maxwellian population of electrons is \[f(v)=\left(\frac{m_{e}}{2\pi k_{B}T_{e}}\right)^{3/2}4\pi v^{2}\exp\left(-\frac{m_{e}v^{2}}{2k_{B}T_{e}}\right), \tag{2}\] where \(\int_{0}^{\infty}f(v)dv=1\). The average collision rate per electron is \[<\upsilon_{en}>=\frac{\int_{0}^{\infty}f(v)n_{n}\sigma(v)vdv}{\int_{0}^{\infty}f(v)dv}=n_{n}4\pi\left(\frac{m_{e}}{2\pi k_{B}T_{e}}\right)^{3/2}\int_{0}^{\infty}\exp\left(-\frac{m_{e}v^{2}}{2k_{B}T_{e}}\right)\sigma(v)v^{3}dv. \tag{3}\] This average collision rate can be written in terms of the kinetic energy, \(K\): \[<\upsilon_{en}>=\frac{8\pi n_{n}}{m_{e}^{2}}\left(\frac{m_{e}}{2\pi k_{B}T_{e}}\right)^{3/2}\int_{0}^{\infty}\exp\left(-\frac{K}{k_{B}T_{e}}\right)\sigma(K)K\,dK. \tag{4}\] Thus, the excitation rate per unit volume at location \(\vec{r}\) and time \(t\), for a population of electrons colliding with a population of neutrals, is \(n_{e}<\upsilon_{en}>\), which depends on \(\vec{r}\) and \(t\) through \(n_{e}\), \(n_{n}\), and \(T_{e}\). After the collisional excitation, some excited electrons in the \(4p\) state spontaneously de-excite to the \(3d\) state, and then to the \(2p\) state, yielding a detectable photon emission.
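The Maxwellian average in Eq. (4) is straightforward to evaluate numerically once a cross-section curve is available. The sketch below shows only the structure of that calculation: the cross-section function is a made-up placeholder with an assumed threshold near 23.7 eV, whereas the paper interpolates the tabulated \(1s-4p\) data of Ref. [43], so the numbers it produces are not the paper's.

```python
import numpy as np

m_e = 9.1093837015e-31   # electron mass [kg]
eV = 1.602176634e-19     # 1 eV in joules

def placeholder_cross_section(K_eV):
    """Stand-in for the 1s-4p electron-impact cross-section [m^2].

    Zero below an assumed ~23.7 eV threshold, then a smooth rise and decay.
    Replace with an interpolation of the data from Ref. [43] for real use.
    """
    K = np.asarray(K_eV, dtype=float)
    K_th = 23.7
    sigma = 1e-22 * (K - K_th) * np.exp(-K / 50.0)
    return np.where(K > K_th, sigma, 0.0)

def averaged_collision_rate(n_n, kT_e_eV, sigma_of_K, K_max_eV=2000.0, n_pts=20000):
    """<nu_en> of Eq. (4): Maxwellian-averaged electron-neutral excitation rate.

    n_n        : neutral density [m^-3]
    kT_e_eV    : electron temperature [eV]
    sigma_of_K : callable, cross-section [m^2] as a function of energy [eV]
    """
    K = np.linspace(0.0, K_max_eV, n_pts) * eV           # energy grid [J]
    kT = kT_e_eV * eV
    integrand = np.exp(-K / kT) * sigma_of_K(K / eV) * K
    prefactor = (8 * np.pi * n_n / m_e**2) * (m_e / (2 * np.pi * kT)) ** 1.5
    return prefactor * np.trapz(integrand, K)            # [1/s] per electron

# Illustrative call only (placeholder cross-section, assumed conditions):
print(averaged_collision_rate(n_n=1e23, kT_e_eV=1.5,
                              sigma_of_K=placeholder_cross_section))
```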
The decay transition probability from state \(i\) to state \(j\) is \[P_{ij}=\frac{A_{ij}}{\sum_{k}A_{ik}}, \tag{5}\] where \(k\) indicates all dipole-allowed final transition states from initial state \(i\), and \(A\) is the transition rate, here taken from Ref. [35]. The transition probability from the \(4p\) state to the \(3d\) state is \(P_{4p-3d}\approx 1.2\times 10^{-3}\), and from the \(3d\) to the \(2p\) state is \(P_{3d-2p}\approx 1\). The visible photon emission rate is thus \[\frac{dn_{pexc}(\vec{r},t)}{dt}=n_{e}(\vec{r},t)<\upsilon_{en}(\vec{r},t)>P_ {4p-3d}\ P_{3d-2p}.\] We then proceed to work out an analytic prediction of the peak photon emission (\(x=0,y=0\)) scaling as a function of the initial plasma density and temperature with a few assumptions explained in Appendix B: \[\Gamma= \int_{0}^{\infty}n_{e}(x=0,y=0)<\upsilon_{en}(x=0,y=0)>dt \tag{6}\] \[= C\ n_{0}^{2}\sqrt{Rk_{B}T_{0}}\ \exp\left(-\frac{K_{th}}{Rk_{B}T_{0}} \right),\] where \(C\) is scaling constant left as a free parameter to fit to the data, \(n_{0}\) and \(T_{0}\) are the initial electron density and temperature at \(x=0,y=0\), respectively, \(K_{th}\) is the free electron kinetic energy threshold for exciting a ground-state atom to the \(4p\) state, and \(R=0.09\) is the empirically acquired temperature decay ratio; that is, the ratio between the initial temperature and the final temperature after the initial fast expansion as shown in Fig. 3 (a). This value was obtained from our simulation and may vary in other circumstances, though it agreed well with the experimental data over a relatively wide range of initial plasma parameters. The detailed derivation of 6 can be found in Appendix B. The other possible light emission process, recombination, can occur through three different primary modes: three-body recombination, radiative recombination, and dielectronic recombination [38]. Radiative recombination dominates the production of visible photons [39], wherein a single free electron recombines with a singly ionized ion into one of its high Rydberg states, yielding a highly-exited Helium atom. The rate of electron-ion radiative recombination can be approximated as follows [40]: \[\alpha_{r}=2.7\times 10^{-13}Z^{2}\ T_{e}[eV]^{-1/2}\ \ [\mathrm{cm}^{3}/ \mathrm{s}], \tag{7}\] where \(Z=1\) is the charge state of He\({}^{+}\). The recombination rate [41] is, therefore, \[\frac{dn_{p\mathrm{pcc}}}{dt}=\alpha_{r}n_{e}^{2}, \tag{8}\] assuming quasi-neutrality. Because it is more energetically favorable, we assume that most of the recombined electrons start in a highly-excited state and then eventually de-excite to the ground state in a radiative process. To estimate an upper bound of the detectable photon emission rate from a single recombination event, we look at the strongest visible/NIR line in Helium atoms, the \(3d-2p\) transition. We then assume that electrons in high Rydberg states eventually go through the dominant transition process, \(4f-3d-2p\)[35; 42]. The details of the model are discussed in Appendix A. The de-excitation transition probability for \(4f\) states to \(3d\) states is \(P_{4f-3d}\approx 1\). The local visible photon emission rate from electron-ion recombination is thus \[\frac{dn_{pcc}(\vec{r},t)}{dt}=\alpha_{r}n_{e}(\vec{r},t)^{2}\ P_{4f-3d}P_{3d-2p}. 
\tag{9}\] Note that Eq. 9 is predicated on the assumption that all recombination events lead to the emission of a detectable photon, which is not strictly true, but does provide an upper-bound estimation of the recombination photon emission rate. The collisional excitation and recombination rates per unit volume are orders of magnitude less than the electron density, so their effects on the expansion of the plasma are negligible. This allows us to model the time-resolved plasma expansion first and then numerically estimate the visible photon emission density from excitation and recombination using Eq. 6 and Eq. 9 at each moment in time, \(t\), and at every position, \(\vec{r}\). The cross-section in Eq. 1 is numerically interpolated from Ref. [43]. Our simulation indicates that the visible/NIR photon emission rate from the collisional excitation is three orders of magnitude greater than that from recombination. Recall that in our estimation, the photon emission rate arising from collisional excitation is a lower bound, while that from plasma recombination is an upper bound. Thus, even with the simplifications introduced in our model, the assertion that collisional excitation principally dominates the light emission process remains valid. Figure 8 (a) illustrates the time-resolved simulated photon emission pattern from an expanding Helium plasma. ## III Experimental Setup The plasma channel is ionized by a \(50\,\mathrm{fs}\), \(450\,\mathrm{mJ}\) pulse from a terawatt-class Ti:Sapphire laser system. The wavefront of the pulse is optimized with a deformable mirror before being sent through an axicon lens (\(\alpha=1^{\circ}\)) that generates a non-diffracting Bessel-beam focus over a length of \(\sim 150\,\mathrm{cm}\). The experimental vacuum chamber is filled with Helium gas, creating the plasma source within. The vacuum system consists of two vacuum chambers connected with vacuum pipes and a six-way vacuum cross as shown in Fig. 4. The plasma is formed inside the vacuum pipe and viewed through an optical window. The gas pressure is monitored by a vacuum gauge and recorded on a shot-by-shot basis. The laser intensity profile is recorded upstream and downstream of the axicon lens. The former is used as an input parameter for the simulation, and the latter is used to monitor the Bessel focus profile. The diagnostic system views the plasma light at \(1\,\mathrm{m}\) downstream of the axicon. A commercial camera lens (AT-X M100 PRO D Macro from Tokina, \(f=100\,\mathrm{mm}\)) is installed immediately outside the vacuum window. A 6-OD notch filter (central wavelength = \(785\,\mathrm{nm}\), FWHM = \(33\,\mathrm{nm}\)) filters out most of the scattered laser light that appears as a background to the plasma light. The imaged plasma light is recorded with a CMOS GigE camera chip (Sony IMX265), forming the primary source of data collected in this experiment. The camera is externally triggered by a trigger pulse train from a Signal Delay Generator (SDG), which is synchronized with the Ti:Sapphire laser system. The timing of the laser system is precise to the sub-picosecond scale; the SDG is precise to the 10's of picoseconds scale. However, the CMOS GigE camera has a significant internal timing jitter between the arrival of the trigger signal and the initiation of the exposure time, on the scale of \(10\,\mu\)s.
The shortest integration time for this type of camera is on the order of \(10\,\mu\)s, whereas the dynamic timescale of light emission from the plasma is 10's to 100's of nanoseconds. Therefore, this type of camera always captures a time-integrated signal of the plasma light that is much longer than the dynamic timescale. Despite the limitations of CMOS GigE cameras, their cost-effectiveness makes them a preferred choice in high-radiation environments. For example, in the FACET-II accelerator tunnel at SLAC National Accelerator Laboratory, approximately 10 GigE cameras are employed solely to monitor the laser and plasma source. During operations, FACET has reported the loss of more than 20 cameras per year due to high radiation exposure. Therefore, cost-efficient CMOS cameras (typically a few hundred dollars each) are far more suitable than would be a fast (nanosecond) gated camera, which can cost tens of thousands of dollars. In Section IV, we introduce an innovative analysis method to extract time-resolved plasma light emission data from time-integrated measurements using inexpensive GigE cameras. ## IV Analysis Methodology ### Statistical Analysis As explained in the previous section, CMOS GigE cameras are preferable in high-radiation environments but are conventionally unsuitable for high-timing-precision measurements. In this section, we discuss an inventive statistical analysis to extract time-resolved light emission from multi-shot time-integrated data. The first step is quantifying the camera-trigger-jitter distribution, which is device-dependent. Physically, a trigger pulse is sent at time \(t_{\mathrm{SDG}}\) to the GigE camera. After processing in the camera chip, the exposure time begins at \(t_{\mathrm{start}}\). This delay between \(t_{\mathrm{SDG}}\) and \(t_{\mathrm{start}}\) varies shot-to-shot; we call this the camera-trigger-jitter, which follows a distribution \(J(t)\). The precision of the laser arrival time at the target plasma location (\(\mathcal{O}(10\,\mathrm{ps})\)) is orders of magnitude smaller than the camera-trigger-jitter (\(\mathcal{O}(10\,\mu\mathrm{s})\)). As a result, the laser arrival time can serve as a fiducial signal to investigate \(J(t)\). The diagnostic camera is first synchronized to the laser arrival time, so that the laser pulse always arrives within the duration of the camera exposure time and the laser scattered light is captured by the camera in every shot. This starting time is defined as \(t=0\) in Fig. 5. The trigger signal is then advanced/delayed away from \(t=0\). Because the exposure starting-time (and ending-time) jitters, the laser pulse will sometimes fall outside the exposure window as the delay time is scanned, resulting in images without laser signal. Eventually, the trigger advance/delay is great enough that none of the images taken capture the laser. The observed probability of a laser pulse falling inside the camera exposure window as a function of advance/delay time, \(j(t)\), is plotted in Fig. 5. Note that the falling edge in this figure corresponds to a relative _delay_ of the SDG trigger, meaning that the laser arrives _ahead_ of the camera exposure's starting-time. This implies that the falling edge of \(j(t)\) is the cumulative distribution function of the camera-trigger-jitter distribution, \(J(t)\), \[j(t)=\int_{-\infty}^{t}J(t^{\prime})dt^{\prime}.
\tag{10}\] The cumulative distribution function of a normal distribution is \[j(t)=\frac{1}{2}\left(1+\text{erf}\left(\frac{t-\mu_{J}}{\sigma_{J}\sqrt{2}}\right)\right), \tag{11}\] where \(\mu_{J}\) is the mean of \(J(t)\), \(\sigma_{J}\) is the standard deviation of \(J(t)\), and erf is the error function. The rising and falling edge data presented in Fig. 5 are fitted with Eq. 11, and the camera-trigger-jitter distribution, \(J(t)\), is thus determined from Eq. 10, assuming a normal distribution with \(\sigma_{J}\) given by the fit, and plotted in Fig. 6. The observable plasma light lasts for an extended duration of time \(dt_{\text{glow}}\). Most of the images captured by the camera either include both the plasma light and background light from the laser pulse, or neither. (Note: Though the notch filter removes much of the scattered laser light, some always remains visible on the camera.) For certain trigger delay times, however, it is possible for the camera to capture a portion of the plasma light while the prompt laser light falls outside of the camera's exposure window. This happens when the exposure begins after the laser arrives but before the plasma light ends. The longer the plasma light decay time, \(dt_{\text{glow}}\), the more these "lucky shots" accumulate. The purple area under the curve in Fig. 6 represents these "lucky shots" (shots with plasma light but without the laser signal). Figure 4: Experimental setup. 50 fs laser pulses generated by a Ti:Sapphire laser system enter Chamber A through the rightmost window in this figure, bouncing off mirrors M1, M2, M3, M4, and the deformable mirror (DM). The leakage light from M5 exits the vacuum chamber and splits into three paths to the near-field camera (NF), far-field camera (FF), and the Shack-Hartmann wavefront sensor (WF) to monitor the laser position, pointing, and wavefront. (L1, L2: lenses; MO: microscope objective.) The main laser beam reflects off mirrors M5 and M6, passing through a 1\({}^{\circ}\) axicon lens (AX), which produces a Bessel focus within the vacuum pipe connecting Chamber A and Chamber B. The Bessel focus is centered at the six-way cross (CR), as denoted by the yellow region indicating plasma formation. The laser beam is terminated at the beam dump (BD), while the leaked light from mirror M7 is used for imaging the Bessel focus by a single lens (L5) to a camera (BC). On the diagnostic side, a commercial macro lens (L4) is used to image the plasma light onto a camera (AG). The vacuum pressure is measured shot-to-shot using a pressure gauge (PG) on the figure's far left. (a) shows an example of the raw plasma light image taken by camera AG. Figure 5: Measurement of the probability of a laser pulse appearing within the camera's exposure time at various relative delays, measured with 30, 40, 50, and 1000 \(\mu\)s exposure settings. Jitter of the camera's exposure start time and end time causes the falling and rising ramps to be continuous functions instead of step functions. Repeatable measurements of the falling edge at the various exposure settings (30, 40, 50, and 1000 \(\mu\)s) confirm that altering the exposure duration does not affect the start-time jitter. The magenta and cyan curves show Eq. 11 fitted to the rising and falling ramps, respectively. The ratio of "lucky shots" to all recorded shots \(R\) is expressed as \[R=\int_{t_{\text{laser}}}^{t_{\text{laser}}+dt_{\text{glow}}}J(t-t_{\text{SDG}})\,dt \tag{12}\]
where \(J(t-t_{\rm SDG})\) is the camera-trigger-jitter distribution centered about a given SDG trigger time \(t_{\rm SDG}\), and \(t_{laser}\equiv t_{\rm SDG_{0}}\), where \(t_{\rm SDG_{0}}\) is the timing \(t=0\) at Fig. 5. Accordingly, we solve for \(dt_{\rm glow}\), giving \[dt_{glow}|_{t_{\rm SDG}}=\left.\frac{-J(t_{\rm laser})+\sqrt{J(t_{\rm laser})^{2 }-2\;I\;J^{\prime}(t_{\rm laser})}}{J^{\prime}(t_{\rm laser})}\right|_{t_{\rm SDG}} \tag{13}\] where \(J^{\prime}\) denotes the time derivative of \(J\), and \(I=R\int_{-\infty}^{\infty}J(t-t_{\rm SDG})\,dt\). Using Eq. 13, we can measure the characteristic plasma emission time scale without a nanosecond gated camera, so long as we record a statistically sufficient number of images with our GigE camera. ### Data Processing In this subsection, we explain how the raw experimental data is processed before performing the statistical analysis described in the previous subsection. We collected data at various SDG delays across a \(120\,\mu\)s interval with \(10\,\mu\)s increments. For each SDG delay, we set the camera exposure time to 30, 40, 50, and \(1000\,\mu\)s. We captured 300 shots for each combination of SDG delay and exposure time, resulting in a total of \(12,000\) data shots. Scattered laser light fills the vacuum chamber during the experiment, resulting in plasma light emission images with high background noise at the laser frequency. The background noise level is hundreds of times higher for shots that include prompt laser light than for those that do not, making it easy to identify shots where the laser arrival time fell within the camera exposure window. Due to the high and fluctuating noise level, distinguishing the plasma signal from the background noise is somewhat challenging. A dynamic statistical threshold is employed to determine whether any particular image includes a signal from plasma emission light. A region of interest where the plasma light may appear is identified as the "plasma region" and the rest of the image is tagged as the "background region". We work out the 99.7% error bound for a population mean (EBM), that is, the probability of the average of a random sample falling below \(\bar{x}_{\rm max,99.7}\) is 99.7% based on \[\bar{x}_{\rm max,99.7}=\bar{x}+z_{99.7}\frac{\sigma}{\sqrt{n}}, \tag{14}\] where \(z_{99.7}=2.778\) is the \(Z\) score for 99.7% probability, \(\bar{x}\) is the average light count value of the "background region", \(\sigma\) is the standard deviation of the "background region", and \(n\) is the sample size of the "plasma region". When the average value of the "plasma region" exceeds the threshold defined in Eq. 14, it is considered a positive signal of plasma light emission. This dynamic statistical threshold effectively adapts to the varying background intensities and is more efficient than a fixed intensity threshold. By Eq. 13, we determined that the decay time scale of the detectable signal for our laser-ionized Helium plasma light emission is \(194\pm 14\,\)ns. We identified 204 shots that exhibited plasma light emission without laser light based on Eq. 14. Each image is rotated to ensure the plasma light filament appears completely horizontal. Since the plasma filament is homogeneous over the majority of its length, we took the projection of these images along the laser propagation axis and plotted them as a waterfall plot in Fig 7(a), sorted by total intensity of the region of interest. Each column in the waterfall plot represents one plasma-light-only shot. 
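As an aside on the timing extraction: once \(J(t)\) has been fitted as a Gaussian, Eq. 12 can also be inverted for \(dt_{\text{glow}}\) directly through the normal CDF, as an alternative to the expansion in Eq. 13. The sketch below uses illustrative numbers only (they are assumptions, not the measured values); with a \(\sim 10\,\mu\)s jitter and a sub-percent lucky-shot fraction it returns a glow time of a couple hundred nanoseconds, the same scale as the measured \(194\pm 14\) ns.

```python
from scipy.stats import norm

def glow_time_from_lucky_shots(R, t_laser, t_SDG, mu_J, sigma_J):
    """Invert Eq. 12 for dt_glow, assuming a Gaussian jitter distribution J.

    R       : fraction of recorded shots that are 'lucky' (plasma light, no laser)
    t_laser : laser arrival time
    t_SDG   : camera trigger time (same time axis and units as t_laser)
    mu_J, sigma_J : mean and standard deviation of the exposure start-time jitter
    """
    # CDF of the exposure start time, evaluated at the laser arrival
    lower = norm.cdf(t_laser, loc=t_SDG + mu_J, scale=sigma_J)
    # Eq. 12 reads R = CDF(t_laser + dt_glow) - CDF(t_laser); solve for dt_glow
    upper_time = norm.ppf(lower + R, loc=t_SDG + mu_J, scale=sigma_J)
    return upper_time - t_laser

# Illustrative (assumed) numbers, in microseconds
dt = glow_time_from_lucky_shots(R=0.007, t_laser=0.0, t_SDG=-25.0,
                                mu_J=20.0, sigma_J=10.0)
print(f"estimated glow time: {dt * 1e3:.0f} ns")   # ~200 ns for these inputs
```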
Because the camera exposure window starts at \(t_{\rm start}\), which is after the arrival of the laser pulse, \(t_{\rm laser}\), the plasma light in each shot is integrated from \(t_{\rm start}\) to the end of the detectable light emission, \(t_{\rm endEmission}\). Since the decay time scale of the plasma light is orders of magnitude shorter than the characteristic camera jitter time scale, the 204 plasma-light-only shots can be treated as being roughly uniformly distributed in time within the 194 ns plasma light emission window, yielding an effective time resolution of approximately 1 ns. Every column in Fig. 7 corresponds to an integrated plasma light signal from \(t_{\rm start}\) to \(t_{\rm endEmission}\) (where \(t_{\rm start}\) shifts \(\sim 1\,\)ns further away from \(t_{\rm laser}\) from one column to the next); thus, subtracting each column from its neighboring column yields a time-resolved plasma light emission plot with a resolution of \(\sim 1\,\)ns. We apply a Gaussian noise filter to the waterfall plot in Fig. 7 (a), then subtract one column from the next to yield the time-resolved plasma light emission pattern shown in Fig. 7 (b). The results of this analysis demonstrate the ability to achieve \(\mathcal{O}(1\,\mathrm{ns})\) time-resolved imagery of the plasma light using a cost-effective GigE CMOS camera. Figure 6: A conceptual diagram illustrating the relationship between camera jitter, laser arrival time, and plasma light time scale. The blue curve represents the camera starting time jitter distribution measured and calculated by Eq. 10. The red dashed line indicates the laser arrival time. Following the laser arrival, if a plasma is formed, the plasma light continues to glow for an extended duration, \(dt_{glow}\), until the intensity is undetectable. This is represented as the purple area shown in the figure, though the scale of \(dt_{glow}\) is greatly exaggerated for illustration. When the camera exposure starts at time \(I\) (dashed green line), the captured image contains both laser and plasma light. When it starts at time \(III\) (dot-dashed green line), no light is captured in the image. However, when it starts at a propitious timing, _e.g._, at time \(II\) (solid green line), the camera exposure begins after the arrival of the laser but before the plasma light dissipates, resulting in the capture of only the plasma light without the laser light. Finally, since the camera records the light emission function projected along the camera axis, the radial emission function is retrieved with Abel inversion [44], as shown in Fig. 8 (b). ## V Results and Discussion We compare the results of our simulation work and our experimental data analysis in Fig. 8, and observe good agreement overall. Figure 8 (a) and (b) show the time evolution of the simulated and experimentally observed plasma light emission along the \(x\) axis (\(y=0\)) at an arbitrary \(z\) location within the homogeneous middle region of the plasma filament, respectively. The simulations used to generate Fig. 8 (a) are described in Section II, and the data analysis method leading to Fig. 8 (b) is described in Section IV B. Figure 8 (a) and (b) are normalized for comparison. Figure 8 (c) shows the photon emission density from Fig. 8 (a) and (b) at the center location (\(x=0,y=0\)) as a function of time. We see good agreement, including the approximate time of the second peak in brightness between 75 and 100 nanoseconds.
There is a slight discrepancy in the first 10 ns, where the data shows a sharp rise from zero, and the simulation immediately yields a high photon emission density. This is due to the fact that the model we used in our simulation does not take into account the atomic decay lifetime of the excited atoms, which leads to a slight delay between the formation of the plasma and the onset of photon emission. The lifetime of the \(4p\) state is 3.975 ns and the \(3d\) state is 15.696 ns [35], which qualitatively matches the delayed peak observed at 13.5 ns. Figure 8 (d) compares the full photon emission density profile along \(x\) at a few different times, 10, 40, and 80 ns. We again see generally good agreement, though the data does not seem to capture the \(\sim 400\,\mu\)m-wide diffuse region of light at later times. This is likely an artifact of our limited signal-to-noise ratio in the experimental data, and the noise level can be observed in Fig. 8 (c) as well after \(\sim 100\) ns. Lastly, Figure 9 (a) and (b) show the experimental results of a laser energy scan and a gas pressure scan, respectively. The data points of each plot correspond to the peak photon emission observed in the _time-integrated_ (_i.e._ the entire plasma light lifetime falls within the camera integration time), spatially resolved data (average of 100 shots). The lines correspond to an analytic prediction of the temporally integrated peak photon emission, \(\Gamma\), using Eq. 6. Despite the complexity of the plasma light emission process, the experimental data shows remarkable agreement with our simple prediction of the peak photon density scaling as a function of the initial plasma parameters, \(n_{0}\) and \(T_{0}\). Figure 8: Light emission from the plasma as a function of radius and time from (a) simulation and (b) measurement. Measurements were taken at \(z=1\) m from the axicon. (c) shows the temporal evolution at \(r=0\) with simulation in red and experimental data in blue. The shaded region is the statistical temporal measurement error. (d) shows radial lineouts at 10, 40, and 80 ns; solid lines are experiment and dashed lines simulation. Figure 7: Each column in (a) represents a single image (shot) containing plasma light but without a laser signal. The images have been summed over the laser/plasma axial dimension. The columns are sorted and aligned based on their total intensity within the region of interest. Each image integrates from \(t=t_{start}\) to \(t_{EndEmission}\). In the shot with the highest intensity (leftmost column), \(t_{start}\sim t_{laser}\); whereas in the rightmost column, \(t_{start}\sim t_{EndEmission}\). (b) shows the result after a Gaussian (low-pass) filter is applied to (a) and each column is subtracted from the next column, yielding a time-resolved (spatially integrated) photon emission profile with a temporal resolution of \(\sim 1\) ns. ## VI Conclusions We have demonstrated an experimental understanding of the decay process of the plasma light emitted from a thin, laser-ionized Helium plasma source, supported by numerical simulations. We used a three-step model to simulate the photon emission pattern, which showed that electron-neutral collisional excitation dominates over electron-ion recombination. The photon emission pattern was measured in the experiment from a \(6.9\times 10^{17}\) cm\({}^{-3}\), \(\sim 30\,\mu\)m-wide plasma and agrees well with the simulation.
We constructed an analytic model to predict the scaling of the time-integrated, on-axis light emission with the initial plasma density and temperature according to Eq. 6, and we saw the expected scaling in the experiment. We also presented a novel statistical approach for measuring the temporal evolution of plasma light with nanosecond resolution using a cost-effective GigE CMOS machine vision camera. By leveraging the prompt scattered laser light as a fiducial signal, we quantified the exposure timing jitter distribution of the camera and measured the observable plasma light lifetime at \(194\pm 14\) ns. Furthermore, by sorting and applying image processing techniques, we reconstructed the time-resolved plasma light emission profile with nanosecond-level precision. This statistical method is generalizable and offers new diagnostic possibilities in PWFA experiments and related applications where cost-effective diagnostics are a necessity. To prove effective, there must exist an appropriate temporal fiducial signal (e.g. prompt laser light) and the detector's timing jitter distribution must exhibit statistical repeatability. In PWFA experiments, these two requirements are often met due to the presence of a laser and an electron beam, which can serve as reliable temporal fiducial signals. This technique allows commonly used sensors to achieve measurements previously exclusive to expensive gated sensors (_e.g._ gated cameras or gated spectrometers) on a statistical basis, giving inexpensive GigE CMOS cameras access to nanosecond dynamics. For example, this technique could be applied to Stark broadening of emission lines in the plasma glow [45] detected by a spectrometer; a CMOS camera equipped with a wide-angle lens can capture the formation of a laser-ionized meter-scale plasma. Some potential applications in PWFA include imaging the dissipation of the plasma wake [46], investigating the plasma heating mechanism by an electron beam for laser-electron alignment purposes [47], and measuring the recovery time of a plasma-wakefield accelerator [48]. Finally, we come to two more significant conclusions regarding the work presented here. First, our results enhance our confidence in the validity of the plasma formation models. In particular, we have confirmed that the relatively inexpensive SSF simulation is able to accurately predict the laser ionization process, making it a valuable tool in the iterative plasma source design process. Second, our advances in understanding of the readily accessible plasma light emission have enhanced its utility as a diagnostic tool. Numerous studies have shown that the electron temperature of the PWFA-like plasma source impacts a wide range of applications [49; 50; 51], and the methods described in our work can make it possible to design and diagnose experiments that aim to study such effects with significantly greater confidence. Figure 9: _Time-integrated_ peak photon emission as a function of (a) laser energy and (b) gas pressure. Each data point represents the average result of 100 measurements. The solid lines show the calculated quantity from Eq. 6, where \(T_{e}\) is simulated using PIC simulations and \(n_{0}\) is assumed to correspond to a fully ionized gas at the given pressure. The purple dot-dashed line in (a) shows the laser energy at which 99% ionization occurs, approximately 147 mJ.
The purple dot-dashed line in (b) shows the curve corresponding to the calculation at which 99% ionization occurs, below which the gas is not expected to be fully ionized. The analytic formula agrees well with the data when the plasma is fully ionized. ###### Acknowledgements. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics under Award No. DE-SC0017906 and the National Science Foundation under Grant Number PHY-2047083. ## Appendix A Excitation and recombination model Detectable visible/NIR photons are emitted by transitions in Helium between the \(n=3\) and \(n=2\) energy levels. Among these transitions, the \(3d-2p\) transition exhibits the highest rate and the most persistent atomic spectral line of neutral Helium [35]. Consequently, our focus is on processes that populate the \(3d\) state. To populate the \(3d\) state, electrons can transition from \(p\) or \(f\) states. Although the \(4f-3d\) transition has a higher transition rate than the \(4p-3d\) transition, populating the \(f\) states requires a higher excitation energy, which corresponds to a significantly smaller collisional cross-section. Therefore, in a collisional excitation scenario, the most probable transition from the ground state that will yield a visible/NIR photon is the sequence \(1s-4p-3d-2p\). On the other hand, in plasma electron-ion recombination, it takes less energy to capture a free electron into a highly excited state [52]. As a result, most recombination events result in highly excited states, thereby easily populating the \(4f\) state. Therefore, in a recombination and de-excitation process, the most probable transition leading to a visible/NIR photon is \(4f-3d-2p\). ## Appendix B Temporally integrated peak photon emission density The analytic formalism for the temporally integrated peak photon emission is derived by integrating \(n_{e}<\nu_{en}>\) from \(t=0\) to \(t=\infty\) with a few key approximations and assumptions. Recall \(<\nu_{en}>\) is given by Eq. 4. We note that the peak photon emission occurs at \(\vec{r}\simeq 0\) and for simplification, we drop the notation \(\vec{r}\simeq 0\). To calculate the integral in Eq. 4, we first plot the electron-impact excitation cross-section from Ref. [43] in Fig. 10, and fit it with the following function: \[\sigma(K)_{1s-4p}=A(K-\kappa-K_{th})\exp(-\frac{K}{K_{w}})+\psi, \tag{11}\] where \(A=2.1\times 10^{-4}\ \mathrm{cm}^{2}\), \(\kappa=34.2\ \mathrm{eV}\), \(K_{w}=26\ \mathrm{eV}\), and \(\psi=7.5\times 10^{-3}\ \mathrm{cm}^{2}\) are the free parameters found by the fit. The integral in Eq. 4 becomes \[I_{exc}\approx A\int_{K_{th}}^{\infty}\exp(-\frac{K}{k_{B}T_{e}(t)})\left((K-\kappa)\exp(-\frac{K}{K_{w}})+\psi\right)K\,dK. \tag{12}\] Because the electron temperature during the plasma glowing process (as shown in Fig. 3 (b)) is orders of magnitude less than the initial temperature, we make the following approximations: \(K_{th}\gg k_{B}T_{e}\), \(\kappa\gg k_{B}T_{e}\), and \(K_{w}\gg k_{B}T_{e}\). The integral now becomes \[I_{exc}\approx Ak_{B}T_{e}(t)\exp(-\frac{K_{th}}{k_{B}T_{e}(t)})(\psi K_{th}-\sigma_{1s-4p}), \tag{13}\] where \(\sigma_{1s-4p}=\frac{A}{K_{w}^{2}}\exp(\frac{K_{th}}{K_{w}})K_{th}^{4}\). Now, we plug the result of the integral (Eq. 13) back into Eq. 4, \[n_{e}(t)<\upsilon_{en}(t)>\approx A^{\prime}n_{e}(t)n_{n}(t)(k_{B}T_{e}(t))^{-3/2}k_{B}T_{e}(t)\exp(-\frac{K_{th}}{k_{B}T_{e}(t)}), \tag{14}\] where \(A^{\prime}\) is a collective constant.
To calculate the temporally integrated photon emission, we integrate Eq. 14 from \(t=0\) to \(t=\infty\), \[\int_{0}^{\infty}n_{e}(t)<\upsilon_{en}(t)>dt\approx\int_{0}^{\infty}A^{\prime}n_{e}(t)n_{n}(t)(k_{B}T(t))^{-1/2}\exp(-\frac{K_{th}}{k_{B}T(t)})dt. \tag{15}\] To continue the calculation, a few assumptions are made: \(n_{n}(t)=n_{0}\), \(k_{B}T_{e}(t)\approx k_{B}T_{e0}e^{-t/\tau_{d}}\), \(n_{e}(t)\approx n_{0}e^{-t/\tau_{d}}\), where \(\tau_{d}\) is the decay lifetime and \(T_{e0}\) is the plasma electron temperature before the decay. The first assumption is justified because the neutral particles fill in the plasma column much faster than the full expansion time scale, leading to a quasi-homogeneous background of neutral gas. The second and third assumptions are justified by the lineout in Fig. 3(b). So Eq. 15 now becomes \[\int_{0}^{\infty}A^{\prime}n_{0}^{2}e^{-t/\tau_{d}}(k_{B}T_{e0}e^{-t/\tau_{d}})^{-1/2}\exp\left(-\frac{K_{th}}{k_{B}T_{e0}e^{-t/\tau_{d}}}\right)dt\approx A^{\prime}n_{0}^{2}(k_{B}T_{e0})^{-1/2}2\tau_{d}\left(e^{-\frac{K_{th}}{k_{B}T_{e0}}}-\frac{K_{th}\sqrt{\pi}\,\mathrm{Erfc}(\sqrt{\frac{K_{th}}{k_{B}T_{e0}}})}{\sqrt{K_{th}k_{B}T_{e0}}}\right), \tag{16}\] where Erfc is the complementary error function, and when \(k_{B}T_{e0}\ll K_{th}\), Erfc\((\sqrt{\frac{K_{th}}{k_{B}T_{e0}}})\approx 0\). This simplifies Eq. 16 to \[\int_{0}^{\infty}n_{e}(t)<\upsilon_{en}(t)>dt\approx A^{\prime}n_{0}^{2}(k_{B}T_{e0})^{-1/2}2\tau_{d}e^{-\frac{K_{th}}{k_{B}T_{e0}}}. \tag{17}\] We empirically find from our simulations that the decay lifetime \(\tau_{d}\) is linearly proportional to the initial kinetic energy, as shown in Fig. 11. So Eq. 17 becomes \[\int_{0}^{\infty}n_{e}(t)<\upsilon_{en}(t)>dt\approx 2A^{\prime}n_{0}^{2}(k_{B}T_{e0})^{1/2}e^{-\frac{K_{th}}{k_{B}T_{e0}}}. \tag{18}\] Finally, we note that the observed plasma light arises almost entirely from collisions occurring after the initial rapid plasma expansion, so we plug in the temperature after the initial rapid expansion, \(T_{e0}=RT_{0}\), where we empirically find \(R=0.09\) from our simulations. Thus, we finally arrive at the following expression of proportionality for the time-integrated peak light emission \(\Gamma\): \[\Gamma\propto n_{0}^{2}\sqrt{Rk_{B}T_{0}}\exp\left(-\frac{K_{th}}{Rk_{B}T_{0}}\right). \tag{20}\]
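For completeness, the final scaling law (Eq. 6 in the main text, Eq. 20 here) is simple to evaluate; the sketch below does so with the excitation threshold entered as an approximate assumed value (about 23.7 eV for the He \(1s-4p\) transition) and the overall constant \(C\) left as the free parameter it is in the paper.

```python
import numpy as np

def peak_emission_scaling(n0, kT0_eV, C=1.0, R=0.09, K_th_eV=23.7):
    """Time-integrated on-axis emission scaling, Gamma of Eq. 6 / Eq. 20.

    n0       : initial on-axis plasma density (relative units; enters as n0^2)
    kT0_eV   : initial electron temperature [eV]
    C        : free scaling constant (fitted to data in the paper)
    R        : temperature decay ratio after the fast expansion (0.09 in the text)
    K_th_eV  : 1s-4p excitation threshold [eV] (approximate assumed value)
    """
    kT = R * np.asarray(kT0_eV, dtype=float)
    return C * np.asarray(n0, dtype=float) ** 2 * np.sqrt(kT) * np.exp(-K_th_eV / kT)

# Relative brightness when the initial temperature rises from 10 eV to 15 eV:
# the exponential factor makes Gamma extremely sensitive to T0.
print(peak_emission_scaling(1.0, 15.0) / peak_emission_scaling(1.0, 10.0))
```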
2309.09514
PanoMixSwap Panorama Mixing via Structural Swapping for Indoor Scene Understanding
The volume and diversity of training data are critical for modern deep learning-based methods. Compared to the massive amount of labeled perspective images, 360 panoramic images fall short in both volume and diversity. In this paper, we propose PanoMixSwap, a novel data augmentation technique specifically designed for indoor panoramic images. PanoMixSwap explicitly mixes various background styles, foreground furniture, and room layouts from the existing indoor panorama datasets and generates a diverse set of new panoramic images to enrich the datasets. We first decompose each panoramic image into its constituent parts: background style, foreground furniture, and room layout. Then, we generate an augmented image by mixing these three parts from three different images, such as the foreground furniture from one image, the background style from another image, and the room structure from the third image. Our method yields high diversity since there is a cubical increase in image combinations. We also evaluate the effectiveness of PanoMixSwap on two indoor scene understanding tasks: semantic segmentation and layout estimation. Our experiments demonstrate that state-of-the-art methods trained with PanoMixSwap outperform their original setting on both tasks consistently.
Yu-Cheng Hsieh, Cheng Sun, Suraj Dengale, Min Sun
2023-09-18T06:52:13Z
http://arxiv.org/abs/2309.09514v2
# PanoMixSwap - Panorama Mixing via Structural Swapping for Indoor Scene Understanding ###### Abstract The volume and diversity of training data are critical for modern deep learning-based methods. Compared to the massive amount of labeled perspective images, 360\({}^{\circ}\) panoramic images fall short in both volume and diversity. In this paper, we propose PanoMixSwap, a novel data augmentation technique specifically designed for indoor panoramic images. PanoMixSwap explicitly mixes various background styles, foreground furniture, and room layouts from the existing indoor panorama datasets and generates a diverse set of new panoramic images to enrich the datasets. We first decompose each panoramic image into its constituent parts: background style, foreground furniture, and room layout. Then, we generate an augmented image by mixing these three parts from three different images, such as the foreground furniture from one image, the background style from another image, and the room structure from the third image. Our method yields high diversity since there is a cubical increase in image combinations. We also evaluate the effectiveness of PanoMixSwap on two indoor scene understanding tasks: semantic segmentation and layout estimation. Our experiments demonstrate that state-of-the-art methods trained with PanoMixSwap outperform their original setting on both tasks consistently. The website for this paper can be found at [https://yuchenghsieh.github.io/PanoMixSwap](https://yuchenghsieh.github.io/PanoMixSwap). ## 1 Introduction Panoramic images have become increasingly popular in indoor scene understanding tasks because they provide a comprehensive 360\({}^{\circ}\) view of a specific room. With the widespread availability of 360\({}^{\circ}\) cameras, generating panoramic images has become more convenient. This inspired the development of various indoor panoramic datasets such as Stanford2D3D [2], Matterport3D [3], PanoContext [4] and Structured3D [5], as well as the emergence of related tasks such as semantic segmentation, layout estimation, and depth estimation. These tasks leverage the unique characteristics of indoor panoramic images to enable a more holistic and immersive understanding of indoor environments. Despite the availability of indoor panoramic datasets, these images are limited in volume and diversity compared to perspective images. For example, even Stanford2D3D [], one of the largest real-world indoor panoramic datasets, contains only 1,413 panoramic images across 270 scene layouts. This scarcity of data presents difficulties in training models that require both robustness and accuracy. To address this issue, data augmentation techniques are often employed to artificially expand the dataset and enhance the diversity of training samples, thereby mitigating the effects of limited data availability. Data augmentation in panoramic images poses unique challenges compared to traditional image data augmentation methods since the inherent structure and layout of panoramic images must be preserved during augmentation (_e.g_. for indoor panoramic images, ceilings must be on top of walls and floors). Some traditional data augmentation techniques, such as random cropping and free-angle rotation, may not be suitable for panoramic images as they can disrupt the intrinsic structure. This underscores the importance of developing novel and specialized data augmentation techniques for panoramic images. 
Current panoramic augmentations are either traditional methods that can preserve the panoramic formats, such as horizontal rotation and flipping, or methods specifically designed for panoramic images like PanoStretch proposed by Sun _et al_. []. However, these methods only work on a single image, which prevents them from combining the variability in different panoramic images as explored by other augmentation methods for perspective images (_e.g_. MixUp []). Therefore, present panoramic augmentation methods have limited capability to generate more diverse images. To address the limited diversity issue in current panoramic augmentations, we propose a novel panoramic augmentation technique called PanoMixSwap, which utilizes multiple panoramic views to augment data and take advantage of variations in different samples. By using two or more panoramic images, semantic masks, and room layouts, we can generate numerous combinations to diversify our training data. PanoMixSwap, as shown in Fig. 1, is inspired by the observation that every indoor panoramic image typically consists of three main parts: the room structure (_i.e_., layout), style of the background (including the ceiling, floor, and each wall), and the foreground furniture. We use these three main parts from three different indoor panoramic views to create a diverse set of augmented samples. Our method leverages a two-stage network to sequentially fuse the background style and foreground furniture into the chosen room layout. The resulting augmented images exhibit a wide range of diverse outputs while preserving the structure of the original panoramic images. We evaluate the effectiveness of our augmentation on two scene understanding tasks: semantic segmentation and layout estimation. By incorporating PanoMixSwap during training, we observe significantly improved performance compared to the original settings. Our key contributions to PanoMixSwap are summarized below. * We propose a novel data augmentation method PanoMixSwap for indoor panoramic images. PanoMixSwap generates cubical increased diverse images by mixing three source images while maintaining the structural integrity (_i.e_., layout). This approach addresses the issue of limited availability in the training data and enhances the variability of the augmented images. * We apply PanoMixSwap to two scene understanding tasks, semantic segmentation and layout estimation. PanoMixSwap consistently improves results compared to the original training setting. ## 2 Related Works **Data Augmentations.** In the field of computer vision, the size of the dataset plays a crucial role in determining the final performance of the model; hence data augmentation is an important technique for expanding training datasets. Existing data augmentation methods can be categorized into two types: (1) those that use only one training sample to derive one augmented sample and (2) those that use two or more training samples to derive one augmented sample, also called mixup. The first type of augmentation consists of a considerable amount of work, with traditional methods such as random cropping, image mirroring, and color jittering [] commonly used for 2D images, as well as more advanced approaches like AutoAugment [], [] and GAN-based methods [], []. Similarly, for panoramic images, horizontal rotation and flipping techniques and Panostretch [] introduced in Section 1 are widely used in panoramic-related tasks. 
On the other hand, the second type of augmentation, \(i.e.\), mixup, has been widely studied in 2D image processing, with several works proposing techniques for linearly interpolating two input data points along with their corresponding one-hot labels [], [], [], [], [], [], [], [], [], []. For example, Zhang \(et\)\(al.\)[] generate virtual training examples using mixup by linearly interpolating data points and their one-hot labels. Yun \(et\)\(al.\)[] introduce a random-cut rectangular region technique, where a portion of the image is removed and replaced with a patch obtained from another image. Mixup techniques have also been applied in the field of 3D point clouds [], []. However, to the best of our knowledge, no existing work currently applies the concept of mixup to panoramic images, which serves as a key factor motivating our proposed approach, PanoMixSwap. **360\({}^{\circ}\) perception.** The popularity of 360\({}^{\circ}\) cameras has recently surged, leading to an increased interest in vision tasks related to panoramic images []. Quirectangular projections (ERPs) are commonly used to represent and manipulate the wide field of view captured by these cameras. ERPs allow all captured information to be preserved in a single image. However, they also introduce distortion that can impede the performance of traditional convolution layers designed for perspective images. There has been extensive research on spherical convolution layers [], [], [], [], [], [] that are aware of these distortions. To use 360\({}^{\circ}\) panoramic images with conventional neuronal networks (CNNs) that have a wide range of available pre-trained models, multiple perspective projections are employed to project the image onto multiple planar images. However, this method results in a loss of information due to the projection process, which limits the field of view (FOV). Furthermore, generating planar images from 360\({}^{\circ}\) panoramic images requires additional computational resources and time, which increases exponentially with higher-resolution images. To address the problems associated with projection-related works, several newer methods propose different ways of padding [], [] and sampling [] image boundaries to remove inconsistencies in panoramic images. The icosahedron mesh [], [] provides a versatile and effective method for representing 3D shapes and scenes in computer vision, particularly for tasks that involve spherical or panoramic data. ## 3 PanoMixSwap The commonly-used panoramic data augmentations mostly take only one sample as input. However, the diversity of this kind of one-to-one mapping is rather limited. We propose PanoMixSwap to mix three panoramic views into one, which is as clean and high-fidelity as the source views. Thus, we can generate more diverse training samples which are beyond the conventional panoramic augmentation. ### Overview Let \(\mathbf{S}\) be a training sample consisting of an RGB image \(I\in\mathbb{R}^{H\times W\times 3}\), a semantic mask \(M\in[0,1]^{H\times W\times C}\) in the form of one-hot vector with \(C\) classes, and layout coordinates \(L\in\mathbb{R}^{T\times 2\times 2}\) recording the \(T\)-walls room corner junctions on floor and ceiling. 
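To make the notation concrete, a minimal sketch of the sample \(\mathbf{S}=(I,M,L)\) and of the two-stage mixing flow described in the following subsections is given below; all names, including the two fusing-block callables, are hypothetical placeholders rather than the released implementation:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PanoSample:
    """One training sample S = (I, M, L) in the notation above."""
    image: np.ndarray      # I: (H, W, 3) RGB equirectangular panorama
    sem_mask: np.ndarray   # M: (H, W, C) one-hot semantic mask
    layout: np.ndarray     # L: (T, 2, 2) floor/ceiling corner junctions of a T-wall room

def pano_mix_swap(style: PanoSample, structure: PanoSample, furniture: PanoSample,
                  style_fusing_block, furniture_fusing_block) -> PanoSample:
    """High-level PanoMixSwap flow: fuse the background style of `style` into the room
    layout of `structure`, then transfer the furniture of `furniture` onto the result."""
    styled_structure_img = style_fusing_block(style, structure.layout)
    return furniture_fusing_block(furniture, structure.layout, styled_structure_img)
```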
An output _augmented sample_ by PanoMixSwap is the combination of three main parts from three samples-- room layout structure of _structure sample_\(\mathbf{S}_{\text{rs}}\), background style of _style sample_\(\mathbf{S}_{\text{bs}}\), and foreground furniture setups of _furniture sample_\(\mathbf{S}_{\text{fs}}\). An overview pipeline of PanoMixSwap is illustrated in Fig. 1. We first generate a _styled structure image_\(I_{\text{ss}}\) by mixing the background appearance from \(\mathbf{S}_{\text{bs}}\) and the room layout \(L_{\text{rs}}\) from \(\mathbf{S}_{\text{rs}}\): \[I_{\text{ss}}=\mathbf{StyleFusingBlock}(\mathbf{S}_{\text{bs}},L_{\text{rs}})\, \tag{1}\] where the **StyleFusingBlock** is detailed in Sec. 3.2. We finally can generate the augmented sample \(\mathbf{S}_{\text{aug}}\) by aligning the furniture setup of \(\mathbf{S}_{\text{fs}}\) with the room layout \(L_{\text{rs}}\) and then changing the background style using \(I_{\text{ss}}\): \[\mathbf{S}_{\text{aug}}=\mathbf{FurnitureFusingBlock}(\mathbf{S}_{\text{fs}},L_ {\text{rs}},I_{\text{ss}})\, \tag{2}\] where **FurnitureFusingBlock** is detailed in Sec. 3.3. ### Style Fusing Block There are two requirements about the generated styled structure image \(I_{\text{ss}}\): _i)_ the layout structure should be the same as the room layout \(L_{\text{rs}}\) from structure sample \(\mathbf{S}_{\text{rs}}\) and _ii)_ the background appearance should be similar to the style sample \(\mathbf{S}_{\text{bs}}\) with all the furniture removed. To achieve this, we employ a semantic conditioned generative model, SEAN [5]. Figure 1: **Pipeline of PanoMixSwap.** PanoMixSwap involves three major inputs: style sample, structure layout, and furniture sample. PanoMixSwap is composed of two blocks: Style Fusing Block and Furniture Fusing Block. The Style Fusing Block generates a foreground-free styled structure image that fuses the background style from the style image and the room layout from structure layout. Furniture Fusing Block transforms furniture from the furniture image onto the styled structure image to produce the final augmented image and semantic mask. Specifically, given a content semantic mask, SEAN generates the appearance of each semantic region based on the corresponding semantic region from a reference image. We use \(L_{\text{rs}}\) to generate the content semantic mask consisting of floor, ceiling, and walls where each wall is assigned a unique class. The reference semantic mask is generated in the same way using \(L_{\text{bs}}\). To prevent generating the foreground, the reference semantic mask is further covered by an additional 'others' class from the furniture and objects classes in \(M_{\text{bs}}\). We assume the number of walls is the same in \(L_{\text{rs}}\) and \(L_{\text{bs}}\), so the walls can be one-to-one corresponding. An overview of the Style Fusing Block is illustrated in (Fig. 2). ### Furniture Fusing Block The purpose of the Furniture Fusing Block is to fuse the furniture sample \(\mathbf{S}_{\text{fs}}\) with the room layout \(L_{\text{rs}}\) and the styled structure image \(I_{\text{ss}}\). To this end, we first align the image \(I_{\text{fs}}\) and the semantic mask \(M_{\text{fs}}\) from their original layout \(L_{\text{fs}}\) to the target layout \(L_{\text{rs}}\). The aligned image and mask are denoted as \(I_{\text{fs}\rightarrow\text{rs}}\) and \(M_{\text{fs}\rightarrow\text{rs}}\). 
The background pixels of \(I_{\text{fs}\rightarrow\text{rs}}\) are then replaced by \(I_{\text{ss}}\) to change the background style. The final _augmented sample_ is: \[\mathbf{S}_{\text{aug}}=\{mI_{\text{fs}\rightarrow\text{rs}}+(1-m)I_{\text{ ss}},\;M_{\text{fs}\rightarrow\text{rs}},\;L_{\text{rs}}\}\, \tag{3}\] where \(m\) is the foreground mask computed from \(M_{\text{fs}\rightarrow\text{rs}}\). Below are the details of the alignment process. Recap that we assume the number of walls is the same in \(L_{\text{fs}}\) and \(L_{\text{rs}}\), and they are one-to-one corresponding. We depict the overall process in Fig. 3. We first use the wall-wall boundary annotated in \(L_{\text{fs}}\) to split the image columns of \(I_{\text{fs}}\) into multiple _image column groups_. Each image column group is then processed by Horizontal Alignment Block and Vertical Alignment Block sequentially. In the Horizontal Alignment Block, we use PanoStretch [] to stretch each image column group from its original width to the corresponding wall width in \(L_{\text{rs}}\). In the Vertical Alignment Block, we apply backward warping to each image column to align with the ceiling-wall and floor-wall intersection in \(L_{\text{rs}}\). The source and destination coordinates for the backward warping are computed as follows. Let \(r\) be the destination row Figure 2: **Style Fusing Block.** Style Fusing Block is mainly composed of Style Encoder and Style Generator. The Style Encoder is responsible for extracting the embedded style vector for each semantic region of the style image. The Style Generator creates a foreground-free styled structure image by generating the appearance of each semantic region based on its corresponding style embedded vector. index of an image column; the source index is computed as \[\text{Source}(r)=\begin{cases}a_{\text{src}}-\alpha(a_{\text{dst}}-r),&\text{if }r<a_{\text{dst}}\\ b_{\text{src}}+\beta(r-b_{\text{dst}}),&\text{if }r>b_{\text{dst}}\\ a_{\text{src}}+(b_{\text{src}}-a_{\text{src}})\,\frac{(r-a_{\text{dst}})}{(b_ {\text{dst}}-a_{\text{dst}})},&\text{otherwise}\end{cases}\,, \tag{4}\] where \(a,b\) are the index of the ceiling-wall and floor-wall intersection, \(\alpha,\beta\) are hyperparameters. The equations in Eq. 4 correspond to the warping regions of ceiling, floor, and wall between source and destination. The image column groups are concatenated to form the aligned image \(I_{\text{fs}\to\text{rs}}\). Semantic mask \(M_{\text{fs}}\) is processed in the same way to get \(M_{\text{fs}\to\text{rs}}\). ## 4 Experiments We present the implementation details and visualizations of our PanoMixSwap in Section 4.1. We showcase the effectiveness of our novel data augmentation technique on indoor \(360^{\circ}\) semantic segmentation task in Section 4.2 and layout estimation task in Section 4.4. ### PanoMixSwap **Implementation Detail.** We focus on four-wall indoor panoramic images for simplicity. To train the encoder-generator model discussed in Sec. 3.2, we adopt a similar pipeline as Figure 3: **Furniture Fusing Block.** Horizontal Alignment Block takes each furniture column group to produce the width-aligned column group that matches the wall width of the corresponding styled structure column group using PanoStretch [12]. Vertical Alignment Block views both the width-aligned furniture column group and the styled structure column group into ceiling, wall, and floor parts. 
Then the final augmented column group is generated by backward-warping the three parts of the width-aligned furniture column group (denoted the aligned furniture column group) to match the heights of the corresponding three parts of the styled structure column group, and by copying the background pixels of the styled structure column onto the aligned furniture column group. We repeat the process \(T\) times to get the final augmented image. proposed in SEAN [5] for training on both the Structured3D and Stanford2D3D datasets. Specifically, we set the input image size to \(H=256\) and \(W=512\), use the Adam optimizer with hyperparameters \(\beta_{1}=0.5\) and \(\beta_{2}=0.999\), and set the learning rate to 2e-4. We use a batch size of 2 and train the model for 60 epochs on a single NVIDIA GTX 1080 Ti GPU. The inference run-time for an image is about 2 seconds, so we apply our augmentation in an offline manner for efficiency. **Visualizations.** We illustrate the inputs and outputs of PanoMixSwap in Fig. 4. Our method can generate a high-quality image by incorporating the background style, room layout structure, and furniture information from three different input samples. We use high-quality augmented images to enrich the training set of different tasks. For instance, semantic segmentation training data can now be augmented to different room structures and background styles; we can also synthesize different room styles and furniture setups for a given ground-truth room layout. Figure 4: **Visualization of the results from our PanoMixSwap.** The augmented image (4th column) by our novel PanoMixSwap is a fusion of the room layout from the structure image (1st column), the background style from the style image (2nd column), and the furniture from the furniture image (3rd column). The images in the 1st and 2nd rows are from Structured3D [5] while the images from the 3rd row are from Stanford2D3D [1]. ### Semantic Segmentation **Model, Dataset and Evaluation.** In the semantic segmentation task, we use HoHoNet [5] and PanoFormer [1], which are two state-of-the-art 360\({}^{\circ}\) semantic segmentation models. We evaluate PanoMixSwap's ability to handle real-world and synthetic data by conducting experiments on two datasets: Stanford2D3D [1] and Structured3D [5], which respectively represent real-world and virtual-world environments. For Stanford2D3D [1], we use fold 5a and fold 5b for validation and the remaining folds for training, following prior works. As for Structured3D [5], we follow the official training, testing, and validation setting, where there are 3,000 scenes for training, 250 scenes for validation, and 250 scenes for testing. We employ the class-wise mean intersection over union (mIoU) and mean accuracy (mACC) for semantic segmentation evaluation. **Implementation Detail.** In accordance with the original HoHoNet setting [5], we adopt similar implementation settings. For low resolution input, a shallow U-Net with planar CNN is chosen, and the network is trained for 60 epochs on Structured3D [] and 300 epochs on Stanford2D3D [], using a batch size of 16 and a learning rate of 1e-3 with polynomial decay of factor 0.9. For high resolution input, ResNet-101 [] is used as the backbone, and the network is trained for 60 epochs on both Structured3D [] and Stanford2D3D [], with a batch size of 4 and a learning rate of 1e-4 with polynomial decay of factor 0.9.
For both low resolution and high resolution images, Adam [] is employed as the optimizer for cross entropy loss. In the case of PanoFormer [], we use a batch size of 4 and an input resolution of 256*512 to train for 60 epochs. Additionally, Adam [] is employed as the optimizer for optimizing the cross entropy loss. To apply PanoMixSwap, we first generate an augmented dataset with the same quantity as the original training data and combine the augmented dataset and original training data into a single data set. **Quantitative Results.** The results of experiments on Stanford2D3D [], as shown in the upper section of Table 1, reveal that the inclusion of our augmentation technique during training leads to significantly higher mIoU and mACC scores on both HoHoNet and PanoFormer compared to the original work without PanoMixSwap, across all models and resolutions. Notably, in high resolution settings, training with PanoMixSwap yields a remarkable improvement of 4.02% in mIoU and 2.43% in mACC for HoHoNet. Based on these compelling results, it is evident that PanoMixSwap technique consistently enhances the mIoU and mACC in real-world indoor panoramic scenarios across different models and resolutions. In addition to real-world scenarios, we also evaluate our augmentation in virtual environment settings using Structured3D dataset [], as presented in the lower part of Table 1. The results demonstrate that training with PanoMixSwap leads to higher mIoU and mACC scores in both low and high resolution settings, further substantiating the effectiveness of our technique in virtual indoor panoramic scenarios. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Dataset & Model & Image Size & PanoMixSwap & mIoU(\%) & mACC(\%) \\ \hline \multirow{8}{*}{Stanford2D3D} & \multirow{4}{*}{HoHoNet} & \(64\times 128\) & - & 31.67 & 46.27 \\ & & & ✓ & **34.60** & **47.76** \\ \cline{3-6} & & & - & 36.13 & 50.25 \\ \cline{2-6} & & & ✓ & **41.25** & **52.50** \\ \cline{2-6} & & & - & 52.00 & 65.00 \\ \cline{2-6} & & & ✓ & **56.02** & **67.43** \\ \cline{2-6} & PanoFormer & & - & 42.20 & 61.03 \\ \cline{2-6} & & & ✓ & **42.94** & **62.14** \\ \hline \multirow{8}{*}{Structured3D} & \multirow{4}{*}{HoHoNet} & \(64\times 128\) & - & 61.11 & 71.94 \\ & & ✓ & **62.50** & **73.64** \\ \cline{1-1} \cline{2-6} & & & - & 70.07 & 78.91 \\ \cline{1-1} \cline{2-6} & & ✓ & **72.40** & **81.00** \\ \cline{1-1} \cline{2-6} & & & - & 80.80 & 87.98 \\ \cline{1-1} \cline{2-6} & & ✓ & **81.96** & **88.52** \\ \hline \hline \end{tabular} \end{table} Table 1: **Quantitative comparison on semantic segmentation.** Our novel PanoMixSwap significantly improves two state-of-the-art semantic segmentators, HoHoNet and PanoFormer [], on Stanford2D3D [] and Structured3D []. ### Layout Estimation **Model, Dataset and Evaluation.** We utilize HorizonNet [] and LGT-Net [] to test the effectiveness of PanoMixSwap on cuboid layout estimation task, and use the dataset introduced in LayoutNet by Zou _et al_. to estimate cuboid layout. This dataset comprises 514 annotated cuboid room layouts from PanoContext [] and 552 annotated cuboid room layouts from Stanford2D3D []. We follow train/valid/test split in layoutNet []. For evaluation, we use standard evaluation metrics proposed by Zou _et al_. in cuboid layout estimation, including intersection of union of 3D room layout (3DIoU), corner error (CE), and pixel error (PE). 
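For reference, a minimal NumPy sketch of one common formulation of the metrics reported in Table 1 (class-wise mIoU and mACC over integer-labeled masks) is shown below; it is illustrative and not the evaluation code of the cited works:

```python
import numpy as np

def miou_and_macc(pred: np.ndarray, gt: np.ndarray, num_classes: int):
    """Class-wise mean IoU and mean per-class accuracy for integer-labeled masks."""
    ious, accs = [], []
    for c in range(num_classes):
        pred_c, gt_c = pred == c, gt == c
        if gt_c.sum() == 0:              # skip classes absent from the ground truth
            continue
        inter = np.logical_and(pred_c, gt_c).sum()
        union = np.logical_or(pred_c, gt_c).sum()
        ious.append(inter / union if union > 0 else 0.0)
        accs.append(inter / gt_c.sum())
    return float(np.mean(ious)), float(np.mean(accs))

# Toy example on a 2x3 mask with 3 classes
pred = np.array([[0, 1, 2], [0, 1, 1]])
gt   = np.array([[0, 1, 2], [0, 2, 1]])
print(miou_and_macc(pred, gt, num_classes=3))   # (~0.72, ~0.83)
```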
**Implementation Detail.** We follow all of the training settings in HorizonNet [], which employs a learning rate of 3e-4, and a batch-size of 24 for 300 epochs. In addition, we utilize the training split of Stanford2D3D [] and PanoContext [] as training data. As for LGT-net we train for 1,000 epochs with a learning rate of 1e-4 and a batch-size of 6. We follow the combined dataset scheme suggested by Zou _et al_., which involved using the entire PanoContext [] and the training split of Stanford2D3D [] as the training data in LGT-net []. For both HorizonNet and LGT-net, we employ Adam optimizer with,, and PanoStretch during training. In training with PanoMixSwap, We apply image augmentation only to the images in the Stanford2D3D, and allocate half of the batch size to augmented data and the other half to training data. **Quantitative Results.** Table 4.3 presents a comparison between the performance of using PanoMixSwap during training and the original setting on Stanford2D3D. The results show that utilizing PanoMixSwap during training outperforms the original setting in 3DIoU, CE on HorizonNet and 3DIoU, PE on LGT-Net []. Especially on HorizonNet [], training with PanoMixSwap yields a significant improvement of 3.1% in 3DIoU. This signifies that PanoMixSwap has the capability to diversify the training room style and furniture setup, thereby enhancing the overall performance. ### Comparison Between SOTA Augmentation This section provides a comprehensive comparison between PanoMixSwap and 360 state-of-the-art data augmentation - PanoStretch proposed by Sun _et al_. on semantic segmentation task and layout estimation task. The comparison results of semantic segmentation and layout estimation are shown in Table. 3 and Table. 4, respectively. The results of the above two tables show that utilizing PanoMixSwap outperforms PanoStretch in above two tasks. \begin{table} \begin{tabular}{c c c c c} \hline \hline Model & PanoMixSwap & 3DIoU(\%) & CE(\%) & PE(\%) \\ \hline \multirow{2}{*}{HorizonNet} & - & 83.51 & 0.62 & **1.97** \\ & ✓ & **86.61** & **0.61** & 1.99 \\ \hline \multirow{2}{*}{LGT-Net} & - & 86.03 & 0.63 & 2.11 \\ & ✓ & **86.96** & 0.63 & **2.04** \\ \hline \hline \end{tabular} \end{table} Table 2: **Quantitative comparison on cuboid room layout estimation.** Our PanoMixSwap can improve HorizonNet and LGT-Net on LayoutNet dataset []. ### Qualitative Comparison on Downstream Tasks Fig. 5 presents a qualitative comparison of layout estimation and semantic segmentation. We use HoHoNet [] as semantic segmentator and HorizonNet [] as layout estimator. More qualitative results can be found in the supplementary materials. ## 5 Conclusion We present PanoMixSwap, a novel data augmentation method for 360\({}^{\circ}\) indoor panoramic images. PanoMixSwap aims to mix multiple panoramic images to address the issue of data scarcity in panoramic image datasets. Moreover, PanoMixSwap introduces an intuitive idea by decomposing a single indoor panoramic image into three distinct parts: foreground furniture, background style, and room layout parts. Then, it mixes multiple panoramic images by swapping these structural parts to generate diverse images. Finally, comprehensive experiments demonstrate that PanoMixSwap consistently improves state-of-the-art models on multiple 360\({}^{\circ}\) indoor scene understanding tasks. 
\begin{table} \begin{tabular}{c c c c c} \hline \hline Model & PanoStretch & PanoMixSwap & 3DIoU(\%) & CE(\%) & PE(\%) \\ \hline \multirow{3}{*}{HorizonNet} & ✓ & - & 83.88 & 0.63 & 2.00 \\ & - & ✓ & 85.15 & 0.62 & 1.98 \\ & ✓ & ✓ & **86.59** & **0.62** & **1.94** \\ \hline \multirow{3}{*}{LGT-Net} & ✓ & - & 85.98 & 0.65 & 2.11 \\ & - & ✓ & 86.60 & **0.62** & 2.06 \\ \cline{1-1} & ✓ & ✓ & **86.96** & 0.63 & **2.04** \\ \hline \hline \end{tabular} \end{table} Table 4: **Quantitative comparison between PanoMixSwap and 360 PanoStretch across the LayoutNet dataset [] on layout estimation task.** Figure 5: **Qualitative comparison on layout estimation and semantic segmentation** ## Acknowledgement This work is supported in part by Ministry of Science and Technology of Taiwan (NSTC 111-2634-F-002-022). We thank National Center for High-performance Computing (NCHC) for computational and storage resource. We especially thank Chun-Che Wu for providing invaluable guidance for our paper.
2309.06627
A Sequentially Fair Mechanism for Multiple Sensitive Attributes
In the standard use case of Algorithmic Fairness, the goal is to eliminate the relationship between a sensitive variable and a corresponding score. Throughout recent years, the scientific community has developed a host of definitions and tools to solve this task, which work well in many practical applications. However, the applicability and effectiveness of these tools and definitions become less straightforward in the case of multiple sensitive attributes. To tackle this issue, we propose a sequential framework, which allows fairness to be achieved progressively across a set of sensitive features. We accomplish this by leveraging multi-marginal Wasserstein barycenters, which extend the standard notion of Strong Demographic Parity to the case with multiple sensitive characteristics. This method also provides a closed-form solution for the optimal, sequentially fair predictor, permitting a clear interpretation of inter-sensitive feature correlations. Our approach seamlessly extends to approximate fairness, enveloping a framework accommodating the trade-off between risk and unfairness. This extension permits a targeted prioritization of fairness improvements for a specific attribute within a set of sensitive attributes, allowing for a case-specific adaptation. A data-driven estimation procedure for the derived solution is developed, and comprehensive numerical experiments are conducted on both synthetic and real datasets. Our empirical findings decisively underscore the practical efficacy of our post-processing approach in fostering fair decision-making.
François Hu, Philipp Ratz, Arthur Charpentier
2023-09-12T22:31:57Z
http://arxiv.org/abs/2309.06627v2
# A Sequentially Fair Mechanism ###### Abstract In the standard use case of Algorithmic Fairness, the goal is to eliminate the relationship between a sensitive variable and a corresponding score. Throughout recent years, the scientific community has developed a host of definitions and tools to solve this task, which work well in many practical applications. However, the applicability and effectivity of these tools and definitions becomes less straightforward in the case of multiple sensitive attributes. To tackle this issue, we propose a sequential framework, which allows to progressively achieve fairness across a set of sensitive features. We accomplish this by leveraging multi-marginal Wasserstein barycenters, which extends the standard notion of Strong Demographic Parity to the case with multiple sensitive characteristics. This method also provides a closed-form solution for the optimal, sequentially fair predictor, permitting a clear interpretation of inter-sensitive feature correlations. Our approach seamlessly extends to approximate fairness, enveloping a framework accommodating the trade-off between risk and unfairness. This extension permits a targeted prioritization of fairness improvements for a specific attribute within a set of sensitive attributes, allowing for a case specific adaptation. A data-driven estimation procedure for the derived solution is developed, and comprehensive numerical experiments are conducted on both synthetic and real datasets. Our empirical findings decisively underscore the practical efficacy of our post-processing approach in fostering fair decision-making. 1Universite de Montreal, Montreal, Quebec, Canada 2Universite du Quebec a Montreal, Montreal, Quebec, Canada [email protected], [email protected], [email protected] ## Introduction Recent media coverage has put the spotlight anew on a question that preoccupies the field of (algorithmic) fairness, namely what constitutes fairness and how to achieve it. We center our focus on group fairness, particularly on the concept of Demographic Parity fairness Calders, Kamiran, and Pechenizkiy (2009), with the objective of achieving independence between attributes and predictions while bypassing the use of labels. A point raised frequently is that only considering a single (binary or discrete) attribute is insufficient to determine whether a system is truly fair Kong (2022). Indeed studies emphasize that when a model considers only a single attribute, it overlooks subgroups defined by intersecting attributes. This notion is commonly referred as _fairness gerrymandering_Kearns et al. (2018) and it occurs "when we only look for unfairness over a small number of pre-defined groups" that are arbitrarily selected. Scholars in the field of algorithmic fairness have devised a range of methods to counter unfairness in predictions, for both regression Chzhen et al. (2020),b Gouic, Loubes, and Rigollet (2020) and classification Hardt, Price, and Srebro (2016); Agarwal, Dudik, and Wu (2019); Chiappa et al. (2020); Denis et al. (2021). However, these approaches consider fairness with respect to a single feature, making them susceptible to the criticism from above. As an example, Buolamwini and Gebru (2018) underscored this intersectional bias in Machine Learning (ML). They discovered that major face recognition algorithms exhibited preferences for recognizing men and lighter skin tones, resulting in reduced accuracy for women with darker skin tones, thereby exposing both gender and racial discrimination. 
A naive solution to this problem would be to create a single discrete feature that groups a set of sensitive variables. However, this approach has several drawbacks: _i)_ this methodology assigns similar weights to all attributes, hindering approximate fairness which allows the user to adjust the fairness constraint; _ii)_ further, this method complicates tracking procedure effects, as disentangling them from a combined variable is complex and also lacks prioritization across attributes; _iii)_ in applications, some sensitive feature might need more attention, like bias due to gender over age Macnicol (2006); Charpentier, Hu, and Ratz (2023). This prioritization bridges the gap between a status-quo of (unfair) predictions and a goal of fair predictions _w.r.t_ a set of sensitive features. Fairness considerations often leads to reduced predictive performance Menon and Williamson (2018); Chen, Johansson, and Sontag (2018), making adoption challenging. Presenting a level of unfairness could potentially facilitate its acceptance and adoption. ### Main contributions This highlights the need for a more holistic approach to consider fairness. In this article, we propose a methodology that is adaptable for optimal fair predictions involving Multiple Sensitive Attributes (MSA). More specifically: * We address the learning problem under the Demographic Parity constraint, involving MSA, by constructing multi-marginal 2-Wasserstein barycenters. Our method offers a closed-form solution, allowing us to develop an empirical data-driven approach that enhances fairness for any off-the-shelf estimators. * We rewrite the optimal fair solution into a sequential form by using the associativity of Wasserstein barycenters in univariate settings. This formulation seamlessly extends to approximate fairness achieving fairness-risk trade-off. * Our approach is demonstrated through numerical experiments on diverse datasets (both synthetic and real). It demonstrates high effectiveness in reducing unfairness while enabling clear interpretation of inter-sensitive feature correlations. We begin by introducing some necessary notation before formally presenting the problem. After deriving the main results we conduct extensive numerical experiments and illustrate the use of our methodology on a both synthetic and real-world datasets. Note that all proofs are relegated to the supplementary materials to ease the lecture of the article. ### Related work Much of our work extends earlier findings from the literature of algorithmic fairness using optimal transport, a mathematical framework for measuring distributional differences. The aim is to transform biased scores into equitable ones while minimizing their impact to uphold predictive performance. There are several methods that can broadly be classified into pre-, in- and post-processing methods. Our approach developed here falls into the latter category, as post-processing is computationally advantageous and allows a clear interpretation of the outputs. In regression, methods like [10] and [17] minimize Wasserstein distance to mitigate bias. Similarly, in classification, [13] and [14] use optimal transport for fairness. Instead of considering multiple fair attributes, [15] consider multiple fair prediction tasks through joint optimization. However, despite optimal transport's widespread use in algorithmic fairness, there is limited research on MSA and intersectional fairness. This article aims to fill this research gap. 
### Notation Consider a function \(f\) and a random tuple \((\mathbf{X},\mathbf{A})\in\mathcal{X}\times\mathcal{A}\subset\mathbb{R}^{d}\times \mathbb{N}^{r}\), with positive integers \(d\) and \(r\). We denote \(\mathcal{V}\) the space of probability measures on \(\mathcal{Y}\subset\mathbb{R}\). Let \(\nu_{f}\in\mathcal{V}\) and \(\nu_{f|\mathbf{a}}\in\mathcal{V}\) be respectively the probability measure of \(f(\mathbf{X},\mathbf{A})\) and \(f(\mathbf{X},\mathbf{A})|\mathbf{A}=\mathbf{a}\). \(F_{f|\mathbf{a}}(u):=\mathbb{P}\left(f(\mathbf{X},\mathbf{A})\leq u|\mathbf{A}=\mathbf{a}\right)\) corresponds to the cumulative distribution function (CDF) of \(\nu_{f|\mathbf{a}}\) and \(Q_{f|\mathbf{a}}(v):=\inf\{u\in\mathbb{R}:F_{f|\mathbf{a}}(u)\geq v\}\) its associated quantile function. ## 2 Background on Wasserstein barycenters This section introduces the concepts of Wasserstein barycenter from one-dimensional optimal transport theory. Further details can be found in [1, 10]. ### Wasserstein distance We consider two probability measures, \(\nu_{1}\) and \(\nu_{2}\). The _Wasserstein distance_ quantifies the minimum "cost" of transforming one distribution into the other. Specifically, the squared Wasserstein distance between \(\nu_{1}\) and \(\nu_{2}\) is defined as \[\mathcal{W}_{2}^{2}(\nu_{1},\nu_{2})=\inf_{\pi\in\Pi(\nu_{1},\nu_{2})}\mathbb{ E}_{(Z_{1},Z_{2})\sim\pi}\left(Z_{2}-Z_{1}\right)^{2}\enspace,\] where \(\Pi(\nu_{1},\nu_{2})\) is the set of distributions on \(\mathcal{Y}\times\mathcal{Y}\) having \(\nu_{1}\) and \(\nu_{2}\) as marginals. A coupling that achieves this infimum is called optimal coupling between \(\nu_{1}\) and \(\nu_{2}\). ### Wasserstein barycenter Throughout this article, we will frequently make use of _Wasserstein Barycenters_[1]. The Wasserstein barycenter finds a representative distribution that lies between multiple given distributions in the Wasserstein space. It is defined for a family of \(K\) measures \((\nu_{1},\dots,\nu_{K})\) in \(\mathcal{V}\) and some positive weights \((w_{1},\dots,w_{K})\in\mathbb{R}_{+}^{K}\). The Wasserstein barycenter (or in short \(\mathcal{W}_{2}\)-barycenter), denoted as \(\operatorname{Bar}\left\{(w_{k},\nu_{k})_{k=1}^{K}\right\}\) is the minimizer \[\operatorname{Bar}(w_{k},\nu_{k})_{k=1}^{K}=\operatorname*{argmin}_{\nu}\sum_ {k=1}^{K}w_{k}\cdot\mathcal{W}_{2}^{2}\left(\nu_{k},\nu\right)\enspace. \tag{1}\] The work in [1] shows that in our configuration the barycenter exists and a sufficient condition of uniqueness is that one of the measures \(\nu_{i}\) admits a density w.r.t. the Lebesgue measure. Our study focuses on barycentric associativity within the unidimensional space \(\mathcal{Y}\subset\mathbb{R}\). This principle asserts that the global barycenter coincides with the barycenter of barycenters. This associativity is clear in Euclidean spaces [10]: the barycenter of points \(x_{1}\), \(x_{2}\), and \(x_{3}\) in \(\mathbb{R}^{d}\) with weights \(w_{1}\), \(w_{2}\), and \(w_{3}\) aligns with the barycenter of \(x_{1,2}\) and \(x_{3}\) with weights \(w_{1}+w_{2}\) and \(w_{3}\), where \(x_{1,2}\) is the barycenter of \(x_{1}\) and \(x_{2}\). In the following proposition, we explore the relevance of this barycentric associativity, particularly for the 2-Wasserstein barycenter within the unidimensional space. It's important to note that the Wasserstein barycenter loses its associativity in a multi-dimensional framework. 
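As a numerical illustration, the univariate \(\mathcal{W}_{2}\) distance and barycenter can be computed from empirical quantile functions, and the associativity just described can be checked directly; the sketch below uses arbitrary Gaussian samples:

```python
import numpy as np

GRID = (np.arange(1000) + 0.5) / 1000   # discretization of [0, 1]

def w2_squared_1d(x, y):
    """Squared 2-Wasserstein distance between two empirical 1-D samples via quantiles."""
    return np.mean((np.quantile(x, GRID) - np.quantile(y, GRID)) ** 2)

def barycenter_quantiles(samples, weights):
    """Quantile function of the weighted W2-barycenter of 1-D empirical measures:
    the weighted average of the individual quantile functions."""
    return sum(w * np.quantile(s, GRID) for w, s in zip(weights, samples))

rng = np.random.default_rng(0)
nu1, nu2, nu3 = rng.normal(0, 1, 5000), rng.normal(2, 1, 5000), rng.normal(5, 2, 5000)
w = np.array([0.5, 0.3, 0.2])

print(w2_squared_1d(nu1, nu2))   # ~4 for these two unit-variance Gaussians (mean shift of 2)

q_all = barycenter_quantiles([nu1, nu2, nu3], w)               # overall barycenter
q12 = barycenter_quantiles([nu1, nu2], w[:2] / w[:2].sum())    # barycenter of nu1, nu2
q_assoc = w[:2].sum() * q12 + w[2] * np.quantile(nu3, GRID)    # barycenter of barycenters
print(np.max(np.abs(q_all - q_assoc)))   # 0 up to floating-point error (associativity)
```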
**Proposition 1** (Associativity of the \(\mathcal{W}_{2}\)-barycenter): _Consider a collection of positive integers \(K_{1},K_{2},\dots,K_{r}\), where their sum is denoted as \(K=K_{1}+K_{2}+\dots+K_{r}\). Let the sets be defined as follows:_ \[B_{1}=(w_{1,k},\nu_{1,k})_{k=1}^{K_{1}},\dots,B_{r}=(w_{r,k},\nu_{r,k})_{k=1}^{ K_{r}}\enspace,\] _where \(\{w_{i,k}\}_{i,k}\) are positive and non-zero weights summing to 1 and \(\{\nu_{i,k}\}_{i,k}\) represent univariate measures. In this context, the overall Wasserstein barycenter \(\operatorname{Bar}\left\{B_{1}\cup\dots\cup B_{r}\right\}\) can be expressed as the barycenter \(\operatorname{Bar}(\tilde{w}_{i},\operatorname{Bar}(\tilde{B}_{i})_{i=1,\dots,r}\), where_ \[\tilde{w}_{i}:=\sum_{k^{\prime}=1}^{K_{i}}w_{i,k^{\prime}}\enspace\text{and} \enspace\tilde{B}_{i}:=\left(\frac{w_{i,k}}{\tilde{w}_{i}}\enspace,\;\nu_{i,k} \right)_{k=1,\dots,K_{i}}\enspace.\] In a nutshell, this formulation captures the relationship between the overall Wasserstein barycenter and the individual barycenters of its constituent sets, incorporating the relevant weights seamlessly. Given measures \((\nu_{1},\nu_{2},\nu_{3})\) with positive weights \((w_{1},w_{2},w_{3})\) summing to 1, mirroring the aforementioned barycentric concept in Euclidean space, we derive the following relations: \[\operatorname{Bar}\left\{(w_{1},\nu_{1}),(w_{2},\nu_{2}),(w_{3}, \nu_{3})\right\}\\ =\operatorname{Bar}\left\{(w_{1},\nu_{1}),(w_{2}+w_{3},\nu_{2,3})\right\} \\ =\operatorname{Bar}\left\{(w_{1}+w_{2},\nu_{1,2}),(w_{3},\nu_{3}) \right\}\enspace.\] Here, for \(i,j\in\{1,2,3\}\) (where \(i\neq j\)), the measure \(\nu_{i,j}=\operatorname{Bar}\left\{(\tilde{w}_{i},\nu_{i}),(\tilde{w}_{j},\nu_{j})\right\}\) is defined, with \(\tilde{w}_{k}=w_{k}/(w_{i}+w_{j})\). ## Problem formulation Let \((\mathbf{X},\mathbf{A},Y)\) be a random tuple with distribution \(\mathbb{P}\). Here, \(\mathbf{X}\in\mathcal{X}\subset\mathbb{R}^{d}\) denotes the \(d\) non-sensitive features, \(Y\in\mathcal{Y}\subset\mathbb{R}\) represents the target task, and \(\mathbf{A}=(A_{1},\dots,A_{r})\in\mathcal{A}:=\mathcal{A}_{1}\times\dots\times \mathcal{A}_{r}\) the \(r\) discrete sensitive attributes, where \(\mathcal{A}_{i}=\{1,\dots,K_{i}\}\) with \(K_{i}\in\mathbb{N}\). For example, in a binary case with \(r=2\), we could have \(A_{1}=\) gender and \(A_{2}=\) age. For convenience, we use the notation \(A_{i:i+k}:=(A_{i},A_{i+1},\cdots,A_{i+k})\) to denote the sequence of \(k+1\) sensitive features ranging from \(i\) to \(i+k\) (so \(\mathbf{A}=A_{1:r}\)). Further, we denote \(\mathcal{F}\) the set of predictors of the form \(f:\mathcal{X}\times\mathcal{A}\to\mathcal{Y}\) where we assume that each measure \(\nu_{f|\mathbf{a}}\) admits a density w.r.t. Lebesgue measure. More precisely, we require the following assumption to hold: **Assumption 2**: _Given \(f\in\mathcal{F}\), measures \(\{\nu_{f|\mathbf{a}}\}_{\mathbf{a}\in\mathcal{A}}\) are non-atomic with finite second moments._ ### Risk measure Within the statistical learning community, a central pursuit revolves around the minimization of a designated risk measure across the set \(\mathcal{F}\) encompassing all predictors. In particular, a Bayes regressor minimizing the squared risk, \[\text{(Risk Measure)}\quad\mathcal{R}(f):=\mathbb{E}\left(Y-f(\mathbf{X},\mathbf{A}) \right)^{2}\enspace,\] over the set \(\mathcal{F}\) is represented by \(f^{*}(\mathbf{X},\mathbf{A}):=\mathbb{E}[Y|\mathbf{X},\mathbf{A}]\). 
In our case, our objective is to characterize the optimal fair predictor, which minimizes the squared risk under a given fairness constraint. To do so, we introduce formally the Demographic Parity notion of fairness. ### Unfairness measure Demographic Parity (DP) will be used to determine the fairness of a predictor. Fairness considerations under DP offers the advantage of being applicable to both classification and regression tasks. In our study, the unfairness measure of the predictor on the feature \(A_{i}\) is given by \[\mathcal{U}_{i}(f)=\max_{a_{i}\in\mathcal{A}}\int_{u\in[0,1]}\big{|}\,Q_{f}(u) -Q_{f|a_{i}}(u)\,\big{|}\,du\enspace, \tag{2}\] while for the multiple sensitive features \(A_{i},\dots,A_{i+k}\), their collective unfairness is simply assessed through: \[\mathcal{U}_{\{i,\dots,i+k\}}(f)=\mathcal{U}_{i:i+k}(f)=\mathcal{U}_{i}(f)+ \dots+\mathcal{U}_{i+k}(f)\enspace. \tag{3}\] Hence, we naturally broaden the DP-fairness definition to encompass both exact and approximate fairness within the context of MSA framework. **Definition 3** (Fairness under Demographic Parity): _The overall unfairness of a predictor \(f\in\mathcal{F}\) w.r.t. the feature \(\mathbf{A}=A_{1:r}\), can be quantified by the unfairness measure,_ \[\text{(Unfairness measure)}\quad\mathcal{U}(f)=\mathcal{U}_{1:r}(f)\enspace.\] _Then \(f\) is called exactly fair if and only if_ \[\mathcal{U}(f)=0\enspace.\] _Given \(\mathbf{\varepsilon}=\varepsilon_{1:r}:=(\varepsilon_{1},\varepsilon_{2},\dots, \varepsilon_{r})\) where each \(\varepsilon_{i}\in[0,1]\), \(f\) is called approximately fair under DP with \(\mathbf{\varepsilon}\) relative improvement (\(\mathbf{\varepsilon}\)-RI) if and only if each individual unfairness satisfies_ \[\mathcal{U}_{i}(f)\leq\varepsilon_{i}\times\mathcal{U}_{i}(f^{*})\enspace.\] In other words, in the context of approximate fairness, we are interested in the relative (fairness) improvement of a fair predictor with respect to Bayes' rule \(f^{*}\)(see [10] for further details). ### Preliminary results Recall the Wasserstein barycenter defined in Eq. (1). We consider measures \((\nu_{f|\mathbf{a}})_{\mathbf{a}\in\mathcal{A}}\) with corresponding weights \((p_{\mathbf{a}})_{\mathbf{a}\in\mathcal{A}}\), where \(p_{\mathbf{a}}:=\mathbb{P}(\mathbf{A}=\mathbf{a})\). It is assumed that \(\min_{\mathbf{a}}\{p_{\mathbf{a}}\}\geq 0\). Encompassing these measures is the Wasserstein barycenter, denoted \(\mu_{\mathcal{A}}:\mathcal{V}\to\mathcal{V}\) and it is defined as \[\mu_{\mathcal{A}}(\nu_{f}):=\operatorname{Bar}(p_{\mathbf{a}}, \nu_{f|\mathbf{a}})_{\mathbf{a}\in\mathcal{A}}\] \[=\operatorname*{argmin}_{\nu}\sum_{\mathbf{a}\in\mathcal{A}}p_{\mathbf{a}} \cdot\mathcal{W}_{2}^{2}\left(\nu_{f|\mathbf{a}},\nu\right)\enspace.\] Single Sensitive Attribute (SSA) caseWe consider a single sensitive attribute \(A\), belonging to the set \(\mathcal{A}=\{1,\dots,K\}\), with \(p_{a}:=\mathbb{P}(A=a)\). Let \(f_{B}\in\mathcal{F}\), and let its measure be the Wasserstein barycenter \(\nu_{f_{B}}=\mu_{\mathcal{A}}(\nu_{f^{*}})\), where we recall that \(f^{*}(\mathbf{X},A)=\mathbb{E}[Y|\mathbf{X},A]\) is the Bayes rule which minimizes the squared risk. Prior research conducted by [10, 11] demonstrates that, \[f_{B}=\operatorname*{argmin}_{f\in\mathcal{F}}\left\{\mathcal{R}(f):\mathcal{U }(f)=0\right\}\enspace.\] Therefore, \(f_{B}\) represents the optimal fair predictor in terms of minimizing unfairness-risk. 
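Before recalling the closed-form fair predictor, a minimal sketch of how the unfairness measures of Eq. (2) and Eq. (3) can be estimated from a hold-out sample is given below (grid-based quantile approximation, illustrative variable names):

```python
import numpy as np

GRID = (np.arange(1000) + 0.5) / 1000

def unfairness_single(scores, group):
    """Empirical counterpart of Eq. (2): max over the attribute's values of the L1 gap
    between the group-conditional and the overall quantile functions."""
    q_all = np.quantile(scores, GRID)
    gaps = [np.mean(np.abs(q_all - np.quantile(scores[group == a], GRID)))
            for a in np.unique(group)]
    return max(gaps)

def unfairness_multi(scores, groups):
    """Eq. (3): unfairness w.r.t. several attributes is the sum of the single-attribute
    unfairness values (`groups` holds one column per sensitive attribute)."""
    return sum(unfairness_single(scores, groups[:, i]) for i in range(groups.shape[1]))

# Toy check: scores shifted by a binary attribute are flagged as unfair
rng = np.random.default_rng(1)
a = rng.integers(0, 2, size=10000)
scores = rng.normal(0, 1, size=10000) + 0.5 * a
print(unfairness_single(scores, a))   # ~0.25: each group sits ~0.25 away from the pooled quantiles
```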
Additionally, previous studies have offered a closed-form solution: for all \((\mathbf{x},a)\in\mathcal{X}\times\mathcal{A}\), \[f_{B}(\mathbf{x},a)=\left(\sum_{\alpha^{\prime}\in\mathcal{A}}p_{a^{\prime}}Q_{f^{* }|a^{\prime}}\right)\circ F_{f^{*}|a}\left(f^{*}(\mathbf{x},a)\right)\enspace.\] Note that this solution can be easily adapted not only to classification tasks [10], but also to multi-task learning involving classification and regression [13]. In this article, we extend this formulation to the MSA case \(\mathbf{A}=(A_{1},\dots,A_{r})\in\mathcal{A}=\mathcal{A}_{1}\times\dots\times \mathcal{A}_{r}\). For any \((\mathbf{x},\mathbf{a})\in\mathcal{X}\times\mathcal{A}\) we denote, \[f_{B_{i}}(\mathbf{x},\mathbf{a})=\left(\sum_{a^{\prime}_{i}\in\mathcal{A}_{i}}p_{a^{ \prime}_{i}}Q_{f^{*}|a^{\prime}_{i}}\right)\circ F_{f^{*}|a_{i}}\left(f^{*}( \mathbf{x},\mathbf{a})\right)\enspace, \tag{4}\] as the optimal fair predictor, ensuring fairness only across \(\mathcal{A}_{i}\) (\(\mathcal{A}_{i}\)-fair for short). By abuse of notation, we denote \(p_{a^{\prime}_{i}}:=\mathbb{P}(A_{i}=a^{\prime}_{i})\), \(F_{f^{*}|a^{\prime}_{i}}(u):=\mathbb{P}\left(f(\mathbf{X},\mathbf{A})\leq u|A_{i}=a^{ \prime}_{i}\right)\) and \(Q_{f^{*}|a^{\prime}_{i}}\) its associated quantile function. ## Optimal fair prediction with MSA We extend the fair characterization into a sequential framework to accommodate MSA. Building upon previous research in the SSA case [10, 11], we demonstrate that fairness in the MSA problem can also be framed as the optimal transport problem involving the 2-Wasserstein distance. The relationship between these concepts is established in the following proposition. **Proposition 4** (Fair characterization: global approach): _We assume that Assumption 2 holds, and we let_ \[f_{B}=\operatorname*{argmin}_{f\in\mathcal{F}}\left\{\mathcal{R}(f):\mathcal{U} (f)=0\right\}\enspace.\] _Subsequently, its measure satisfies \(\nu_{f_{B}}=\mu_{\mathcal{A}}(\nu_{f^{*}})\). Furthermore, this equation yields a closed-form solution for the optimal fair predictor_ \[f_{B}(\mathbf{x},\mathbf{a})=\left(\sum_{\mathbf{a}^{\prime}\in\mathcal{A}}p_{\mathbf{a}^{ \prime}}Q_{f^{*}|\mathbf{a}^{\prime}}\right)\circ F_{f^{*}|\mathbf{a}}\left(f^{*}(\bm {x},\mathbf{a})\right)\enspace. \tag{5}\] Considering the aforementioned proposition and Prop. 1, which confirms that the barycenter of barycenters aligns with the overall barycenter under suitable updated weights, a straightforward corollary arises. This corollary asserts that regardless of the selected "debiasing path" in the sequential fairness mechanism, the end result consistently leads to the same optimal fair solution. **Proposition 5** (Sequentially fair mechanism): _Under the assumption that Assumption 2 holds, the term \(\mu_{\mathcal{A}}(\nu_{f^{*}})\) defined in Prop. 4 can be equivalently expressed as follows:_ \[\mu_{\mathcal{A}}(\nu_{f^{*}})=\mu_{\mathcal{A}_{1}}\circ\mu_{\mathcal{A}_{2}} \circ\cdots\circ\mu_{\mathcal{A}_{r}}(\nu_{f^{*}})\enspace.\] _More generally, under any permutation, i.e. bijection function of the form \(\sigma:\mathcal{S}\to\mathcal{S}\) where \(\mathcal{S}:=\{1,2,\ldots,r\}\), the above expression can be rewritten as_ \[\mu_{\mathcal{A}}(\nu_{f^{*}})=\mu_{\mathcal{A}_{\sigma(1)}}\circ\mu_{ \mathcal{A}_{\sigma(2)}}\circ\cdots\circ\mu_{\mathcal{A}_{\sigma(r)}}(\nu_{f ^{*}})\enspace.\] Notably the expressions proposed in Prop. 5 allows us to establish a link between Eq. (4) and Eq. 
(5) of the form: \[f_{B}(\mathbf{X},\mathbf{A}) =\left(f_{B_{1}}\circ f_{B_{2}}\circ\ldots\circ f_{B_{r}}\right) \left(\mathbf{X},\mathbf{A}\right)\] \[=\left(f_{B_{\sigma(1)}}\circ f_{B_{\sigma(2)}}\circ\ldots\circ f _{B_{\sigma(r)}}\right)\left(\mathbf{X},\mathbf{A}\right)\enspace.\] Note that the \(\circ\) notation is used in a relaxed manner with regard to predictors, aimed at streamlining the presentation and alleviating the complexities of notation. In particular, by abuse of notation we establish the definition of \(f_{B_{i}}\circ f_{B_{j}}\) as follows (with \(f^{*}\) serving as the default function): \[\left(f_{B_{i}}\circ f_{B_{j}}\right)\left(\mathbf{x},\mathbf{a}\right)\] \[\qquad=\left(\sum_{a_{i}^{\prime}\in\mathcal{A}_{i}}p_{a_{i}^{ \prime}}Q_{f_{B_{j}}|a_{i}^{\prime}}\right)\circ F_{f_{B_{j}}|a_{i}}\left(f_{ B_{j}}(\mathbf{x},\mathbf{a})\right)\enspace.\] Introducing a sequential approach is pivotal for enhancing clarity. Indeed, this methodology helps in comprehending intricate concepts like approximate fairness with \(\mathbf{\varepsilon}\)-RI defined in Definition 3, which involves improving fairness relatively and approximately with \(\mathbf{\varepsilon}\) a preset level of relative fairness improvement. This perspective enables us to break down how various components of the sequential fairness mechanism interact to achieve fairness goals and allows for the interpretation of the intrinsic effects of adjusting fairness. Therefore, adopting a sequential perspective is a key step in attaining a deeper understanding of fairness and bias. ## Extension to approximate fairness Recall the previously introduced concept of \(\mathbf{\varepsilon}\)-RI fairness (or \(\mathbf{\varepsilon}\)-fairness for brevity), where \(\mathbf{\varepsilon}=\varepsilon_{1:r}=(\varepsilon_{1},\cdots,\varepsilon_{r})\) and each \(\varepsilon\in[0,1]\). In the context of the SSA framework, a methodology introduced by [10] employs geodesic parameterization. Specifically, considering \(A_{1}\in\mathcal{A}_{1}\)_w.l.o.g._, the predictor of the form: \[f_{B_{1}}^{\varepsilon_{1}}(\mathbf{X},\mathbf{A})=(1-\varepsilon_{1})\cdot f_{B_{1}}( \mathbf{X},\mathbf{A})+\varepsilon_{1}\cdot f^{*}(\mathbf{X},\mathbf{A})\enspace,\] achieves the optimal risk-fairness trade-off. Notably, we have \[f_{B_{1}}^{\varepsilon_{1}}\in\operatorname*{argmin}_{f\in\mathcal{F}}\{ \mathcal{R}(f):\mathcal{U}_{1}(f)\leq\varepsilon_{1}\cdot\mathcal{U}_{1}(f^{*} )\}\enspace.\] We denote the corresponding measure as \(\mu_{\mathcal{A}_{1}}^{\varepsilon_{1}}(\nu_{f^{*}}):=\nu_{f_{B_{1}}^{ \varepsilon_{1}}}\). In the following proposition, we extend sequentially this formulation to the context of MSA. **Proposition 6** (Characterization of approximate fairness): _Let Assumption 2 holds and let_ \[f_{B}^{\mathbf{\varepsilon}}=\operatorname*{argmin}_{f\in\mathcal{F}}\left\{ \mathcal{R}(f):\mathcal{U}(f)\leq\sum_{i=1,\ldots,r}\varepsilon_{i}\cdot \mathcal{U}_{i}(f^{*})\right\}\enspace,\] _then_ \[\nu_{f_{B}^{\mathbf{\varepsilon}}}=\mu_{\mathcal{A}}^{\mathbf{\varepsilon}}(\nu_{f^{*}} ):=\mu_{\mathcal{A}_{1}}^{\varepsilon_{1}}\circ\cdots\circ\mu_{\mathcal{A}_{r }}^{\varepsilon_{r}}(\nu_{f^{*}})\enspace.\] _Similarly to Prop. 5, this expression can also be reformulated by permuting indices._ The expression mentioned in Prop. 
6 enables us to explicitly formulate an optimal closed-form predictor within the approximate fairness framework: for any permutation \(\sigma\in\mathcal{P}(\mathcal{S})\), \[f_{B}^{\mathbf{\varepsilon}}(\mathbf{X},\mathbf{A})=\left(f_{B_{\sigma(1)}}^{\varepsilon_{ \sigma(1)}}\circ f_{B_{\sigma(2)}}^{\varepsilon_{\sigma(2)}}\circ\ldots\circ f_{B _{\sigma(r)}}^{\varepsilon_{\sigma(r)}}\right)\left(\mathbf{X},\mathbf{A}\right)\enspace.\] ## Data-driven procedure For practical application on real data, the plug-in estimator of the Bayes rule \(f^{*}\) is denoted as \(\hat{f}\)--any DP-unconstrained ML model trained on a set _i.i.d._ instances of \((\mathbf{X},\mathbf{A},Y)\). Given \(\mathbf{x}\in\mathcal{X}\) and \(\mathbf{a}=(a_{1},\ldots,a_{r})\in\mathcal{A}\), the empirical counterpart of an optimal \(\mathcal{A}_{i}\)-fair predictor \(f_{B_{i}}\) is then defined as: \[\widehat{f_{B_{i}}}(\mathbf{x},\mathbf{a})=\left(\sum_{a_{i}^{\prime}\in\mathcal{A}_{i}} \hat{p}_{a_{i}^{\prime}}\hat{Q}_{f|a_{i}^{\prime}}\right)\circ\hat{F}_{f|a_{i}} \left(\hat{f}(\mathbf{x},\mathbf{a})\right)\enspace, \tag{6}\] Here, \(\hat{p}_{a_{i}}\), \(\hat{F}_{f|a_{i}}\), and \(\hat{Q}_{\hat{f}|a_{i}}\) are empirical counterparts of \(p_{a_{i}}\), \(F_{f^{*}|a_{i}}\), and \(Q_{f^{*}|a_{i}}\). Interestingly, aside from \(\hat{f}\), the other quantities can be constructed using an unlabeled dataset. Notably, [10] provides some statistical guarantees: if the estimator \(\hat{f}\) approximates \(f^{*}\) well, then given mild assumptions on distribution \(\mathbb{P}\), the post-processing method \(\widehat{f_{B_{i}}}\) is a good estimator of \(f_{B_{i}}\). By composition, \(\widehat{f_{B}}=\widehat{f_{B_{1}}}\circ...\circ\widehat{f_{B_{r}}}\) and \(\widehat{f_{B}^{\mathbf{\varepsilon}}}=\widehat{f_{B_{1}}^{\varepsilon_{1}}}\circ...\circ \widehat{f_{B_{r}}^{\varepsilon_{r}}}\) emerge as good estimators for \(f_{B}\) and \(f_{B}^{\mathbf{\varepsilon}}\) respectively, enabling accurate and fair estimation of the instances. Note that the unfairness of \(f\) is assessed on a hold-out set (or test set) using \(\widehat{\mathcal{U}}(f)\), the empirical counterpart of Eq. (3). Predictive performance uses mean squared error (MSE) for regression and Accuracy/\(F_{1}\)-score for classification on the same test set. ## Numerical experiments In this section, we conduct a comparative analysis of our methodology against DP-unconstrained methods and state-of-the-art approach given in [1]. Our findings demonstrate that our exact and approximate fairness approach stands out in terms of interpretability, adaptability and competitiveness. ### Case study on synthetic data Prior to showcasing our method on a real dataset, we opted to assess its performance using synthetic data. This step aims to provide a clearer insight into the effectiveness of the sequential fairness mechanism. Specifically, we consider synthetic data \((\mathbf{X},\mathbf{A},Y)\) with the following characteristics: * \(\mathbf{X}\in\mathbb{R}^{d}\): Comprises \(d\) non-sensitive features generated from a centered Gaussian distribution \(\mathbf{X}\sim\mathcal{N}_{d}(0,\sigma_{X}I_{d})\), where \(\sigma_{X}>0\) parameterizes its variance. * \(\mathbf{A}=A_{1:r}\in\{-1,1\}^{r}\): Represents \(r\) binary sensitive features, with \(A_{i}\sim 2\cdot\mathcal{B}(q_{i})-1\) following a Bernoulli law with parameter \(q_{i}=\mathbb{P}(\bar{X}>\tau_{i})\), where \(\bar{X}\sim\mathcal{N}(0,\sigma_{X})\). Here, \(\mathbf{\tau}=(\tau_{1},\ldots,\tau_{r})\) is a user-set parameter. 
* \(Y\sim\mathcal{N}(\mathbf{1}^{T}\mathbf{X}+\mathbf{1}^{T}\mathbf{A},1)\): Represents the regression task.

_Simulation scheme._ Default parameters are set as follows: \(d=10\), \(r=3\), \(\sigma_{X}=0.15\), and \(\mathbf{\tau}=(0,0.05,0.1)\). We generated 10,000 synthetic examples and divided the data into three sets (50% training, 25% testing, and 25% unlabeled). As a base model, we opt for a simple linear regression using default parameters from scikit-learn in Python. We assess our sequential approach against this uncalibrated base model.

_Interpreting intersectional fairness._ In the context of \(r=3\) sensitive features, Figure 1 showcases \(\hat{\mathcal{U}}_{3}\), an unfairness measure focusing on \(A_{3}\). The sequential fair approach, detailed in Prop. 5, enhances interpretability by addressing inter-correlations among sensitive features while striving for fair predictions. In particular, it highlights the correlation of \(A_{1}\) and \(A_{2}\) with \(A_{3}\). Pursuing exact fairness, the left pane of Figure 1 demonstrates that making \(A_{1}\) and \(A_{2}\) fair can inadvertently introduce unfairness in \(A_{3}\), revealing the fairness gerrymandering issue mentioned in the introduction. Aligning with Prop. 5 and Prop. 6, both Figure 1 and Figure 2 exhibit varied numerical debiasing paths to achieve fairness across the three sensitive features. Importantly, this implies that achieving fairness for \(A_{1}\) before \(A_{2}\) is equivalent to the reverse, as is evident in Figure 2's (Risk, Unfairness) phase diagrams. Each step illustrated incurs a performance loss but garners fairness gains. The number of such points is influenced by the cardinality of the power set encompassing all sensitive features. Supplementary details on additional fairness experiments with this synthetic data are available in the supplementary materials.

Figure 1: Synthetic data with parameter \(\mathbf{\tau}=(0,0.05,0.1)\). A sequential unfairness evaluation, \(\mathcal{U}_{3}\), of (_left pane_) exact fairness, (_middle pane_) approximate \(A_{1,2,3}\)-fairness with \(\mathbf{\varepsilon}\)-RI where \(\mathbf{\varepsilon}=\varepsilon_{1,2,3}=(0.2,0.5,0.75)\), and (_right pane_) approximate \(A_{2,1,3}\)-fairness with \(\varepsilon_{2,1,3}\)-RI. Hashed color corresponds to exact fairness.

Figure 2: (Risk, Unfairness) phase diagrams showing the sequential fairness approach for (_left pane_) two sensitive features and (_right pane_) three sensitive features. In this study, Unfairness represents the overall unfairness \(\hat{\mathcal{U}}=\hat{\mathcal{U}}_{1:3}\). The bottom-left corner gives the best trade-off.

### Case study on real data with two sensitive attributes

To illustrate a possible application of our methodology and showcase its use in a real-world use case, we consider data collected in the _folktables_ package of [14]. This package compiles datasets sourced from the US Census, offering a basis for benchmarking ML models. Notably, with named features, we can add interpretation to our methodology. Our study centers on predicting an individual's total income (Income) within sunbelt states (AL, AZ, CA, FL, GA, LA, MS, NM, SC, TX) using standard filters (age \(\geq 18\), at least one hour of weekly work, total income \(>100\)$). As a secondary task, we consider the problem of predicting whether an individual is covered by public health insurance (Coverage), again with the standard filters (age \(<\) 65, income \(\leq\) 30,000$).
For both prediction problems, we rely on provided standard features and treat _gender_ and _race_ as sensitive attributes1. Footnote 1: Code and synthetic data available at: [https://github.com/phi-ra/SequentialFairness](https://github.com/phi-ra/SequentialFairness) MethodologyAs our approach is applicable to both regression and classification tasks, we consider both problems. For the regression task, we aim to predict the log-income and for the classification task, we aim to predict whether an individual's income exceeds 50,000$. The sensitive features studied are _gender_ and _race_. In total, we have 600,041 observations, with 52.3% Male participants and 10.7% participants from the minority racial class. The data is split into 64% train, 20% test and 16% unlabeled data. As a base model, we opt for a light-GBM [13] model with early stopping, where the early stopping iterations are optimized using 5-fold cross validation on the training data. In an exact fairness framework (\(\mathbf{\varepsilon}=0\)), we evaluate numerical performance over ten runs with distinct seeds. Since, to our knowledge, there are no open-source fairness methods for MSA predictions, direct benchmarking is not feasible. Instead, for the classification task, we compare our approach with the state-of-the-art _ExponentiatedGradient_ method from the _Fairlearn_ package [21]. First, we ensure fairness on a single variable, enabling a comparison of baseline accuracy, \(F_{1}\)-Score, and unfairness. Next, our methodology is employed to ensure fairness across both variables, facilitating another comparison. ResultsFrom Table 1, in regression, the outcomes align with expectations. Post-processed forecasts exhibit slightly reduced predictive performance, yet offer nearly complete fairness. This achievement comes with minimal added computational cost, quantified by seconds of compute time. In classification, our method proves competitive against the benchmark model. Indeed, both methods provide efficient and fair outcomes for a single feature, though the post-processing of our method is significantly faster. With two sensitive features, the benchmark model underperforms in fairness (as anticipated), while our method excels across both tasks. Though performance slightly dips compared to a single sensitive feature, competitiveness persists--even against the benchmark model, which fails to achieve fairness across both sensitive features. Continuing with our application example, our data (see Figure 4) reveals that the median income for Male participants is 17,000$ higher than for Female participants. Similarly, the sensitive race's members have incomes 10,000$ lower than the rest of the population. While not the sole contributor, it's plausible that these sub-groups could gain from fair predictions. Figure 4 illustrates the log income distribution, highlighting significant subgroup disparities. For instance, the difference between genders within the sensitive racial group is smaller compared to other participants. For a decision maker, even when focusing solely on exact fairness, there are multiple routes to achieving fairness across both features. In our example, one could prioritize fairness in race scores first, followed by gender, or vice versa. Each step involves a performance loss but a fairness gain, depicted in the left panel of Figure 3. Although theory and the Figure demonstrate that the final outcome will be identical either way, practical implementation in steps presents a dilemma. 
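The stepwise corrections discussed here, and the results reported in Table 1, are produced by composing the plug-in estimator of Eq. (6) across the sensitive features. Below is a minimal sketch of that post-processing, assuming scores from the unconstrained base model and an unlabeled calibration split; the helper names (`fair_step`, `sequential_fair`) are ours and not part of the released package.

```python
# Sketch of Eq. (6): push each score through its group-conditional CDF, then through
# the p-weighted mixture of group quantile functions; repeat sequentially per feature.
import numpy as np

def fair_step(scores, groups, cal_scores, cal_groups):
    """One A_i-fair post-processing step (exact fairness, eps_i = 0)."""
    out = np.empty_like(scores, dtype=float)
    values = np.unique(cal_groups)
    probs = {g: np.mean(cal_groups == g) for g in values}           # p_hat_{a_i}
    sorted_cal = {g: np.sort(cal_scores[cal_groups == g]) for g in values}
    for g in values:
        mask = groups == g
        ranks = np.searchsorted(sorted_cal[g], scores[mask], side="right")
        u = ranks / (len(sorted_cal[g]) + 1)                        # F_hat_{f|a_i}(score)
        # barycenter quantile: sum_{g'} p_hat_{g'} * Q_hat_{f|g'}(u)
        out[mask] = sum(probs[gp] * np.quantile(sorted_cal[gp], u) for gp in values)
    return out

def sequential_fair(scores, A, cal_scores, cal_A):
    """Compose the steps over the r sensitive columns; for exact fairness the
    order does not matter (Prop. 5)."""
    for i in range(A.shape[1]):
        new_scores = fair_step(scores, A[:, i], cal_scores, cal_A[:, i])
        cal_scores = fair_step(cal_scores, cal_A[:, i], cal_scores, cal_A[:, i])
        scores = new_scores
    return scores
```

For the approximate (\(\mathbf{\varepsilon}\)-RI) variant, each adjusted score is simply blended with the unconstrained one, \((1-\varepsilon_{i})\cdot f_{B_{i}}+\varepsilon_{i}\cdot f\), following the geodesic parameterization recalled earlier.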
The center panel of Figure 3 displays our method's average corrective effects on predictions for two subgroups. For women in the sensitive racial group, correcting for race and gender boosts their predicted income. However, this poses a problem if both effects aren't corrected simultaneously. Women not in the sensitive racial group may oppose the race-then-gender correction order due to a net deficit from race-only corrections. Conversely, women in the sensitive racial group gain more from race corrections than from gender corrections, favoring the race-then-gender order. Our methodology offers flexibility for multiple fairness constraints. For instance, instead of consecutively rendering predictions fair on each feature, simultaneous steps are possible. In our example, if fair race predictions are sought without disadvantaging women, adjusting \(\varepsilon\) for gender to one-sixth of race's size maintains average women's predictions while correcting for race. This adaptable approach is illustrated in the right panel of Figure 3, highlighting our methodology's strength. It ensures exact fairness results remain consistent regardless of the order in which scores become fair, while enabling decision makers to analyze the effects of stepwise fairness implementation.

\begin{table} \begin{tabular}{|c c c c|} \hline & _Uncorrected_ & _Our Method_ & _Fairlearn_ \\ \hline \multicolumn{4}{|c|}{_Regression_} \\ \cline{2-4} MSE & 0.547 \(\pm\) 0.003 & 0.596 \(\pm\) 0.003 & N/A \\ Unfairness & 0.378 \(\pm\) 0.007 & **0.019 \(\pm\) 0.005** & N/A \\ Time (s) & N/A & 8.981 \(\pm\) 1.319 & N/A \\ \hline \multicolumn{4}{|c|}{_Classification, Income - one sensitive_} \\ \cline{2-4} Accuracy & 0.820 \(\pm\) 0.001 & 0.809 \(\pm\) 0.001 & 0.808 \(\pm\) 0.001 \\ F1 & 0.753 \(\pm\) 0.001 & 0.737 \(\pm\) 0.001 & 0.73 \(\pm\) 0.002 \\ Unfairness & 0.170 \(\pm\) 0.001 & **0.003 \(\pm\) 0.002** & 0.021 \(\pm\) 0.002 \\ Time (s) & N/A & 6.319 \(\pm\) 0.422 & 100.8 \(\pm\) 10.467 \\ \cline{2-4} \multicolumn{4}{|c|}{_Classification, Income - two sensitive_} \\ \cline{2-4} Accuracy & 0.820 \(\pm\) 0.001 & 0.804 \(\pm\) 0.001 & 0.808 \(\pm\) 0.001 \\ F1 & 0.753 \(\pm\) 0.001 & 0.73 \(\pm\) 0.001 & 0.73 \(\pm\) 0.002 \\ Unfairness & 0.354 \(\pm\) 0.006 & **0.009 \(\pm\) 0.005** & 0.207 \(\pm\) 0.005 \\ \multicolumn{4}{|c|}{_Classification, Coverage_} \\ \cline{2-4} Accuracy & 0.805 \(\pm\) 0.0 & 0.802 \(\pm\) 0.0 & 0.805 \(\pm\) 0.0 \\ F1 & 0.587 \(\pm\) 0.0 & 0.584 \(\pm\) 0.0 & 0.574 \(\pm\) 0.0 \\ Unfairness & 0.127 \(\pm\) 0.0 & **0.011 \(\pm\) 0.0** & 0.119 \(\pm\) 0.0 \\ Time (s) & N/A & 10.661 \(\pm\) 0.147 & 355.2 \(\pm\) 6.689 \\ \hline \end{tabular} \end{table}

Table 1: Results for the correction of the biases for the gender and race features. For the classification tasks, we first performed the fairness calibration on gender (one sensitive) and then on gender and race (two sensitive).

## Conclusion

We proposed a framework that expands the standard concept of fair scores from SSA to MSA. This extension ensures exact sequential fairness yields the same predictions regardless of the correction order for the sensitive features. However, intermediate solutions can yield significant subgroup differences influenced by sensitive features. Our approach quantifies these differences and offers a way to mitigate them using approximate solutions. Our analysis raises intriguing research questions, including how optimal solutions change with fairness metrics beyond the employed DP constraint.
The flexibility and user-friendliness of our methodology, supporting a comprehensive fairness approach across multiple debiasing steps, promotes the adoption of fair decision-making practices.
2310.00335
Anomaly Detection in Power Generation Plants with Generative Adversarial Networks
Anomaly detection is a critical task that involves the identification of data points that deviate from a predefined pattern, useful for fraud detection and related activities. Various techniques are employed for anomaly detection, but recent research indicates that deep learning methods, with their ability to discern intricate data patterns, are well-suited for this task. This study explores the use of Generative Adversarial Networks (GANs) for anomaly detection in power generation plants. The dataset used in this investigation comprises fuel consumption records obtained from power generation plants operated by a telecommunications company. The data was initially collected in response to observed irregularities in the fuel consumption patterns of the generating sets situated at the company's base stations. The dataset was divided into anomalous and normal data points based on specific variables, with 64.88% classified as normal and 35.12% as anomalous. An analysis of feature importance, employing the random forest classifier, revealed that Running Time Per Day exhibited the highest relative importance. A GANs model was trained and fine-tuned both with and without data augmentation, with the goal of increasing the dataset size to enhance performance. The generator model consisted of five dense layers using the tanh activation function, while the discriminator comprised six dense layers, each integrated with a dropout layer to prevent overfitting. Following data augmentation, the model achieved an accuracy rate of 98.99%, compared to 66.45% before augmentation. This demonstrates that the model nearly perfectly classified data points into normal and anomalous categories, with the augmented data significantly enhancing the GANs' performance in anomaly detection. Consequently, this study recommends the use of GANs, particularly when using large datasets, for effective anomaly detection.
Marcellin Atemkeng, Toheeb Aduramomi Jimoh
2023-09-30T10:44:05Z
http://arxiv.org/abs/2310.00335v1
# Anomaly Detection in Power Generation Plants with Generative Adversarial Networks ###### Abstract Anomaly detection is a critical task that involves the identification of data points that deviate from a predefined pattern, useful for fraud detection and related activities. Various techniques are employed for anomaly detection, but recent research indicates that deep learning methods, with their ability to discern intricate data patterns, are well-suited for this task. This study explores the use of Generative Adversarial Networks (GANs) for anomaly detection in power generation plants. The dataset used in this investigation comprises fuel consumption records obtained from power generation plants operated by a telecommunications company. The data was initially collected in response to observed irregularities in the fuel consumption patterns of the generating sets situated at the company's base stations. The dataset was divided into anomalous and normal data points based on specific variables, with 64.88% classified as normal and 35.12% as anomalous. An analysis of feature importance, employing the random forest classifier, revealed that "Running Time Per Day" exhibited the highest relative importance. A GANs model was trained and fine-tuned both with and without data augmentation, with the goal of increasing the dataset size to enhance performance. The generator model consisted of five dense layers using the \(tanh\) activation function, while the discriminator comprised six dense layers, each integrated with a dropout layer to prevent overfitting. Following data augmentation, the model achieved an accuracy rate of 98.99%, compared to 66.45% before augmentation. This demonstrates that the model nearly perfectly classified data points into normal and anomalous categories, with the augmented data significantly enhancing the GANs' performance in anomaly detection. As a result, this study recommends the use of GANs, particularly when coupled with large datasets, for effective anomaly detection tasks. Generative modelling, generative adversarial networks, zero-sum game, anomaly detection, power generation plants, telecommunication ## I Introduction The telecommunications industry stands as a prominent sector within the Information Communication Technologies (ICT) landscape, heavily reliant on a substantial supply of electrical power for its seamless operations. This dependency is particularly critical, as ICT companies consume approximately 3% of the world's total electrical energy [1]. However, the accessibility of reliable electrical power remains a persistent concern, especially in underdeveloped regions, notably across Africa. With the proliferation of telecommunication infrastructure, including base stations across the continent, the industry has been compelled to explore alternative energy sources. These alternatives encompass the use of gasoline or diesel generators and harnessing solar power, among others, to ensure continuous and robust communication networks. TeleInfra, a telecommunications company operating in Cameroon, is among those grappling with these challenges due to the nation's erratic power supply. The telecommunication equipment dispersed across both rural and urban areas necessitates a continuous electricity supply to fulfil its mission of establishing resilient communication networks. 
Unfortunately, the country primarily relies on hydropower (constituting 73% of its energy generation), which is prone to disruptions, especially during dry seasons marked by low water levels [2]. Moreover, access to electricity is limited, with only approximately 14% coverage in rural areas and varying from 65% to 88% in urban centres. Furthermore, the shift towards alternative power sources, notably the use of generators, has introduced new challenges in the form of irregularities or anomalies in fuel consumption at these base stations. Past research in [3], suggests that various factors such as the mismanagement of air-conditioning and lighting systems, as well as building design, may contribute to the heightened power consumption at these sites. Anomalies are defined as data points that deviate from the expected pattern within a dataset, often representing a distinct distribution within the broader dataset. It's worth noting that there are varying definitions and misconceptions about anomalies, which have been comprehensively addressed in [4]. This work offers a comprehensive overview of different typologies and subgroups, contributing to a more precise conceptualization of anomalies. In the context of the power generation plants dataset, anomalies can manifest through various factors, including malicious activities like fuel pilifrage. The task of investigating anomaly is often referred to as either anomaly detection, novelty detection or outlier detection [5], and many techniques have been used for the purpose, either for specific domains such as finance [6], health [7], Internet of Things [8], and so on, or the generic ones [9, 10]. Furthermore, as the world witnesses the continuous influx of vast datasets, sophisticated machine learning, and deep learning techniques have found application in the field of anomaly detection. As an illustration, the power generation plant dataset employed in this study was previously employed in [11] to identify anomalies using supervised machine learning methods--namely, logistic regression, support vector machines, k-Nearest Neighbors, and the Multilayer Perceptron. The latter work showed that the Multilayer Perceptron outperformed other models. The same dataset was also used in [12] to investigate and detect anomalies using a sort of label-assisted auto-encoders with results outperforming the work in [11]. We investigated the same dataset with Generative Adversarial Networks (GANs), GANs present multiple benefits for anomaly detection, primarily by generating high-quality and diverse samples that effectively capture the complexity and variability inherent in real data. GANs involve training a classifier that assigns a probability score to a sample, indicating whether it is categorized as "normal" or "anomalous". This differs from auto-encoders used in [12], where the input is compressed into a latent space, and classification is made based on reconstruction errors using the test samples. As indicated in [13], recent years have witnessed the remarkable proficiency of deep learning approaches, such as GANs, in learning complex representations in data, including high-dimensional, temporal, geographical, and graph data. This progress has significantly expanded the horizons of numerous learning tasks. Additionally, [14] has substantiated that GANs possess the capacity to learn the behavioural patterns inherent in typical data, given their ability to replicate intricate and high-dimensional data distributions. 
Consequently, this paper advocates the use of GANs for the purpose of anomaly detection within the power generation plants of the TeleInfra company, thereby complementing the research in [11, 12]. This research employed a time series dataset that was obtained through the power consumption activities at TeleInfra Telecommunication company, being the case study. Consequently, it could be limited in respect of time constraints, since the data was collected in a specific time period. The rest of this work is organized as follows: Section II discusses the method used in the analysis; Section III presents the data and data augmentation strategy; Section IV contains the results of the analysis, as well as the discussions; and Section V shows suggestions on the possible future direction of work in this line. ## II Methods ### _Generative Adversarial Network: Overview_ Anomaly detection methods have metamorphosed from machine learning methods into deep learning methods due to the continuous availability of datasets. GANs is a deep learning approach that represents an instance of modern generative modelling--it is typically used in replicating the distribution of datasets [15]. GAN is based on game theory, compared to many other generative models which are based on optimization. The GANs algorithm comprises two key components: the "generator" and the "discriminator." The "discriminator" functions as a deep-learning classifier, trained to differentiate between real and artificially generated examples drawn from the input domain. On the other hand, the "generator," also a deep-learning model, takes a random latent vector from a normal distribution as input and aims to generate examples that convincingly deceive the discriminator into perceiving them as real. These two components learn in tandem, iteratively refining their capabilities until they reach a point of equilibrium where the generator consistently produces authentic-looking examples. Also, after learning the true data distribution, GANs produce samples that are comparable to the training dataset. Thus, we train a classifier that gives a probability score of a sample as being _normal_ or _anomalous_, rather than compressing the input into a latent space and classifying the test samples based on the reconstruction error. ### _Building the GAN_ A noise sample \(\mathbf{z}\) is provided as input to the generator neural network, \(G\). Subsequently, the generator generates a synthetic sample, \(G(\mathbf{z})\). Both the generated sample and the actual data \(\mathbf{x}\) are then forwarded to the discriminator neural network, \(D\). The primary role of the discriminator is to produce a single value, representing the likelihood that the given input is a real sample. In this context, it outputs \(D(\mathbf{x})\) when the initial input is real, and conversely, it outputs \(D(G(\mathbf{z}))\) if the input is generated. The discriminator operates as a classifier. An adversarial framework is depicted in Figure 1. ### _Mathematical Framework of the GANs_ The adversarial framework, as introduced by [15], is mathematically represented as a minimax optimization between the generator \(G:\mathbb{R}^{d}\rightarrow\mathbb{R}^{n}\) and the discriminator \(D:\mathbb{R}^{n}\rightarrow[0,1]\), with a designated target loss function. \[V(D,G)=\mathbb{E}_{\mathbf{x}\sim p_{data}}[\log D(\mathbf{x})]+\mathbb{E}_{ \mathbf{z}\sim p_{z}}[\log(1-D(G(\mathbf{z}))]. 
\tag{1}\]

The first term, \(\mathbb{E}_{\mathbf{x}\sim p_{data}}[\log D(\mathbf{x})]\), represents the discriminator's assessment of real data. Conversely, the second term, \(\mathbb{E}_{\mathbf{z}\sim p_{z}}[\log(1-D(G(\mathbf{z})))]\), represents the discriminator's prediction on generated (fake) data. Equation (1) then requires solving the minimax problem: \[\begin{split}\min_{G}\,\max_{D}\,V(D,G)&=\min_{G}\,\max_{D}\bigl{(}\mathbb{E}_{\mathbf{x}\sim p_{data}}[\log D(\mathbf{x})]\\ &\quad+\mathbb{E}_{\mathbf{z}\sim p_{z}}[\log(1-D(G(\mathbf{z})))]\bigr{)}.\end{split} \tag{2}\] The generator aims to maximize its probability of success, which involves causing the discriminator to make mistakes. Consequently, the generator seeks to minimize the value function defined by Equation (2). In contrast, the discriminator strives to minimize the generator's likelihood of success and, therefore, aims to maximize Equation (2). Specifically, the discriminator endeavors to minimize \(D(G(\mathbf{z}))\) while maximizing \(D(\mathbf{x})\). Algorithm 1 outlines the computational procedure employed in the GAN.

```
Data: Real data \(\mathbf{x}\) and hyperparameter \(k\)
Result: Fake data \(\tilde{\mathbf{x}}\)
for each training iteration do
    for \(k\) steps do
        Sample \(m\) noise samples \(\{z_{1},z_{2},\ldots,z_{m}\}\) and transform with the Generator;
        Sample \(m\) real samples \(\{x_{1},x_{2},\ldots,x_{m}\}\) from the real data;
        Update the Discriminator by ascending the gradient:
            \(\nabla_{\theta_{d}}\frac{1}{m}\sum_{i=1}^{m}\Big{[}\log D\left(x^{(i)}\right)+\log\left(1-D\left(G\left(z^{(i)}\right)\right)\right)\Big{]}\);
    end for
    Sample \(m\) noise samples \(\{z_{1},z_{2},\ldots,z_{m}\}\) and transform with the Generator;
    Update the Generator by descending the gradient:
        \(\nabla_{\theta_{g}}\frac{1}{m}\sum_{i=1}^{m}\log\left(1-D\left(G\left(z^{(i)}\right)\right)\right)\);
end for
```
**Algorithm 1** Algorithm of the GANs [15]

### _Performance Evaluation Metrics_

To assess the performance of the model, four essential evaluation metrics were used:
* Confusion Matrix: A matrix displaying the model's correct and incorrect predictions, comprising True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN).
* Accuracy, \(Ac\): The proportion of correct predictions, i.e., the ratio of total correct predictions to the total number of observations in the test set: \[Ac=\frac{TP+TN}{TP+FP+TN+FN}.\] (3)
* Sensitivity, \(Sn\): The ratio of correct positive predictions to the total number of positives; also referred to as Recall or True Positive Rate: \[Sn=\frac{TP}{TP+FN}.\] (4)
* Precision, \(P\): The ratio of correct positive predictions to the total number of positive predictions; also known as Positive Predictive Value: \[P=\frac{TP}{TP+FP}.\] (5)
* F\({}_{1}\)-Score: The harmonic mean of precision and recall: \[F_{1}\text{-Score}=2\times\frac{Sn\times P}{Sn+P}.\] (6)

## III Data Analysis

### _Dataset_

The dataset for this study was sourced from TeleInfra Telecommunication Company's base stations in Cameroon, where generators served as the primary power source. It covers the period from September 2017 to August 2018, initially containing 6010 records. After data cleaning, it was reduced to 5905 records. The dataset includes variables such as "Running Time," "Power Type," "Consumption Rate," "GENERATOR CAPACITY (kVA)," and more. For a detailed dataset discussion, please refer to [11].
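For concreteness, the alternating updates of Algorithm 1 above can be condensed into a single training-step function. The sketch below is written with TensorFlow/Keras; the latent dimension, the numerical guard `EPS`, and the model objects are illustrative placeholders rather than the exact configuration reported in Section IV.

```python
# Schematic version of one iteration of Algorithm 1: k discriminator ascent steps
# followed by one generator descent step. `generator`/`discriminator` are assumed to be
# Keras models, with a sigmoid output on the discriminator.
import tensorflow as tf

EPS = 1e-7  # numerical guard inside the logarithms

def train_step(generator, discriminator, g_opt, d_opt, real_batch,
               latent_dim=32, k=1):
    m = int(tf.shape(real_batch)[0])          # batch size (eager mode)
    for _ in range(k):
        z = tf.random.normal((m, latent_dim))
        with tf.GradientTape() as tape:
            d_real = discriminator(real_batch, training=True)
            d_fake = discriminator(generator(z, training=True), training=True)
            # ascend (1/m) sum[log D(x) + log(1 - D(G(z)))]  ==  minimise its negative
            d_loss = -tf.reduce_mean(tf.math.log(d_real + EPS)
                                     + tf.math.log(1.0 - d_fake + EPS))
        grads = tape.gradient(d_loss, discriminator.trainable_variables)
        d_opt.apply_gradients(zip(grads, discriminator.trainable_variables))

    z = tf.random.normal((m, latent_dim))
    with tf.GradientTape() as tape:
        d_fake = discriminator(generator(z, training=True), training=True)
        # descend (1/m) sum log(1 - D(G(z))) with respect to the generator weights
        g_loss = tf.reduce_mean(tf.math.log(1.0 - d_fake + EPS))
    grads = tape.gradient(g_loss, generator.trainable_variables)
    g_opt.apply_gradients(zip(grads, generator.trainable_variables))
    return d_loss, g_loss

# Optimizer pairing reported in Sec. IV: Adam for the generator, SGD for the discriminator.
g_opt = tf.keras.optimizers.Adam()
d_opt = tf.keras.optimizers.SGD()
```

In practice the generator update is often replaced by the non-saturating objective \(-\log D(G(\mathbf{z}))\) to avoid vanishing gradients early in training; the literal form above mirrors Algorithm 1.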
### _Pre-processing_

The dataset was carefully examined for possible defects such as missing observations. More importantly, relevant new variables were derived to corroborate the existing variables and to adequately represent the measures needed to classify observations as anomalous or normal. As an example, the variable "Daily Consumption within a Period" was computed by dividing "Consumption HIS" by the "Number of Days." Similarly, "Running Time per Day" was determined by dividing "Running Time" by the "Number of Days," and "Daily Consumed Quantity between Visits" was calculated by dividing "Quantity Consumed between Visits" by the "Number of Days."

### _Feature Importance_

Due to the numerous variables constituting the dataset, it becomes pertinent to rank the features based on their perceived contribution to the resulting model. For this task, the random forest classifier was employed, as illustrated in Figure 2. Among all the features, "Running Time Per Day" exhibited the highest importance, followed by "Daily Consumption within a Period," "Running Time," "Consumption HIS," and "Maximum Consumption per Day," in descending order of significance. Additionally, the feature with the least importance was "Total Quantity (QTE) Left."

Fig. 1: Building Block of a GAN

### _Anomaly Visualisation_

The scatterplot in Figure 3(a) illustrates the distribution of daily running times for the power plants at TeleInfra telecommunication company's base stations. Notably, some data points in Figure 3(a) surpass the 24-hour threshold, identifying them as "anomalies." This suggests the presence of anomalies in fuel consumption with respect to daily running times. Additionally, Figure 3(b) reveals that certain data points deviate significantly from the rest of the dataset. Both Figures 3(a) and 3(b) are used to confirm the existence of anomalies in the dataset.

### _Label Classification_

Following feature selection, the dataset's target variable was determined by categorizing data points into either anomaly or normal cases based on variables like "Running Time per Day" and "Maximum Consumption Per Day," among others. There are 3829 records (64.88%) classified as normal and 2073 (35.12%) as anomalies. This also highlights the presence of a class imbalance in the data labels.

### _Correlation Analysis_

Examining the correlation matrix (Figure 4) reveals notable correlations, such as the strong positive correlation of 0.83 between "Consumption HIS" and "Running Time." This suggests that higher running time corresponds to increased consumption. Consequently, these variables are deemed relevant for analyzing fuel consumption patterns and detecting anomalies at the base stations.

### _Data Augmentation_

As a deep learning model, a GAN is data-demanding; its performance depends on the availability of a large volume of data. A deep learning model requires consistent, accurate, and complete data [16], making it necessary to develop techniques for enlarging existing datasets when data are limited. One viable method used in the literature is data augmentation, although it is most commonly applied to image and text data. Recent research has applied augmentation techniques such as auto-encoders [17] and Mask Token Replacement (MTR) [18] to tabular data.
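The augmentation scheme adopted in this study, described in the next paragraph, simply perturbs each feature with noise drawn uniformly between zero and that feature's standard deviation. A minimal pandas sketch follows; the `augment` helper and the number of copies are ours (the paper only reports the final size of 187,281 records), and the frame is assumed to contain numeric features only, with labels carried over separately.

```python
# Sketch of the per-feature uniform-noise augmentation: each synthetic copy adds
# U(0, std_j) noise to column j of the original rows.
import numpy as np
import pandas as pd

def augment(df: pd.DataFrame, n_copies: int, seed: int = 0) -> pd.DataFrame:
    rng = np.random.default_rng(seed)
    stds = df.std(axis=0).to_numpy()                      # one noise scale per feature
    copies = [df]
    for _ in range(n_copies):
        noise = rng.uniform(0.0, stds, size=df.shape)     # U(0, std_j) per column
        copies.append(df + noise)                         # labels handled outside this helper
    return pd.concat(copies, ignore_index=True)
```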
In this study, the dataset, which had been reduced to 5902 observations after data cleaning, was augmented by adding a random value from a uniform distribution (ranging from 0 to the standard deviation of each feature) to the existing values in each feature's row. As a result, the augmented dataset comprises a total of 187281 records, preserving the original features.

Fig. 2: Feature Importance

Fig. 3: Anomaly visualisation from the Running Time Per Day at the Base Stations

Fig. 4: Correlation Matrix

## IV Results and Discussion

As recommended in previous research on anomaly detection [19], we followed specific strategies for optimizing our generator and discriminator networks. To be precise, we used the Adam optimizer for the generator network while opting for the stochastic gradient descent optimizer for the discriminator network. The generator network featured a total of five dense layers. These layers were equipped with the \(\tanh\) activation function, and the binary cross-entropy loss function was chosen for the loss calculation. Conversely, the discriminator network exhibited a slightly different structure: it comprised six dense layers, with the crucial addition of dropout layers. These dropout layers played a pivotal role in ensuring that the model did not overfit the training data. Furthermore, we employed the sigmoid activation function in the final layer of the discriminator network to facilitate its discriminative capabilities.

The training losses in Figure 5 show how both the discriminator and generator models converge. In Figure 5(a), we can observe the training loss of the GAN model trained on the dataset without data augmentation. A notable observation is that the training errors for both the discriminator and the generator do not stabilize; they fail to converge asymptotically toward zero as training progresses. This is due to the constrained quantity of data employed for training the model; the limited dataset size contributes to the challenges in achieving the desired convergence behaviour. Figure 5(b) displays a noteworthy contrast: the training loss for both the discriminator and generator exhibits a consistent and smooth convergence toward zero. This favourable convergence behaviour can be attributed to the GAN being trained on the augmented dataset. The augmentation of the dataset plays a crucial role in promoting convergence and, consequently, in achieving an acceptable level of performance. By leveraging the augmented dataset, the GAN is better equipped to navigate the training process, leading to improved stability and convergence in the training loss.

Figure 6 and Table I provide a comprehensive comparison of the performance of the GAN models with and without data augmentation. The comparison highlights a significant disparity in outcomes: the GAN model that incorporates data augmentation demonstrates superior performance across all the evaluation metrics employed in this study. This observation underlines the effectiveness of data augmentation in enhancing the capabilities of deep learning models.

## V Conclusion

Due to the inconsistent power supply in Cameroon, TeleInfra, a telecommunications company, has turned to alternative power sources, predominantly relying on generators. While generating power using generators, the company noticed irregularities in fuel consumption patterns.
These anomalies were identified by assessing variables such as daily running time, daily consumption rate, and similar metrics. The dataset was collected on the different features relevant to detecting anomalies in the power generation plant. Building upon the groundwork in [11], which employed various machine learning techniques including K-Nearest Neighbors, Logistic Regression, the Multilayer Perceptron, and Support Vector Machines, and further extending the research conducted in [12], which introduced a label-assisted auto-encoder approach, our study investigated GANs for anomaly detection. The GAN trains a classifier that allocates a probability score to each sample, thereby determining its classification as either "normal" or "anomalous". This contrasts with the approach adopted in [12], where the input data is encoded into a latent space and the classification of test samples is predicated on the evaluation of reconstruction errors. During the GAN training phase, an initial observation revealed suboptimal performance, potentially attributed to the limited size of the dataset. Consequently, to address this concern, data augmentation was implemented on the initial dataset to generate additional data points. Subsequently, when employing this augmented dataset, the GAN demonstrated superior performance compared to the models introduced in [11, 12] in terms of accuracy. However, it is noteworthy that in certain metrics, such as precision, the GAN's performance did not match its performance in accuracy. The evaluation metrics effectively substantiate the advantage gained by incorporating augmented data during the model's training process. The augmented data's influence is evident in the augmented model's ability to consistently achieve superior results, thereby affirming the value of data augmentation as a strategy for optimizing the GAN's performance in various aspects of anomaly detection.

\begin{table} \begin{tabular}{c|c|c} \hline \hline Metrics & Without data augmentation & With data augmentation \\ \hline \hline Accuracy Score & 0.6645 & 0.9899 \\ Precision & 0.5455 & 0.78538 \\ Recall & 0.0151 & 0.7966 \\ \(F_{1}\) Score & 0.0294 & 0.6457 \\ \hline \hline \end{tabular} \end{table}

TABLE I: The evaluation metrics for the GAN model indicate that the augmented data version outperforms the non-augmented counterpart.

Fig. 5: The training losses of the GANs for anomaly detection with and without data augmentation.

## Acknowledgment

The authors would like to express their gratitude to the reviewers for their valuable comments. Additionally, MA extends his appreciation to Rhodes University for its support in conducting this research.
2309.12652
Classification of Classical Spin Liquids: Topological Quantum Chemistry and Crystalline Symmetry
Frustrated magnetic systems can host highly interesting phases known as classical spin liquids (CSLs), which feature {extensive} ground state degeneracy and lack long-range magnetic order. Recently, Yan and Benton et al. proposed a classification scheme of CSLs in the large-$\mathcal{N}$ (soft spin) limit [arXiv.2305.00155], [arXiv:2305.19189]. This scheme classifies CSLs into two categories: the algebraic CSLs and the fragile topological CSLs, each with their own correlation properties, low energy effective description, and finer classification frameworks. In this work, we further develop the classification scheme by considering the role of crystalline symmetry. We present a mathematical framework for computing the band representation of the flat bands in the spectrum of these CSLs, which extends beyond the conventional representation analysis. It allows one to determine whether the algebraic CSLs, which features gapless points on their bottom flat bands, are protected by symmetry or not. It also provides more information on the finer classifications of algebraic and fragile topological CSLs. We demonstrate this framework via concrete examples and showcase its power by constructing a pinch-line algebraic CSL protected by symmetry.
Yuan Fang, Jennifer Cano, Andriy H. Nevidomskyy, Han Yan
2023-09-22T06:45:47Z
http://arxiv.org/abs/2309.12652v2
# Classification of Classical Spin Liquids: Topological Quantum Chemistry and Crystalline Symmetry ###### Abstract Frustrated magnetic systems can host highly interesting phases known as classical spin liquids (CSLs), which feature extensive ground state degeneracy and lack long-range magnetic order. Recently, Yan and Benton _et al._ proposed a classification scheme of CSLs in the large-\(\mathcal{N}\) (soft spin) limit [arXiv:2305.00155, arXiv:2305.19189]. This scheme classifies CSLs into two categories: the algebraic CSLs and the fragile topological CSLs, each with their own correlation properties, low energy effective description, and finer classification frameworks. In this work, we further develop the classification scheme by considering the role of crystalline symmetry. We present a mathematical framework for computing the band representation of the flat bands in the spectrum of these CSLs, which extends beyond the conventional representation analysis. It allows one to determine whether the algebraic CSLs, which features gapless points on their bottom flat bands, are protected by symmetry or not. It also provides more information on the finer classifications of algebraic and fragile topological CSLs. We demonstrate this framework via concrete examples and showcase its power by constructing a pinch-line algebraic CSL protected by symmetry. ## I Introduction The investigation of magnetic systems lacking long-range order has a rich history, spanning several decades starting from the exploration of disorder effects in spin glasses [1; 2] and the proposal of resonating valence bond states [3; 4], which have become fundamental to contemporary research in strongly frustrated magnets. A particular concept of interest is that of classical spin liquids (CSLs), which emerge when spin models exhibit an extensively degenerate ground state manifold as a consequence of frustration, and fluctuations among ground states preclude any form of ordering [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. Despite their general instability to perturbations at absolute zero temperature (\(T=0\)), the substantial entropy that CSLs possess at low energies allows them to dominate the finite temperature physics in a finite region of model parameters. Moreover, CSLs often serve as 'parent states' or an intermediate temperature regime for quantum spin liquids (QSLs), which arise when quantum fluctuations introduce dynamics among the classical ground states [20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. Therefore, understanding and classifying spin liquids is of great importance. Successful classification schemes for quantum spin liquids have been developed based on the projective symmetry group [30] and the modern perspective of gapped QSLs [31; 32]. In contrast, the classification of CSLs has made much slower progress. Previous works have attempted to classify frustrated classical spin systems using constraint counting [7], linearization around given spin configurations [33], supersymmetry-inspired constructions [34], or identification of topological invariants tailored to specific lattices [17]. Recently, a more general scheme has been proposed for classifying CSLs based on their energy spectrum [35; 36]. More concretely, the scheme utilizes the connection between CSLs and physics of flat bands at the bottom of the Hamiltonian's spectrum, responsible for the extensive degeneracy of the classical ground states ((Fig. 1(a))). In this classification scheme, CSLs are divided into two categories. 
The first is characterized by a topological invariant that persists as long as the lowest flat bands in its energy spectrum remain separated by a gap from the higher dispersive bands. This category is called "fragile topological" classical spin liquids (FT-CSLs, Fig. 1(b)) since the topological characteristics can be made to disappear by adding spins to the unit cell without closing the spectral gap. The second category, called algebraic CSLs (Fig. 1(c)), occupies the boundaries between FT-CSLs where the spectral gap closes, as illustrated schematically in Fig. 1(d). The eigenvector configurations around the gap-closing points determine the emergent Gauss's law describing the algebraic spin correlations in this category of CSLs. In this work, we advance the above classification scheme by investigating the consequences of crystalline symmetry on the classification of classical spin liquids, in analogy to how crystalline symmetries enrich the classification of topological phases. Specifically, motivated by the classification of band representations in Topological Quantum Chemistry (TQC) [37], we develop a mathematical framework for determining how the flat bands transform under symmetry, and elucidate whether symmetry protects the gap closing between the bottom flat bands and the higher dispersive bands. If the gap closing is protected, then the symmetry forbids a FT-CSL, which is a significant constraint for model building or material analysis. Further details of the symmetry representations yield information on the topological classification of FT-CSLs and the degeneracy structure of the algebraic CSLs. Importantly, the present mathematical framework goes beyond how TQC is used to classify electron band structures. Specifically, the symmetry data of the lattice spins is generally not sufficient to classify the CSL. An additional piece of information - the emergent, virtual lattice of the constraint terms in the Hamiltonian and their crystalline symmetry properties - is crucial and must be incorporated into the symmetry analysis. In the following, we briefly review the constraint Hamiltonian formalism of CSLs and its classification in Sec. II. We then give the recipe of the abstract crystalline symmetry analysis in Sec. III. This recipe is then applied to two known models on the Kagome model in Sec. IV as a demonstration. We then introduce a new pinch-line model with symmetry-protected nodal line degeneracies guided by our insight from the symmetry classification in Sec. V. Finally we summarize our results in Sec. VI. ## II Brief review of the constraint Hamiltonian and CSL classification We study spin models in the limit of a large number of spin components \(\mathcal{N}\). This is equivalent to adopting a "soft spin" approximation, where the constraint on the spin length \(\mathbf{S}^{2}=1\) is enforced only on average as \(\langle S^{2}\rangle=1\), by introducing a Lagrange multiplier or "chemical potential" to the spins in the Self-Consistant Gaussian Approximation, a method generalized from the Luttinger-Tisza method [38, 39]. This approximation has been demonstrated to be valid for many Heisenberg candidate CSLs [40, 41, 11, 8]. 
The Hamiltonians of these CSLs can be written in the _constraint form_,
\[\mathcal{H}=\sum_{\mathbf{R}\in\text{u.c.}}\sum_{I=1}^{M}[\mathcal{C}_{I}(\mathbf{R})]^{2}\, \tag{1}\]
where for a given constraint index \(I\), the _constraint_ \(\mathcal{C}_{I}(\mathbf{R})\) is a scalar that defines a linear combination of a local cluster of spins inside and around the unit cell located at \(\mathbf{R}\) (see Eqs. (10), (16) for concrete examples). The Hamiltonians we consider are translationally invariant and consist of sums of squared constrainers. A more explicit way to express \(\mathcal{C}_{I}(\mathbf{R})\) without referring to pictures of the lattice is to write
\[\begin{split}\mathcal{H}&=\sum_{\mathbf{R}\in\text{u.c.}}\sum_{I=1}^{M}[\mathcal{C}_{I}(\mathbf{R})]^{2}\\ &=\sum_{\mathbf{R}\in\text{u.c.}}\sum_{I=1}^{M}\left[\sum_{\mathbf{r}}\mathbf{S}(\mathbf{r})\cdot\mathbf{C}_{I}(\mathbf{R},\mathbf{r})\right]^{2}\.\end{split} \tag{2}\]
Here, \(\mathbf{S}(\mathbf{r})=(S_{1},\dots,S_{N})(\mathbf{r})\) is the vector whose \(N\) components are the spins on the \(N\) sublattice sites, respectively. For example, \(S_{b}(\mathbf{r})\) is the spin on the \(b\)-th sublattice site in the unit cell labelled by \(\mathbf{r}\). The term \(\sum_{\mathbf{r}}\mathbf{S}(\mathbf{r})\cdot\mathbf{C}_{I}(\mathbf{R},\mathbf{r})\) is the constraint \(\mathcal{C}_{I}(\mathbf{R})\) written in a more explicit form, that is,
\[\mathbf{C}_{I}(\mathbf{0},\mathbf{r})=\left(\begin{array}{c}\sum_{j\in 1\text{st sub-lat. sites}}c_{1,j}\delta_{\mathbf{r},\mathbf{a}_{1,j}}\\ \sum_{j\in 2\text{nd sub-lat. sites}}c_{2,j}\delta_{\mathbf{r},\mathbf{a}_{2,j}}\\ \vdots\\ \sum_{j\in N\text{-th sub-lat. sites}}c_{N,j}\delta_{\mathbf{r},\mathbf{a}_{N,j}}\end{array}\right)\ ; \tag{3}\]
\[\mathbf{C}_{I}(\mathbf{R},\mathbf{r})=\mathbf{C}_{I}(\mathbf{0},\mathbf{r}-\mathbf{R}) \tag{4}\]
encodes exactly the information of how spins are summed together in \(\mathcal{C}_{I}(\mathbf{R})\). The \(c_{a,j}\) are numerical coefficients, and \(\delta_{\mathbf{r},\mathbf{a}_{a,j}}\) selects the spin in the unit cell \(\mathbf{a}_{a,j}\) on sublattice \(a\), which is summed with coefficient \(c_{a,j}\).

Consider a system with \(N\) sublattice sites and \(M\) linearly independent constrainers per unit cell, where \(M<N\). There is an extensive degeneracy of the ground states in this system, since the condition of all constrainers being zero does not fix all the spins. Consequently, the Hamiltonian spectrum in momentum space has \(N-M\) degenerate flat bands, each corresponding to one set of these degenerate ground states. Above these flat bands, there are \(M\) higher dispersive bands that encode the finite-energy states. References [35; 36] have discussed in great detail how the structure of the bottom flat bands and the higher dispersive bands can be used to classify the CSLs.

Figure 1: (a) The CSL Hamiltonian generally features one or more degenerate flat bands at the bottom of its spectrum. (b) Algebraic CSLs feature gap closing points between the bottom flat bands and the dispersive bands. The band touching point determines the emergent Gauss's law. \(\mathcal{B}_{FB}\) is the flat band representation, and will be discussed in detail in the main text. (c) Fragile Topological CSLs (FT-CSLs) have no such gap-closing points, and are classified by their eigenvector homotopy. (d) The landscape of the CSL phase diagram consists of FT-CSLs whose boundaries are Algebraic CSLs.
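The flat-band counting can be checked numerically: Fourier transforming each constrainer gives a vector \(\mathbf{C}_{I}(\mathbf{q})\) with one component per sublattice, the momentum-space coupling matrix is \(J(\mathbf{q})=\sum_{I}\mathbf{C}_{I}(\mathbf{q})\mathbf{C}_{I}^{\dagger}(\mathbf{q})\), and its null space at each \(\mathbf{q}\) spans the ground-state sector, giving the \(N-M\) zero-energy flat bands. The numpy sketch below uses a one-dimensional, two-sublattice toy constrainer of our own invention purely for illustration.

```python
# Count flat bands by diagonalizing J(q) built from the Fourier-transformed constrainers.
import numpy as np

def spectrum(q, constrainers):
    """constrainers: list of callables q -> C_I(q), each a length-N complex vector."""
    C = np.array([c(q) for c in constrainers])   # shape (M, N)
    J = C.conj().T @ C                           # J_ab(q) = sum_I C_I(q)_a^* C_I(q)_b
    return np.linalg.eigvalsh(J)                 # real eigenvalues, ascending

# Toy example (ours): one constrainer C(R) = S_1(R) + S_1(R+1) + S_2(R), so N=2, M=1.
toy = [lambda q: np.array([1.0 + np.exp(-1j * q), 1.0])]

for q in np.linspace(-np.pi, np.pi, 7):
    bands = spectrum(q, toy)
    print(f"q = {q:+.2f}   bands = {np.round(bands, 3)}")   # N - M = 1 flat band at zero
```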
For simplicity, let us use the case of \(M=1\) (one constraint per unit cell) to illustrate this scheme. The scenario yields \(N-1\) flat bands at zero energy, which corresponds to spin states obeying the constraint \(\mathcal{C}(\mathbf{R})=0\); we have dropped the index \(I\) since there is only one constraint per unit cell. Additionally, there is a higher dispersive band describing spin states violating the constraint. The eigenvector of this dispersive band, denoted as \(\mathbf{T}(\mathbf{q})\), can be expressed analytically and is precisely the Fourier transform of the constraint \(\mathbf{C}_{I}(\mathbf{0},\mathbf{r})\). The dispersion of the higher band is \(\omega_{T}(\mathbf{q})=|\mathbf{T}(\mathbf{q})|^{2}\) (see example in Sec. IV.2 in addition to the detailed formulation in Refs. [35; 36]). The spectrum of the Hamiltonian encodes the classification of the CSL into one of two categories, determined by whether the dispersive band has a singular band touching point with the flat band or not. Within each category, a finer classification can be made by examining the configuration of eigenvectors around the gapless point (first category) or its global topology (second category). In more detail, the classification is as follows. **1. Algebraic CSL:** If a gap closure point exists between the bottom flat bands and the higher dispersive band, the system is an algebraic CSL with algebraically decaying spin correlations. Here, the ground states conform to a charge-free Gauss's law, which is derived from the Taylor expansion of \(\mathbf{T}(\mathbf{q})\) around the band touching point. Specifically, if for the \(a^{\text{th}}\) spin in the unit cell the lowest order term in the expansion is of order \(m_{a}\geq 1\) \[T_{a}(\mathbf{q})=\sum_{j=0}^{m_{a}}c_{aj}(-ik_{x})^{j}(-ik_{y})^{m_{a}-j}, \quad a=1,\dots,N, \tag{5}\] then the ground states described by the spin configurations orthogonal to \(\mathbf{T}(\mathbf{q})\), i.e., \[\mathbf{T}(\mathbf{q})\cdot\tilde{\mathbf{S}}(\mathbf{q})=0, \tag{6}\] in momentum space. Reverse-Fourier transforming this back to real space, we obtain the Gauss's law \[\sum_{a=1}^{N}\left(\sum_{j=0}^{m_{a}}c_{aj}^{*}(\partial_{x})^{j}(\partial_{ y})^{m_{a}-j}S_{a}\right)\equiv\sum_{a=1}^{N}D_{a}^{(m_{a})}S_{a}=0, \tag{7}\] where \(D_{a}^{(m_{a})}\) denotes the differential operator of order \(m_{a}\geq 1\) from Fourier transforming the \((-ik_{x})^{j}(-ik_{y})^{m_{a}-j}\) terms. This principle also applies to models with multiple constraints per unit cell (see Refs. [35; 36] for detailed discussions). **2. Fragile Topological CSL:** When the bottom flat band is entirely gapped from the higher dispersive band, \(\mathbf{T}(\mathbf{q})\) becomes a non-zero, smoothly defined vector field in the target manifold \(\mathbb{C}P^{N-1}\) (if complex) or \(\mathbb{R}P^{N-1}\) (if real) across the entire BZ. It can be classified by how it winds around the BZ, which is a \(d\)-torus, \(T^{d}\). The winding is encoded by the relative homotopy group \([T^{d},\mathbb{C}P^{N-1}]\) (or \([T^{d},\mathbb{R}P^{N-1}]\)) of the map \[\hat{\mathbf{T}}(\mathbf{q}):T^{d}\rightarrow\mathbb{C}P^{N-1}(\text{or } \mathbb{R}P^{N-1}) \tag{8}\] The homotopy class is invariant under smooth changes to the Hamiltonian as long as it maintains the constraint form and the gap between the bottom flat and upper dispersive bands. If the map \(\hat{\mathbf{T}}(\mathbf{q})\) belongs to a non-trivial homotopy class, the corresponding gapped phase is (fragile) topological. 
Otherwise, the CSL is topologically trivial. The fragility of the classification stems from the fact that adding (say, \(P\)) spins to the unit cell without closing the spectral gap changes the target manifold to \(\mathbb{C}P^{N+P-1}\) (or \(\mathbb{R}P^{N+P-1}\)), whose relative homotopy group may be trivial. The homotopy class may also change by closing the spectral gap, at which point a band touching characterizes an algebraic CSL. Thus, **the boundaries of fragile topological CSLs are algebraic CSLs.** The homotopy classification generalizes to systems with multiple degenerate flat bands or multiple higher bands (also see Refs. [35; 36] for detailed discussions). ## III Crystalline symmetry analysis In previous studies [35; 36], several models utilizing the constraint formalism have been reviewed and proposed (see Tables 2 and 3 in Ref. [36]). To classify a Hamiltonian written in the constraint form, it is typically necessary to Fourier transform it into momentum space, diagonalizing the Hamiltonian. An essential inquiry at this juncture concerns the ability to determine the class of the CSL solely based on the crystalline symmetry information of the model, without knowing the exact form of the constraint. The question is motivated by analogy to TQC [37], which uses symmetry to constrain the topology and connectivity of band structures without knowledge of the exact Hamiltonian. The answer to this question provides significant insights into the physics of any lattice model, such as discerning whether an algebraic CSL is protected by crystalline symmetry, or merely accidental. In this section, we outline the symmetry analysis employed to identify band touchings and determine the topology of the flat bands within the constraint Hamiltonian. As one may anticipate, this analysis has a close relation to the band irreducible representations (irreps) of the crystalline symmetry group used in TQC. However, this approach by itself is insufficient to produce the CSL classification, as we will now explain. One notable example that will be extensively examined later in Sec. IV is the comparison between the kagome antiferromagnetic (AFM) model and the kagome hexagon model. Although both models feature spins arranged on the kagome lattice, the former exhibits symmetry-protected gapless points on the bottom flat band, whereas the latter does not. While the band symmetry analysis does predict gap-closures at specific high symmetry points within the Brillouin Zone (BZ), it fails to discern whether these closures occur on the bottom flat band(s) (which is crucial for the CSL classification) or among the higher dispersive bands (which is irrelevant for the CSL classification). From this shortcoming, we discover that, apart from the irrep analysis of the microscopic spins and their lattice symmetries, the constrainers, whose centers define an "auxiliary lattice" that is generically distinct from the lattice of spins, play a critical role in determining the physics of the CSL. This second aspect of physics goes beyond the irrep analysis of the local spins and encapsulates crucial information regarding the properties of the flat bands. We now describe the symmetry classification scheme. Consider a CSL consisting of spins on a lattice \(S\), which is invariant under a space group \(\mathcal{G}\). Every local spin at a particular lattice site transforms as a representation of the site-symmetry group at that site. 
The local representation induces a band representation \(\mathcal{BR}_{S}\), which describes the symmetry of the entire spectrum, i.e., of both the dispersive bands and the flat bands [43; 44; 45; 37; 46]. While \(\mathcal{BR}_{S}\) contains information about band touching points in the spectrum, it does not distinguish band touching points between the dispersive bands themselves from band touchings between the dispersive and flat bands. Thus, to derive symmetry constraints specific to the latter, we need more information.

The extra information lies in the constrainers. Specifically, the dispersive bands live in the Hilbert subspace spanned by the constrainers. Thus, to distinguish properties of the flat and dispersive bands, we must apply TQC to the constrainers. Let \(C\) denote the lattice comprised of the centers \(\mathbf{R}\) of each constraint \(\mathcal{C}_{I}(\mathbf{R})\) in Eq. (1). Note that this lattice is virtual, in the sense that it is generically distinct from the lattice \(S\) inhabited by the physical spins. Since the constrainers must also satisfy the space group symmetry, each constraint transforms as a representation of the site-symmetry group of the corresponding site in the lattice \(C\), which induces a band representation \(\mathcal{BR}_{C}\) describing the symmetry of the dispersive bands. Since the flat bands and dispersive bands together comprise the entire spectrum, the band representation of the flat bands is determined by
\[\mathcal{B}_{\text{FB}}=\mathcal{BR}_{S}\boxminus\mathcal{BR}_{C}\, \tag{9}\]
where \(\boxminus\) denotes the "difference" of two band representations, i.e., for each point in the BZ, the set of representations in \(\mathcal{BR}_{S}\) not contained in \(\mathcal{BR}_{C}\). Examples of band representations are listed in Table 1, where \(+\) indicates appending irreps at different momenta, \(\oplus\) indicates the addition of irreps at the same momentum, and \(\boxplus\) indicates the union of two band representations, defined analogously to \(\boxminus\).

The symmetry data determines the symmetry-enforced band touching points and the topology of the gapped flat bands, as follows:

* If \(\mathcal{BR}_{C}\nsubseteq\mathcal{BR}_{S}\), then the spectrum has symmetry-protected band touching points on the bottom flat band and the model is an algebraic CSL. The degeneracy of the symmetry-protected band touching points between the flat bands and the dispersive bands is determined by the subduced representations of \(\mathcal{BR}_{C}\) and \(\mathcal{BR}_{S}\) at each momentum \(\mathbf{q}\). Specifically, the number of dispersive bands is \(\dim(\mathcal{BR}_{C}\downarrow\mathbf{q}\cap\mathcal{BR}_{S}\downarrow\mathbf{q})\) and, similarly, the number of zero energy states \(n_{0}\) can be deduced from the band representations (see Appendix A).

* If \(\mathcal{BR}_{C}\subseteq\mathcal{BR}_{S}\), then the spectrum has either no band touching points on the bottom flat band, or the band touching points are not protected by symmetry. The system belongs to the FT-CSL class. In this case, at each \(\mathbf{q}\) the irrep of the constrainer band is contained in \(\mathcal{BR}_{S}\), and the flat bands are fully gapped from the dispersive bands. The topology of the flat bands is determined by their symmetry data vector \(\mathcal{B}_{\text{FB}}\) and can be either atomic (trivial) or non-trivial but fragile, classified using the symmetry indicators developed for electron band structures [47; 48; 49; 50].
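The comparison in Eq. (9) is simple irrep bookkeeping and can be automated once the induced band representations are tabulated. In the sketch below, the \(\Gamma\)-point content for the kagome AFM example is inferred from Eq. (15) of the application section (a common \(\Gamma_{1}^{+}\), with \(\Gamma_{5}^{+}\) left over in \(\mathcal{BR}_{S}\) and \(\Gamma_{4}^{-}\) in \(\mathcal{BR}_{C}\)); treat those labels as illustrative input rather than authoritative tables.

```python
# Multiset bookkeeping behind Eq. (9): any constrainer irrep missing from BR_S at a
# given momentum signals a symmetry-enforced touching between flat and dispersive bands.
from collections import Counter

def flat_band_irreps(br_spins, br_constrainers):
    """Return (B_FB, leftover constrainer irreps) at one momentum."""
    b_fb = Counter(br_spins) - Counter(br_constrainers)
    missing = Counter(br_constrainers) - Counter(br_spins)
    return b_fb, missing

# Kagome AFM at Gamma (inferred from Eq. (15)): BR_S = G1+ (+) G5+, BR_C = G1+ (+) G4-.
b_fb, missing = flat_band_irreps(["G1+", "G5+"], ["G1+", "G4-"])
if missing:
    print("symmetry-enforced band touching; B_FB =", dict(b_fb), "minus", dict(missing))
else:
    print("flat bands can be gapped here; B_FB =", dict(b_fb))
```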
The classification of flat bands in our constraint Hamiltonians is mathematically the same as the classification of flat bands in the bipartite single-particle band structures studied in Ref. [51]. In Appendix B we prove their equivalence by constructing a map between the two systems, which relies on the introduction of the auxiliary lattice defined by the constrainers. ## IV Application: two kagome models We now demonstrate the symmetry analysis with a concrete example of the kagome AFM model versus the kagome hexagon model (cf. Fig. 2(a) for the kagome lattice). The two models have the same band representation \(\mathcal{BR}_{S}\) because they both feature spins on the kagome lattice. However, their CSL properties are different. The former is an algebraic CSL whose spectrum has a gapless point on the bottom flat band, and the ground state fluctuations obey the emergent Maxwell Gauss's law \(\mathbf{\nabla}\cdot\mathbf{E}=0\). The latter is a topological CSL, with two degenerate bottom flat bands that are fully gapped away from the higher dispersive band. Its higher dispersive band eigenvector \(\mathbf{T}(\mathbf{q})\) is a three-component real vector on \(S^{2}\), whose homotopy classes are labeled by the skyrmion number. The distinction between the two models can only be discerned by considering the symmetry of the constrainers. ### Kagome AFM model The Hamiltonian of the kagome AFM model is \[\mathcal{H}_{\text{KAFM}} =2\sum_{\langle i,j\rangle}S_{i}S_{j}+2\sum_{i}S_{i}^{2}\] \[=\sum_{\triangle}\left(\sum_{i\in\triangle}S_{i}\right)^{2}+\sum_{\bigtriangledown}\left(\sum_{i\in\bigtriangledown}S_{i}\right)^{2}\] \[\equiv\sum_{\mathbf{R}\text{ for }\triangle}[\mathcal{C}_{\text{KAFM1}}(\mathbf{R})]^{2}+\sum_{\mathbf{R}\text{ for }\bigtriangledown}[\mathcal{C}_{\text{KAFM2}}(\mathbf{R})]^{2}. \tag{10}\] The Hamiltonian consists of two constrainers \(\mathcal{C}_{\text{KAFM1,2}}(\mathbf{R})\), defined by sums over the spins on an up- or down-pointing triangle centered at \(\mathbf{R}\), respectively (see Fig. 2(b)); the sums in Eq. (10) are over the constraint centers. Ground states are hence defined by the two constraints: \[\mathcal{C}_{\text{KAFM1,KAFM2}}(\mathbf{R})=0\quad\forall\ \mathbf{R}\,, \tag{11}\] on every triangular plaquette. Diagonalizing the Hamiltonian in momentum space, we obtain the spectrum shown in Fig. 2(c). There is one bottom flat band with gapless points where the dispersive bands touch. Using the techniques developed in Refs. [35; 36], expanding the dispersive band eigenvector at the gapless point yields the Gauss's law, which turns out to be that of Maxwell's theory: \[\begin{split}& 3\partial_{x}(-S_{2}+S_{3})+\sqrt{3}\partial_{y}(2S_{1}-S_{2}-S_{3})\\ &\equiv\partial_{x}E_{x}+\partial_{y}E_{y}=0.\end{split} \tag{12}\] This is manifested in the pinch points in the equal-time spin correlation function shown in Fig. 2(d). We now apply the symmetry analysis introduced in the previous section to prove that the band touching between the flat and dispersive bands is required by symmetry. This model is in space group \(\mathcal{G}=P6/mmm\). The spins are located at the 3f Wyckoff position, which forms a kagome lattice. Each classical spin transforms as the scalar irrep \(\text{A}_{\text{g}}\) of the site symmetry group. Thus, \[\mathcal{BR}_{S}=(\text{A}_{\text{g}})_{3\text{f}}\uparrow\mathcal{G}. \tag{13}\] Its irreps at high symmetry points are listed in Table 1.
The constrainers are centered at the 2c Wyckoff position, forming a honeycomb lattice whose sites are located at the center of each triangle of the kagome lattice. Each constraint transforms as the scalar irrep \(\text{A}_{1}^{\prime}\) of the site symmetry group. Thus, \[\mathcal{BR}_{C,\text{KAFM}}=(\text{A}_{1}^{\prime})_{2c}\uparrow\mathcal{G}. \tag{14}\] Applying Eq. (9), we get the irreps of the flat band (see Table 1) \[\begin{split}\mathcal{B}_{\text{FB,KAFM}}&=\mathcal{BR}_{S}\boxminus\mathcal{BR}_{C,\text{KAFM}}\\ &=(\text{A}_{\text{g}})_{3\text{f}}\uparrow\mathcal{G}\boxminus(\text{A}_{1}^{\prime})_{2c}\uparrow\mathcal{G}\\ &=\left(\Gamma_{5}^{+}\ominus\Gamma_{4}^{-}\right)+(\text{K}_{1})+\left(\text{M}_{3}^{-}\right).\end{split} \tag{15}\] At the \(\Gamma\) point, there is an irrep difference \(\Gamma_{5}^{+}\ominus\Gamma_{4}^{-}\). The fact that an irrep difference appears rather than a sum indicates a band touching point at \(\Gamma\) enforced by symmetry. ### Kagome hexagon model We now discuss the kagome-hexagon model [13] as an example of a fragile topological CSL with short-ranged correlations. Its Hamiltonian is defined as \[\mathcal{H}_{\text{KH}}=\sum_{\mathbf{R}\in\text{all hexagons}}[\mathcal{C}_{\text{KH}}(\mathbf{R})]^{2}\,, \tag{16}\] where the sum over \(\mathbf{R}\) runs over hexagon centers on the kagome lattice (indicated in Fig. 3(a,b)), or equivalently the centers of all unit cells. The constraint \(\mathcal{C}_{\text{KH}}(\mathbf{R})\) is the sum of the six spins around each hexagon as labeled in Fig. 3(a,b): \[\mathcal{C}_{\text{KH}}(\mathbf{R})=\sum_{i\in\text{ hex. at }\mathbf{R}}S_{i}. \tag{17}\] The ground states are hence defined by the constraint \(\mathcal{C}_{\text{KH}}(\mathbf{R})=0\) on every hexagonal plaquette. Diagonalizing \(\mathcal{H}_{\text{KH}}\) in momentum space yields a spectrum with three bands, of which the lowest two are flat and degenerate (Fig. 3(c)). This is the case of one constraint per unit cell discussed in Sec. II, and the eigenvector of the top band can be found by Fourier-transforming the constraint (as done in Refs. [35; 36]), \[\mathbf{T}(\mathbf{q})=\begin{pmatrix}\cos(\sqrt{3}q_{x})\\ \cos\left(-\frac{\sqrt{3}}{2}q_{x}+\frac{3}{2}q_{y}\right)\\ \cos\left(-\frac{\sqrt{3}}{2}q_{x}-\frac{3}{2}q_{y}\right)\end{pmatrix}, \tag{18}\] and its dispersion is \(\omega(\mathbf{q})=\left|\mathbf{T}(\mathbf{q})\right|^{2}\). One can then explicitly see that there are no band touchings between the upper band and the two flat bands at any point in the BZ, and consequently no pinch points in the correlation function (Fig. 3(d)). Accordingly, the real space correlations remain short ranged, with a correlation length on the order of the nearest-neighbor distance at \(T=0\). The ground state fluctuations are not described by any effective Gauss's law due to the absence of gapless points. We now use symmetry to explain why the flat bands are gapped. As in the previous example, this model is in space group \(\mathcal{G}=P6/mmm\), with spins located at the 3f Wyckoff position, forming a kagome lattice. The band representation is thus the same as in Eq. (13): \[\mathcal{BR}_{S}=(\mathrm{A_{g}})_{3\mathrm{f}}\uparrow\mathcal{G}. \tag{19}\] The symmetry of the constrainers is different, however. They are located at the 1a Wyckoff position, which forms a triangular lattice with each site at the center of a hexagon in the kagome lattice. Each constrainer transforms as the irrep \(\mathrm{A_{1g}}\) of the site symmetry group.
Thus, their lattice irreps are \[\mathcal{BR}_{C,\mathrm{KH}}=(\mathrm{A_{1g}})_{1\mathrm{a}}\uparrow\mathcal{G}, \tag{20}\] which is distinct from the previous example of the kagome AFM (contrast with Eq. (14)). Applying Eq. (9), we obtain (see Table 1) \[\begin{split}\mathcal{B}_{\mathrm{FB,KH}}&=\mathcal{BR}_{S}\boxminus\mathcal{BR}_{C,\mathrm{KH}}\\ &=(\mathrm{A_{g}})_{3\mathrm{f}}\uparrow\mathcal{G}\boxminus\left(\mathrm{A_{1g}}\right)_{1\mathrm{a}}\uparrow\mathcal{G}\\ &=\left(\Gamma_{5}^{+}\right)+(\mathrm{K_{5}})+\left(\mathrm{M_{3}^{-}}\oplus\mathrm{M_{4}^{-}}\right).\end{split} \tag{21}\] In contrast to Eq. (15), no \(\ominus\) signs appear in the last line of Eq. (21). Thus, all irreps of \(\mathcal{BR}_{C,\mathrm{KH}}\) are included in \(\mathcal{BR}_{S}\), which indicates that the flat bands are fully gapped. However, the irreps that appear in \(\mathcal{B}_{\mathrm{FB,KH}}\) do not correspond to any sum of elementary band representations [47; 49; 50]. Thus, the flat bands have fragile topology, i.e., there is no way to understand them as coming from localized degrees of freedom. Comparing these two examples on the kagome lattice proves that the irreps of the spins on the lattice, which determine \(\mathcal{BR}_{S}\), do not provide enough information to determine the class of the CSL. Specifically, \(\mathcal{BR}_{S}\) requires a gap-closing at the \(\Gamma\) point because of the two-dimensional irrep \(\Gamma_{5}^{+}\), but it does not specify whether the band touching is between the flat band and a dispersive band (kagome AFM), between the two degenerate flat bands (kagome hexagon), or even between the two dispersive bands. To distinguish between these possibilities from symmetry requires the irreps of the constrainers, which are contained in \(\mathcal{BR}_{C}\). We note that the two constrainer models of spins on the kagome lattice discussed above can be mapped onto the two bipartite models of electrons discussed in the main text and Fig. 1 in Ref. [51], with the distinction that the physical spins are situated on only one sublattice; the other, virtual sublattice corresponds to the centers of the constrainers. The band representation analysis is however identical to that in Ref. [51], as follows from the equivalence we prove in Appendix B. Figure 2: (a) Kagome lattice for the kagome AFM model (Eq. (10)). (b) The two constrainers of the kagome model involve sites in the shaded regions. The ground states are defined by the constraint that the sum of spins on each triangle must vanish (Eq. (10)). (c) Spectrum \(\omega(\mathbf{q})\) that arises from diagonalizing the Hamiltonian Eq. (10). There is one flat band at the bottom of the spectrum and two higher dispersive bands with gap-closing points on the flat band. (d) Spin structure factor showing pinch points at the position of gap-closing points. Figure 3: (a) Kagome lattice for the kagome hexagon model (Eq. (16)). (b) Constrainer of the kagome hexagon model. Classical spins are arranged on a kagome lattice, with ground states defined by the constraint that the sum of spins on each hexagonal plaquette must vanish (Eq. (17)). (c) Spectrum \(\omega(\mathbf{q})\) that arises from diagonalizing the Hamiltonian (Eq. (16)) in momentum space. There are two degenerate flat bands at the bottom of the spectrum and a dispersive upper band, with no band touchings between the upper and lower bands. (d) Spin structure factor showing an absence of singularities.
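The conclusions of this comparison can be cross-checked numerically by diagonalizing the two constrainer Hamiltonians in momentum space, \(H(\mathbf{q})=\sum_{I}\mathbf{T}_{I}(\mathbf{q})\mathbf{T}_{I}^{\dagger}(\mathbf{q})\). The sketch below assumes one concrete choice of sublattice coordinates and constrainer assignments for the kagome lattice (any consistent convention yields the same eigenvalues): the kagome AFM spectrum has a single flat band touched by a dispersive band at \(\Gamma\), while the kagome hexagon spectrum has two degenerate flat bands separated from the top band by a finite gap.

```python
import numpy as np

a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
delta = [np.array([0.5, 0.0]),               # sublattice A
         np.array([0.25, np.sqrt(3) / 4]),   # sublattice B
         np.array([0.25, -np.sqrt(3) / 4])]  # sublattice C

def T_vec(q, members):
    """Fourier transform of one constrainer; members = [(sublattice, Bravais shift), ...]."""
    t = np.zeros(3, dtype=complex)
    for s, R in members:
        t[s] += np.exp(1j * np.dot(q, R + delta[s]))
    return t

def bands(q, constrainers):
    """Eigenvalues of H(q) = sum_I T_I(q) T_I(q)^dagger, in ascending order."""
    H = sum(np.outer(t, t.conj()) for t in (T_vec(q, c) for c in constrainers))
    return np.linalg.eigvalsh(H)

zero = np.zeros(2)
kafm = [[(0, zero), (1, zero), (2, a2)],       # triangle constrainer 1
        [(0, zero), (1, a1 - a2), (2, zero)]]  # triangle constrainer 2
khex = [[(0, zero), (0, -a1), (1, zero), (1, -a2), (2, zero), (2, a2 - a1)]]  # hexagon

qs = [np.array([qx, 0.0]) for qx in np.linspace(0.0, 2 * np.pi, 201)]
kafm_b = np.array([bands(q, kafm) for q in qs])
khex_b = np.array([bands(q, khex) for q in qs])

print("KAFM lowest band (flat):   ", kafm_b[:, 0].max())   # ~ 0 everywhere
print("KAFM gap above flat band:  ", kafm_b[:, 1].min())   # ~ 0: touching at Gamma
print("KH lowest two bands (flat):", khex_b[:, :2].max())  # ~ 0 everywhere
print("KH gap above flat bands:   ", khex_b[:, 2].min())   # > 0: fully gapped
```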
## V Construction of new classical spin liquid models Let us now showcase the usefulness of the crystalline symmetry analysis by designing a new CSL model. It has symmetry-protected _nodal lines_ on the bottom flat band, i.e., the gapless points form a line. In the spin structure factor, the nodal lines host pinch points around each point on the line, and hence they are called _pinch lines_ in the spin liquid literature [14]. Although also in the algebraic CSL category, the pinch-line spin liquids exhibit very different physics from the more common algebraic CSLs with a gapless point. It is thus interesting to construct robust pinch-line models protected by symmetry for future study. More specifically, we require the nodal-line degeneracies to be guaranteed to exist once the number of constrainers and their symmetry properties are fixed, even though the exact form of the constrainers is not. Similar examples can also be found in Refs. [14; 36]. We consider a model with a cubic unit cell and spins located at the face centers (see Fig. 4(a)). In the large-\(\mathcal{N}\) limit, these spins are effectively treated as scalars or soft spins. Each spin is oriented out of its face, along the positive \(\hat{\mathbf{x}},\hat{\mathbf{y}},\hat{\mathbf{z}}\) directions respectively; for example, the spin on the \(x\)-normal face points along the \(x\)-direction. The model is symmetric under the symmetries of the magnetic space group \(\mathcal{G}=P4^{\prime}32^{\prime}\) [52; 53] (No. 207.42 in the standard notation of Ref. [54]), which is generated by \(C_{4,[001]}\mathcal{T}\), \(C_{3,[111]}\) and \(C_{2,[110]}\), where the first subscript denotes the order of the rotation and the second the rotation axis; \(\mathcal{T}\) denotes time reversal. Let us denote the spin on the \(x/y/z\)-normal face by \(S_{x/y/z}\) (note that these are not spin components but labels of spins on different sites). The symmetries act on the spins by: \[C_{3,[111]}: (S_{x},S_{y},S_{z})\mapsto(S_{z},S_{x},S_{y}) \tag{22}\] \[C_{2,[110]}: (S_{x},S_{y},S_{z})\mapsto(S_{y},S_{x},S_{z}) \tag{23}\] \[C_{4,[001]}\mathcal{T}: (S_{x},S_{y},S_{z})\mapsto(S_{y}(-\mathbf{a}_{x}),-S_{x},S_{z}), \tag{24}\] where \(S_{y}(-\mathbf{a}_{x})\) indicates that the transformed spin on the \(x\)-normal face center of the current unit cell comes from the \(S_{y}\) spin of the \(-\mathbf{a}_{x}\) unit cell. We use the constrainers centered at the center of each cube (see Fig. 4(b)): \[\mathcal{C}_{1}(\mathbf{R}) =S_{x}(\mathbf{R})-S_{y}(\mathbf{R})-S_{x}(\mathbf{R}-\mathbf{a}_{x})+S_{y}(\mathbf{R}-\mathbf{a}_{y})\] \[\mathcal{C}_{2}(\mathbf{R}) =S_{y}(\mathbf{R})-S_{z}(\mathbf{R})-S_{y}(\mathbf{R}-\mathbf{a}_{y})+S_{z}(\mathbf{R}-\mathbf{a}_{z})\] \[\mathcal{C}_{3}(\mathbf{R}) =S_{z}(\mathbf{R})-S_{x}(\mathbf{R})-S_{z}(\mathbf{R}-\mathbf{a}_{z})+S_{x}(\mathbf{R}-\mathbf{a}_{x}), \tag{25}\] which obey the crystalline symmetries described in the preceding paragraph; the minus signs come from the transformation under \(C_{4}\mathcal{T}\).
The Hamiltonian is then \[\mathcal{H}_{\mathsf{PL}}=\sum_{\mathbf{R}}\left[\mathcal{C}_{1}^{2}(\mathbf{R})+\mathcal{C}_{2}^{2}(\mathbf{R})+\mathcal{C}_{3}^{2}(\mathbf{R})\right]. \tag{26}\] The Fourier transforms of the constrainers are \[\mathbf{T}_{1}(\mathbf{q}) =2i\left(\sin\frac{q_{x}}{2},\ -\sin\frac{q_{y}}{2},\ 0\right)^{T}\] \[\mathbf{T}_{2}(\mathbf{q}) =2i\left(0,\ \sin\frac{q_{y}}{2},\ -\sin\frac{q_{z}}{2}\right)^{T}\] \[\mathbf{T}_{3}(\mathbf{q}) =2i\left(-\sin\frac{q_{x}}{2},\ 0,\ \sin\frac{q_{z}}{2}\right)^{T}. \tag{27}\] The three vectors span the subspace of the higher dispersive bands. Note that there are two dispersive bands and one flat band (see Figs. 4(c,d)), since the identity \(\mathbf{T}_{1}(\mathbf{q})+\mathbf{T}_{2}(\mathbf{q})+\mathbf{T}_{3}(\mathbf{q})=0\) implies that the rank of the three vectors at a generic \(\mathbf{q}\) is 2. The line nodes are along the high-symmetry line \(\Delta=(u,0,0)\) and its symmetry-related partners in momentum space. The Gauss's law can be obtained by examining the eigenvector configuration around a gapless point. At the \(\Gamma\) point (\(\mathbf{q}=\mathbf{0}\)), the charge-free Gauss's law is \[\partial_{x}E_{x}-\partial_{y}E_{y} =0,\] \[\partial_{y}E_{y}-\partial_{z}E_{z} =0, \tag{28}\] \[\partial_{z}E_{z}-\partial_{x}E_{x} =0.\] The two linearly independent constraints reflect the fact that both dispersive bands touch the bottom flat band at \(\Gamma\). Along the line \(\Delta=(0,u,0)\), the Gauss's law is \[\partial_{x}E_{x}-\partial_{z}E_{z}=0, \tag{29}\] while the degree of freedom \(E_{y}\) becomes "gapped" and is not involved in the Gauss's law. This is reflected in the spectrum in Fig. 4(c): along the line, one higher dispersive band touches the bottom flat band, while the other stays gapped away from the origin. We now apply the symmetry analysis to determine the symmetry-protected touchings between the dispersive bands and the flat bands. The spins are located at the 3d Wyckoff position of the aforementioned group \(P4^{\prime}32^{\prime}\), corresponding to the face centers of the cubic unit cell. Each spin transforms as the irrep \(\mathrm{B}_{1}\) of the site symmetry group \(4^{\prime}22^{\prime}\) [52, 53]. Thus, \[\mathcal{BR}_{S}=(\mathrm{B}_{1})_{\mathrm{3d}}\uparrow\mathcal{G}. \tag{30}\] Its irreps at high symmetry points are listed in Table 2. The constrainers are centered at the 1a Wyckoff position, which is at the center of a cube. The constrainers transform as the representation \({}^{1}\mathrm{E}\oplus{}^{2}\mathrm{E}\) of the site symmetry group \(4^{\prime}32^{\prime}\). Thus, \[\mathcal{BR}_{C}=\left({}^{1}\mathrm{E}\right)_{1\mathrm{a}}\uparrow\mathcal{G}\boxplus\left({}^{2}\mathrm{E}\right)_{1\mathrm{a}}\uparrow\mathcal{G}. \tag{31}\] Applying Eq. (9) and using the relevant band representations listed in Table 2 yields the representations in the flat band \[\mathcal{B}_{\mathrm{FB}} =\mathcal{BR}_{S}\boxminus\mathcal{BR}_{C} \tag{32}\] \[=\left(\mathrm{B}_{1}\right)_{\mathrm{3d}}\uparrow\mathcal{G}\boxminus\left({}^{1}\mathrm{E}\right)_{1\mathrm{a}}\uparrow\mathcal{G}\boxminus\left({}^{2}\mathrm{E}\right)_{1\mathrm{a}}\uparrow\mathcal{G}.\] The representations at high symmetry points and along pertinent high symmetry lines are listed in Table 2. Importantly, Table 2 shows that along the \(\Delta\) line there are irreps in \(\mathcal{BR}_{C}\) which do not appear in \(\mathcal{BR}_{S}\), guaranteeing that a dispersive band touches the flat band along the \(\Delta\) line. Figure 4: (a) The cubic lattice. It has three sites per unit cell sitting on the face centers, forming three sublattices indicated in red, blue and green. (b) Constrainers of the cubic model. (c) Spectrum \(\omega(\mathbf{q})\) from diagonalizing the Hamiltonian (Eq. (26)) in momentum space at \(q_{y}=0\). There is one flat band at the bottom of the spectrum and two higher dispersive bands, with band touchings between the flat and dispersive bands along the lines \(q_{x,z}=0\). (d) Spin structure factor at \(q_{y}=0\). (e,f) Spectrum and spin structure factor at \(q_{y}=\pi/4\).
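The nodal-line structure can also be verified directly from Eq. (27): diagonalizing \(H(\mathbf{q})=\sum_{I}\mathbf{T}_{I}(\mathbf{q})\mathbf{T}_{I}^{\dagger}(\mathbf{q})\) shows a second zero mode everywhere along \(\Delta=(u,0,0)\), which is lifted along a generic direction (the particular generic direction chosen below is arbitrary). A minimal numerical sketch:

```python
import numpy as np

def bands_pinch_line(q):
    """Eigenvalues (ascending) of H(q) built from the constrainer vectors of Eq. (27)."""
    sx, sy, sz = np.sin(np.asarray(q, dtype=float) / 2)
    T = 2j * np.array([[ sx, -sy, 0.0],   # T_1(q)
                       [0.0,  sy, -sz],   # T_2(q)
                       [-sx, 0.0,  sz]])  # T_3(q)
    H = sum(np.outer(t, t.conj()) for t in T)
    return np.linalg.eigvalsh(H)

u = np.linspace(0.5, np.pi, 50)
on_line = np.array([bands_pinch_line([v, 0.0, 0.0]) for v in u])
generic = np.array([bands_pinch_line([v, 0.7 * v, 0.3 * v]) for v in u])

# Along Delta = (u,0,0) a dispersive band is glued to the flat band (two zero modes):
print("max of second band along (u,0,0):    ", on_line[:, 1].max())   # ~ 0
# Along a generic direction only the single flat band remains at zero energy:
print("min of second band, generic direction:", generic[:, 1].min())  # > 0
```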
## VI Summary In summary, we have developed the mathematical formalism for analysing the effects of crystalline symmetries on band touchings and topology of CSLs. In short, comparing the band representations generated by the constrainers to those generated by the spins determines whether the CSL is gapless (algebraic) or gapped. In the former case, the band representations also determine the degeneracy and location of gapless points in the BZ, while in the latter case, the symmetry data encodes the topology of the gapped flat bands. Our symmetry analysis goes beyond applying TQC to the spins on the lattice: on the contrary, as we have shown with explicit examples, understanding the symmetry of the constrainer terms in the Hamiltonian is imperative to deduce the band crossings. This is different from the TQC classification of electron band structures, which holds independent of the form of the one-body Hamiltonian. We note that while the mathematical formalism of the 'difference' of band representations in Eq. (9) is identical to that developed for electron band structures on bipartite lattices in Ref. [51], there are important distinctions. First, the physical spins only inhabit one sublattice (S, which need not be bipartite), and there are no intersublattice 'hopping' terms, which are central to the construction in Ref. [51]. Second, the dual, virtual lattice (C) is determined solely by the centers of the constrained clusters in Eq. (1) and depends crucially on the form of the spin interactions. In fact, the constrainer spin Hamiltonian can be thought of, in a sense, as the square of a one-particle hopping Hamiltonian (see Appendix B). The formalism we have derived is a powerful tool both for understanding the robustness of spectral features in known models and for reverse-engineering CSLs with desired properties. We have demonstrated the latter by introducing a new CSL with symmetry-protected nodal-line degeneracies. The symmetry formalism will be an essential part of the comprehensive classification of the CSLs in the large-\(\mathcal{N}\) limit going forward. Our study paves the way to a high-throughput examination of all possible lattice models that have specific spectral features of interest. ## Acknowledgements H.Y. thanks Owen Benton for helpful discussions. H.Y. and A.H.N. were supported by the U.S. National Science Foundation Division of Materials Research under the Award DMR-1917511. H.Y. and J.C. gratefully acknowledge support from the Simons Center for Geometry and Physics at Stony Brook University at which this project was initiated. Y.F. and J.C. acknowledge support from the National Science Foundation under Grant No. DMR-1942447. J.C. also acknowledges the support of the Flatiron Institute, a division of the Simons Foundation, and the Alfred P. Sloan Foundation through a Sloan Research Fellowship.
2309.14223
Wigner measures of electromagnetic waves in heterogeneous bianisotropic media
We study the propagation of high-frequency electromagnetic waves in randomly heterogeneous bianisotropic media with dissipative properties. For that purpose we consider randomly fluctuating optical responses of such media with correlation lengths comparable to the typical wavelength of the waves. Although the fluctuations are weak, they induce multiple scattering over long propagation times and/or distances such that the waves end up travelling in many different directions with mixed polarizations. We derive the dispersion and evolution properties of the Wigner measure of the electromagnetic fields, which describes their angularly-resolved energy density in this propagation regime. The analysis starts from Maxwell's equations with general constitutive equations. We first ignore the random fluctuations of the optical response and obtain uncoupled transport equations for the components of the Wigner measure on the different propagation modes (polarizations). Then we use a multi-scale expansion of the Wigner measure to obtain the radiative transfer equations satisfied by these components when the fluctuations are no longer ignored. The radiative transfer equations are coupled through their collisional parts, which account for the scattering of waves by the random fluctuations and their possible changes in polarization. The collisional kernels describing these processes depend on the power and cross-power spectral densities of the fluctuations at the wavelength scale. The overall derivation is based on the interpretation of Wigner transforms and Wigner measures in terms of semiclassical pseudo-differential operators in their standard quantization.
Jean-Luc Akian, Éric Savin
2023-09-25T15:28:29Z
http://arxiv.org/abs/2309.14223v3
# Wigner measures of electromagnetic waves in heterogeneous Bianisotropic media ###### Abstract. We study the propagation of high-frequency electromagnetic waves in randomly heterogeneous bianisotropic media with dissipative properties. For that purpose we consider randomly fluctuating optical responses of such media with correlation lengths comparable to the typical wavelength of the waves. Although the fluctuations are weak, they induce multiple scattering over long propagation times and/or distances such that the waves end up travelling in many different directions with mixed polarizations. We derive the dispersion and evolution properties of the Wigner measure of the electromagnetic fields, which describes their angularly-resolved energy density in this propagation regime. The analysis starts from Maxwell's equations with general constitutive equations. We first ignore the random fluctuations of the optical response and obtain _uncoupled_ transport equations for the components of the Wigner measure on the different propagation modes (polarizations). Then we use a multi-scale expansion of the Wigner mesure to obtain the radiative transfer equations satisfied by these components when the fluctuations are no longer ignored. The radiative transfer equations are _coupled_ through their collisional parts, which account for the scattering of waves by the random fluctuations and their possible changes in polarization. The collisional kernels describing these processes depend on the power and cross-power spectral densities of the fluctuations at the wavelength scale. The overall derivation is based on the interpretation of Wigner transforms and Wigner measures in terms of semiclassical pseudo-differential operators in their standard quantization. Key words and phrases:Maxwell's equations, Bianisotropic dielectric media, Kinetic model, Transport equation, Radiative transfer. Corresponding author: E. Savin, ONERA-The French Aerospace Lab, 6 chemin de la Vauve aux Granges, FR-91123 Palaiseau cedex, France ([email protected]). to describe the corresponding propagation regimes [3, 46, 48]. Recent mathematical works on electromagnetic waves in random media have considered layered half-spaces [22], waveguides [4], or beam propagation in open environments [11], for example. In these works a preferred direction of propagation can be identified, and the electromagnetic field can be decomposed into plane wave components transverse to that direction. Their amplitudes are random fields of which evolution is driven by the random fluctuations of the dielectric parameters and can be studied in different asymptotic regimes. For example, the cumulative scattering effects induced by weak inhomogeneities at large distances of propagation in a waveguide or a beam are typically manifested in the exponential decay of the average fields, the enhancement of their random fluctuations, and depolarization. We rather investigate in this paper the situation where no preferred direction of propagation can be identified beforehand as waves travel in open random media. The wavelength and the correlation length of the random fluctuations of the electromagnetic parameters are small and comparable, while the propagation distance and/or time lapse are large. The amplitude of the random fluctuations is also weak. This high-frequency scaling is of interest if the objective is for instance to probe some inhomogeneities from remote sensors. 
In this setting, waves are multiply scattered by the inhomogeneities as they travel in all directions, and their polarization and phase get randomized. It is then known that the energy density is the relevant quantity to focus on, and that it can be evaluated from the Wigner transform of the wave field and its non-negative high-frequency limit measure-the Wigner measure [23, 36, 38, 54]. The evolution of the Wigner measure is described by radiative transfer equations [15, 30, 39, 52] with linear collisional operators of which kernels depend on the correlation structure of the inhomogeneities [9, 47]. Actually, the sole information needed to account for the fluctuations of the electromagnetic parameters is their power spectral densities, _i.e._ the spatial Fourier transforms of their auto-correlation functions. These radiative transfer equations for electromagnetic vector waves were already derived in [40, 47] for random fluctuations in isotropic media, and subsequently in [18, 19] for random fluctuations in bianisotropic media with homogeneous mean background parameters and no dissipation. Our aim is to extend these results to more general constitutive equations, for heterogeneous materials with dispersive and dissipative properties. Radiative transfer equations have a geometrical interpretation [34] notably in terms of bicharacteristic rays that is, rays in phase space, which parallels classical asymptotic ray methods [7, 32]. The latter are applicable to synthetic aperture radar (SAR) simulation for example [6], including the investigation of multipath and vibration signatures in radar imaging and remote sensing or the computation of antenna transfer functions in heterogeneous media whenever the approximations of geometrical optics hold. Ray tracing solvers are needed in this respect [24]. One of the possible applications of this research concerns the development of ray tracing algorithms in phase space for complex media such as plasmas [51]. Ray tracing has been intensively developed in the last decade in the context of computer graphics [28, 42], notably, and one also expects that the application programming interfaces that have been implemented in the video game industry, among others, could be used in the context of computational electromagnetics alike. The rest of the paper is organized as follows. In Sect. 2 we introduce the basic physical framework and notations used throughout, and we summarize our main results. The relevance of considering a Wigner transform and its non-negative limit measure for the analysis of high-frequency wave propagation phenomena is emphasized. We first consider the transport regime in the absence of fluctuations of the optical response that models instantaneous effects, and then the radiative transfer regime when weak random fluctuations are taken into account. The detailed derivation of the transport equations is outlined in Sect. 3, and the detailed derivation of the radiative transfer equations is outlined in Sect. 4. We use pseudo-differential calculus and the interpretation of Wigner transforms in terms of semiclassical pseudo-differential operators, choosing the same quantization for both of them. We argue that this choice clarifies the analyses as compared to other approaches where the Wigner transforms and the operators have different quantizations. A summary and outlook are finally drawn in Sect. 5. ## 2. 
Maxwell's equations in bianisotropic media and main results ### Physical setting We wish to analyze high-frequency electromagnetic wave propagation phenomena in general heterogeneous, bianisotropic dielectric media. The Ampere and Faraday laws read: \[\frac{\partial\mathbf{D}}{\partial t}=\mathbf{\nabla}\times\mathbf{H}-\mathbf{J}\,,\quad\frac{ \partial\mathbf{B}}{\partial t}=-\mathbf{\nabla}\times\mathbf{E}\,, \tag{1}\] where \(\mathbf{B}\) is the magnetic flux density, \(\mathbf{D}\) is the electric flux density, or electric displacement field, \(\mathbf{E}\) is the electric field, \(\mathbf{H}\) is the magnetic field, and \(\mathbf{J}\) is the density of electric current. Taking the divergence of Eq. (1) the fluxes are subjected to Gauss laws: \[\mathbf{\nabla}\cdot\mathbf{D}=\rho\,,\quad\mathbf{\nabla}\cdot\mathbf{B}=0\,, \tag{2}\] for \(\rho\) being the density of electric charge, which is then related to the current by the equation of continuity of charge: \[\frac{\partial\rho}{\partial t}+\mathbf{\nabla}\cdot\mathbf{J}=0\,.\] Here we consider open media in \(\mathbb{R}^{3}\) with no external charge (\(\rho=0\)) and no external current (\(\mathbf{J}=\mathbf{0}\)). Besides, Gauss laws hold for all times provided that they hold at some initial time. The Ampere and Faraday laws are supplemented with constitutive equations for general linear dispersive media with the properties of causality, time-invariance, and locality (see _e.g._[14, 29, 31]), which read: \[\begin{split}&\mathbf{D}(\mathbf{x},t)=\mathbf{\epsilon}_{0}(\mathbf{x})\mathbf{E}(\bm {x},t)+(\mathbf{\epsilon}_{d}\star\mathbf{E})(\mathbf{x},t)+\mathbf{\xi}_{0}(\mathbf{x})\mathbf{H}( \mathbf{x},t)+(\mathbf{\xi}_{d}\star\mathbf{H})(\mathbf{x},t)\,,\\ &\mathbf{B}(\mathbf{x},t)=\mathbf{\zeta}_{0}(\mathbf{x})\mathbf{E}(\mathbf{x},t)+(\mathbf{ \zeta}_{d}\star\mathbf{E})(\mathbf{x},t)+\mathbf{\mu}_{0}(\mathbf{x})\mathbf{H}(\mathbf{x},t)+(\mathbf{ \mu}_{d}\star\mathbf{H})(\mathbf{x},t)\,,\end{split} \tag{3}\] where \(\mathbf{\epsilon}_{0}(\mathbf{x})\) is the spatially variable permittivity tensor, \(\mathbf{\mu}_{0}(\mathbf{x})\) is the spatially variable permeability tensor, and \(\mathbf{\xi}_{0}(\mathbf{x})\) and \(\mathbf{\zeta}_{0}(\mathbf{x})\) are spatially variable magnetoelectric tensors. Also \(\star\) stands for the time convolution product: \[(\mathbf{\epsilon}_{d}\star\mathbf{E})(\mathbf{x},t)=\int_{0}^{t}\mathbf{\epsilon}_{d}(\mathbf{x},\tau)\mathbf{E}(\mathbf{x},t-\tau)\mathrm{d}\tau\] with similar expressions for the other terms. In Eq. (3) \(\mathbf{P}=\mathbf{\epsilon}_{d}\star\mathbf{E}\) stands for the polarization field and \(\mathbf{M}=\mathbf{\mu}_{0}^{-1}(\mathbf{\mu}_{d}\star\mathbf{H})\) stands for the magnetization field, but these constitutive equations also account for coupled magnetoelectric effects through the magnetoelectric tensors. Here we consider the subclass of bianisotropic materials for which the following symmetry relationships hold: \(\mathbf{\zeta}_{0}=\mathbf{\xi}_{0}^{\star}\), _i.e._ the conjugate transpose of \(\boldsymbol{\xi}_{0}\); and \(\boldsymbol{\epsilon}_{0}=\boldsymbol{\epsilon}_{0}^{*}\), \(\boldsymbol{\mu}_{0}=\boldsymbol{\mu}_{0}^{*}\). These symmetry conditions reduce the number of independent constitutive parameters in \(\boldsymbol{\epsilon}_{0}\), \(\boldsymbol{\mu}_{0}\), and \(\boldsymbol{\xi}_{0}\) to \(21\). Now plugging Eq. (3) into Eq. 
(1), we arrive at Maxwell's \(6\times 6\) system for the electric field \(\boldsymbol{E}\) and magnetic field \(\boldsymbol{H}\): \[\frac{\partial}{\partial t}\left(\begin{bmatrix}\boldsymbol{\epsilon}_{0}& \boldsymbol{\xi}_{0}\\ \boldsymbol{\xi}_{0}^{*}&\boldsymbol{\mu}_{0}\end{bmatrix}+\begin{bmatrix} \boldsymbol{\epsilon}_{d}&\boldsymbol{\xi}_{d}\\ \boldsymbol{\zeta}_{d}&\boldsymbol{\mu}_{d}\end{bmatrix}\star\right)\begin{pmatrix} \boldsymbol{E}\\ \boldsymbol{H}\end{pmatrix}+\boldsymbol{\nabla}\times\begin{bmatrix} \boldsymbol{0}&-\boldsymbol{I}\\ \boldsymbol{I}&\boldsymbol{0}\end{bmatrix}\begin{pmatrix}\boldsymbol{E}\\ \boldsymbol{H}\end{pmatrix}=\boldsymbol{0}\,, \tag{4}\] where \(\boldsymbol{I}\) is the \(3\times 3\) identity matrix. We introduce the following matrices of electromagnetic tensors: \[\boldsymbol{K}_{0}(\boldsymbol{x})=\begin{bmatrix}\boldsymbol{\epsilon}_{0}( \boldsymbol{x})&\boldsymbol{\xi}_{0}(\boldsymbol{x})\\ \boldsymbol{\xi}_{0}^{*}(\boldsymbol{x})&\boldsymbol{\mu}_{0}(\boldsymbol{x}) \end{bmatrix}\,,\quad\boldsymbol{K}_{d}(\boldsymbol{x},t)=\begin{bmatrix} \boldsymbol{\epsilon}_{d}(\boldsymbol{x},t)&\boldsymbol{\xi}_{d}( \boldsymbol{x},t)\\ \boldsymbol{\zeta}_{d}(\boldsymbol{x},t)&\boldsymbol{\mu}_{d}(\boldsymbol{x},t )\end{bmatrix}\,. \tag{5}\] \(\boldsymbol{K}_{0}\) is called the optical response, or (generalized) susceptibility tensor, and models instantaneous effects, while \(\boldsymbol{K}_{d}\) is called the (generalized) susceptibility kernel, and models memory as well as dissipation effects [29, 31]. We assume throughout the paper that \(\boldsymbol{K}_{0}\), which is Hermitian, is also positive definite, as in [18, 19]. A negative definite optical response gives rise to a negative index of refraction, as in metamaterials [53]. **Example 1**.: _An isotropic, lossless medium is \(\boldsymbol{K}_{0}(\boldsymbol{x})=\mathrm{diag}(\epsilon_{0}(\boldsymbol{x} )\boldsymbol{I},\mu_{0}(\boldsymbol{x})\boldsymbol{I})\) and \(\boldsymbol{K}_{d}=\boldsymbol{0}\). A chiral medium is a reciprocal bi-isotropic medium with \(\boldsymbol{\epsilon}_{0}=\epsilon_{0}\boldsymbol{I}\), \(\boldsymbol{\mu}_{0}=\mu_{0}\boldsymbol{I}\), and a purely imaginary magnetoelectric tensor \(\boldsymbol{\xi}_{0}=\mathrm{i}\xi\boldsymbol{I}\), where \(\epsilon_{0}\) and \(\mu_{0}\) are the permittivity and permeability constants, and \(\xi\in\mathbb{R}\) is the magnetoelectric constant. An example of a dissipative, local homogeneous material (in the terminology of [14]) is the Lorentz model with damping in [45] with:_ \[\boldsymbol{K}_{0}=\begin{bmatrix}\epsilon_{0}\boldsymbol{I}&\boldsymbol{0} \\ \boldsymbol{0}&\mu_{0}\boldsymbol{I}\end{bmatrix}\,,\quad\widehat{\boldsymbol{K }}_{d}(\omega)=\begin{bmatrix}\widehat{\epsilon}_{d}(\omega)\boldsymbol{I}& \boldsymbol{0}\\ \boldsymbol{0}&\boldsymbol{0}\end{bmatrix}\,, \tag{6}\] _where \(\widehat{\epsilon}_{d}(\omega)=\frac{\epsilon_{0}\omega_{p}^{2}}{-\omega^{2}+ \mathrm{i}\omega\Gamma+\omega_{d}^{2}}\) is the susceptibility (in frequency domain), \(\omega_{p}\) is the plasma frequency such that \(\omega_{p}^{2}=\frac{Ne^{2}}{m\epsilon_{0}}\), \(\omega_{0}\) is a characteristic frequency for the motion of an electron with mass \(m\) and charge \(e\), \(N\) is the electron density, and \(\Gamma\) is a damping loss rate. The Fourier transform in time domain is \(\widehat{\boldsymbol{K}}_{d}(\omega)=\int_{\mathbb{R}}\mathrm{e}^{-\mathrm{i} \omega t}\,\boldsymbol{K}_{d}(t)\mathrm{d}t\). Drude material is \(\Gamma=0\) and \(\omega_{0}=0\). 
Other examples are described in e.g. [14, 18, 19, 31]._ We then introduce the scalar product \(\left\langle\boldsymbol{u},\boldsymbol{v}\right\rangle=\left(\boldsymbol{K}_{ 0}\boldsymbol{u}\right)^{*}\boldsymbol{v}=\boldsymbol{u}^{*}\boldsymbol{K}_{0} \boldsymbol{v}\) (for \(\boldsymbol{K}_{0}^{*}=\boldsymbol{K}_{0}\) where \(\boldsymbol{K}_{0}^{*}\) stands for the conjugate transpose of \(\boldsymbol{K}_{0}\)), and the electromagnetic energy density \(\mathcal{E}(\boldsymbol{x},t)\) and the energy flow density (Poynting vector) \(\boldsymbol{\mathcal{F}}(\boldsymbol{x},t)\): \[\begin{split}\mathcal{E}(\boldsymbol{x},t)&=\frac{1}{2} \left\langle\begin{pmatrix}\boldsymbol{E}\\ \boldsymbol{H}\end{pmatrix},\begin{pmatrix}\boldsymbol{E}\\ \boldsymbol{H}\end{pmatrix}\right\rangle\\ &=\frac{1}{2}\boldsymbol{E}(\boldsymbol{x},t)^{*}\boldsymbol{ \epsilon}_{0}(\boldsymbol{x})\boldsymbol{E}(\boldsymbol{x},t)+\frac{1}{2} \boldsymbol{H}(\boldsymbol{x},t)^{*}\boldsymbol{\mu}_{0}(\boldsymbol{x}) \boldsymbol{H}(\boldsymbol{x},t)\\ &\quad\quad+\Re\mathfrak{e}\{\boldsymbol{E}(\boldsymbol{x},t)^{*} \boldsymbol{\xi}_{0}(\boldsymbol{x})\boldsymbol{H}(\boldsymbol{x},t)\}\,,\\ \boldsymbol{\mathcal{F}}(\boldsymbol{x},t)&=\overline{\boldsymbol{E}( \boldsymbol{x},t)}\times\boldsymbol{H}(\boldsymbol{x},t)\,,\end{split} \tag{7}\] such that in the undamped case \(\boldsymbol{K}_{d}(\boldsymbol{x},t)\equiv\boldsymbol{0}\): \[\frac{\partial\mathcal{E}}{\partial t}+\boldsymbol{\nabla}\cdot\boldsymbol{ \mathcal{F}}=0\,,\quad\frac{\mathrm{d}}{\mathrm{d}t}\int\mathcal{E}( \boldsymbol{x},t)\mathrm{d}\boldsymbol{x}=0\,.\] The conservation law above establishes how the energy density is spread in space, but it does not describe how it propagates. Our main objective in this paper is to describe how this quantity evolves both in space and direction for high-frequency waves induced by, say, strongly oscillating initial conditions, and for a randomly fluctuating optical response and a non-vanishing susceptibility kernel \(\mathbf{K}_{d}\). In the next section we show how this goal can be achieved using Wigner transforms of the wave fields and their high-frequency limits. We illustrate it with the wave equation with constant speed, before we turn to Maxwell's system (4) and state our main results in Sect. 2.3. ### High-frequency limit and Wigner transform Maxwell's equations (4) are subjected to highly oscillating initial data for the electromagnetic field \(\mathbf{u}=(\mathbf{E},\mathbf{H})\) at \(t=0\). Consider for example the WKB form [21, 23, 50]: \[\mathbf{u}(\mathbf{x},0)=\mathbf{u}_{\varepsilon}^{\mathrm{WKB}}(\mathbf{x})=\Re\{\mathbf{u}_{I}( \mathbf{x})\,\mathrm{e}^{\frac{i}{\varepsilon}S_{I}(\mathbf{x})}\}\,, \tag{8}\] where \(\mathbf{u}_{I}\) is a square integrable function on \(\mathbb{R}^{3}\); \(S_{I}\) is an integrable scalar function on \(\mathbb{R}^{3}\) (at least locally), as well as its derivative \(\mathbf{\nabla}_{\mathbf{x}}S_{I}\); and \(\Re\mathfrak{e}\) stands for the real part. The small parameter \(\varepsilon\) represents the typical wavelength of oscillations of these data. These oscillations are inherited by the actual electromagnetic field \(\mathbf{u}(\mathbf{x},t)\) satisfying (4) at all times, which prevents it from converging nicely as the wavelength \(\varepsilon\) gets smaller and smaller in the high-frequency limit. A way to tackle this shortcoming is to consider quadratic observables of \(\mathbf{u}\) instead, as emphasized in _e.g._[1, 8, 9, 10, 26, 44, 47] for different types of waves. 
In particular, the Wigner transform [23, 36, 38, 54] of the solutions of Eq. (4) shall be considered since its evolution in the limit \(\varepsilon\to 0\) can be derived explicitly. It provides a phase space description of how the associated energy density propagates in this very limit. The advantages of this approach over more classical methods such as the WKB method, which is suggested by the type of data (8), are discussed in [21, 50] for example. Notably, the Wigner transform needs much lower regularity assumptions on \(\mathbf{u}_{I}\) and \(S_{I}\) than in the WKB method, and evades the singularities (focal points or caustics) that can develop in finite times with the latter method. Let \(\vartheta\in[0,1]\). For temperate distributions \(\mathbf{u},\mathbf{v}\) defined on \(\mathbb{R}^{m}\), the Wigner transform is: \[\mathbf{W}_{\varepsilon}^{\vartheta}[\mathbf{u},\mathbf{v}](\mathbf{x},\mathbf{k})=\frac{1}{(2\pi )^{m}}\int_{\mathbb{R}^{m}}\mathrm{e}^{\mathrm{i}\mathbf{k}\cdot\mathbf{y}}\,\mathbf{u}\,( \mathbf{x}-\varepsilon(1-\vartheta)\mathbf{y})\,\mathbf{v}^{*}\,(\mathbf{x}+\varepsilon \vartheta\mathbf{y})\,\,\mathrm{d}\mathbf{y}\,. \tag{9}\] We denote by \(\mathbf{W}_{\varepsilon}^{\vartheta}[\mathbf{u}]:=\mathbf{W}_{\varepsilon}^{\vartheta}[ \mathbf{u},\mathbf{u}]\) the self Wigner transform of \(\mathbf{u}\). Now let \((\mathbf{u}_{\varepsilon})\) be a bounded sequence in \(L^{2}(\mathbb{R}^{m})\), the set of square integrable functions defined on \(\mathbb{R}^{m}\). Then it can be shown that, up to extracting a subsequence if need be, \(\mathbf{W}_{\varepsilon}^{\vartheta}[\mathbf{u}_{\varepsilon}]\) has a weak limit (in the sense of temperate distribution) as \(\varepsilon\to 0\) which is independent of the choice of \(\vartheta\) in Eq. (9). Also if \(\mathbf{W}[\mathbf{u}_{\varepsilon}]\) is such a limit, it is a non-negative, Hermitian matrix-valued measure. These results are detailed in _e.g._[23, 54]. Here the notation of [23, p. 330] is used for the limit \(\mathbf{W}\) (independent of \(\varepsilon\)) of the family \((\mathbf{u}_{\varepsilon})\) (dependent of \(\varepsilon\)) but clearly the measure \(\mathbf{W}[\mathbf{u}_{\varepsilon}]\) is independent of \(\varepsilon\). The scalar Wigner measure is the trace of the latter \(W[\mathbf{u}_{\varepsilon}]=\operatorname{Tr}\mathbf{W}[\mathbf{u}_{\varepsilon}]\) and can be directly related to the energy density of the sequence \((\mathbf{u}_{\varepsilon})\) in the high-frequency limit. 
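Before working this out analytically for the WKB data (8), the concentration property of the Wigner transform can be illustrated numerically. The following one-dimensional sketch (with an arbitrary choice of amplitude, phase and \(\varepsilon\), and with \(\vartheta=\frac{1}{2}\); as recalled above, the limit measure is independent of \(\vartheta\)) evaluates Eq. (9) by direct quadrature for an oscillating function \(u(x)=a(x)\,\mathrm{e}^{\mathrm{i}S(x)/\varepsilon}\) and shows that \(W_{\varepsilon}[u](x,\cdot)\) peaks near \(k=\partial_{x}S(x)\).

```python
import numpy as np

eps = 1e-2
a = lambda x: np.exp(-x**2)          # slowly varying amplitude
S = lambda x: 0.5 * x**2             # phase, so that dS/dx = x
u = lambda x: a(x) * np.exp(1j * S(x) / eps)

def wigner(x, k, theta=0.5, L=6.0, n=4001):
    """Eq. (9) for m = 1, evaluated by direct quadrature in the y variable."""
    y = np.linspace(-L, L, n)
    f = np.exp(1j * k * y) * u(x - eps * (1 - theta) * y) * np.conj(u(x + eps * theta * y))
    return (f.sum() * (y[1] - y[0])).real / (2 * np.pi)

x0 = 0.7
ks = np.linspace(-2.0, 3.0, 501)
W = np.array([wigner(x0, k) for k in ks])
print("W_eps(x0, .) peaks at k =", ks[np.argmax(W)], "; expected dS/dx(x0) =", x0)
```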
Taking the Wigner transform of the sequence of WKB data (8) for example, one obtains: \[\mathbf{W}[\mathbf{u}_{\varepsilon}^{\mathrm{WKB}}](\mathbf{x},\mathbf{k}) =\mathbf{u}_{I}(\mathbf{x})\mathbf{u}_{I}^{*}(\mathbf{x})\delta(\mathbf{k}-\mathbf{\nabla }_{\mathbf{x}}S_{I}(\mathbf{x}))\,,\] \[W[\mathbf{u}_{\varepsilon}^{\mathrm{WKB}}](\mathbf{x},\mathbf{k}) =|\mathbf{u}_{I}(\mathbf{x})|^{2}\delta(\mathbf{k}-\mathbf{\nabla}_{\mathbf{x}}S_{I} (\mathbf{x}))\,.\] If these data are propagated by the wave equation in \(\mathbb{R}^{3}\) with constant speed \(c\): \[\frac{1}{c^{2}}\frac{\partial^{2}\boldsymbol{u}_{\varepsilon}}{ \partial t^{2}}-\Delta\boldsymbol{u}_{\varepsilon}=\boldsymbol{0}\,, \quad\boldsymbol{x}\in\mathbb{R}^{3}\,,\;t>0\,,\] \[\boldsymbol{u}_{\varepsilon}(\boldsymbol{x},0)=\boldsymbol{0}\,, \;\frac{\partial\boldsymbol{u}_{\varepsilon}}{\partial t}(\boldsymbol{x},0)= \boldsymbol{u}_{\varepsilon}^{\text{WKB}}(\boldsymbol{x})\,, \quad\boldsymbol{x}\in\mathbb{R}^{3}\,,\] then for \(t>0\) Kirchhoff's formula yields [17, SS2.4]: \[\boldsymbol{u}_{\varepsilon}(\boldsymbol{x},t)=\int_{S^{2}}t\boldsymbol{u}_{ \varepsilon}^{\text{WKB}}(\boldsymbol{x}-c\hat{\boldsymbol{p}}t)\frac{ \mathrm{d}\Omega(\hat{\boldsymbol{p}})}{4\pi}\,,\] where \(S^{2}\) is the unit sphere of \(\mathbb{R}^{3}\) centered at \(\boldsymbol{0}\), and \(\mathrm{d}\Omega\) is its surface differential element. The scalar Wigner measure of \(\boldsymbol{u}_{\varepsilon}(\cdot,t)\) reads [23, SS3.3]: \[W[\boldsymbol{u}_{\varepsilon}](\boldsymbol{x},\boldsymbol{k},t)=\frac{1}{2} \sum_{\alpha=\pm}|\boldsymbol{u}_{I}(\boldsymbol{x}+\alpha c\hat{\boldsymbol {k}}t)|^{2}\delta(\boldsymbol{k}-\boldsymbol{\nabla}_{\boldsymbol{x}}S_{I}( \boldsymbol{x}+\alpha c\hat{\boldsymbol{k}}t))\,,\] with the usual notation \(\hat{\boldsymbol{k}}=\frac{\boldsymbol{k}}{|\boldsymbol{k}|}\) for \(\boldsymbol{k}\neq\boldsymbol{0}\). It provides a characterization of the evolution of the associated energy density both in space and direction. Because of the assumed low regularity of \(\boldsymbol{u}_{I}\) and \(S_{I}\), a classical WKB method could not be used with these data. Note that more general data can be considered as well as shown in [23]. ### Main results We derive the Wigner measure \(\boldsymbol{W}\) of the high-frequency electromagnetic field \(\boldsymbol{u}\) which obeys Maxwell's \(6\times 6\) system (4) when the optical response \(\boldsymbol{K}_{0}\) exhibits in addition random fluctuations. That is: \[\frac{\partial}{\partial t}\left(\boldsymbol{K}(\boldsymbol{x})\boldsymbol{u} (\boldsymbol{x},t)+\int_{0}^{t}\boldsymbol{K}_{d}(\boldsymbol{x},\tau) \boldsymbol{u}(\boldsymbol{x},t-\tau)\mathrm{d}\tau\right)+\boldsymbol{M}( \boldsymbol{\nabla}_{\boldsymbol{x}})\boldsymbol{u}(\boldsymbol{x},t)= \boldsymbol{0} \tag{10}\] for \(\boldsymbol{x},t\in\mathcal{O}\times\mathbb{R}_{+}^{*}\) in some open domain \(\mathcal{O}\subseteq\mathbb{R}^{3}\) and for highly oscillating initial conditions at \(t=0\) characterized by the small scale \(\varepsilon\ll 1\). Here: \[\boldsymbol{M}(\boldsymbol{k})=\begin{bmatrix}\boldsymbol{0}&-\boldsymbol{ \Omega}(\boldsymbol{k})\\ \boldsymbol{\Omega}(\boldsymbol{k})&\boldsymbol{0}\end{bmatrix} \tag{11}\] is the symmetric Maxwell operator for \(\boldsymbol{\Omega}(\boldsymbol{k})\) being the skew-symmetric matrix such that \(\boldsymbol{\Omega}(\boldsymbol{k})\mathbf{a}=\boldsymbol{k}\times\mathbf{a}\) for any vector \(\mathbf{a}\in\mathbb{C}^{3}\). 
Also the actual optical response \(\boldsymbol{K}(\boldsymbol{x})\) reads: \[\boldsymbol{K}(\boldsymbol{x})=\boldsymbol{K}_{0}(\boldsymbol{x})\left[ \boldsymbol{I}+\sigma\boldsymbol{V}\left(\frac{\boldsymbol{x}}{\ell_{c}} \right)\right]\,,\] where the dimensionless random matrix \(\boldsymbol{V}\) represents random fluctuations of the optical response \(\boldsymbol{K}_{0}\) of the background medium, and is such that \(\boldsymbol{K}_{0}\boldsymbol{V}=\boldsymbol{V}^{*}\boldsymbol{K}_{0}\) to preserve the Hermiticity of the actual optical response \(\boldsymbol{K}\). It is assumed that \((\boldsymbol{V}(\boldsymbol{y}),\,\boldsymbol{y}\in\mathbb{R}^{3})\) is a second-order, mean-square homogeneous (spatially stationary) random field with zero mean \(\mathbb{E}\left\{\boldsymbol{V}(\boldsymbol{y})\right\}=\boldsymbol{0}\) and integrable fourth-order autocorrelation tensor \(\boldsymbol{R}\) such that: \[\boldsymbol{R}(\boldsymbol{y}-\boldsymbol{y}^{\prime}):=\mathbb{E}\left\{ \boldsymbol{V}(\boldsymbol{y})\otimes\boldsymbol{V}(\boldsymbol{y}^{\prime}) \right\}=\int_{\mathbb{R}^{3}}\mathrm{e}^{\mathrm{i}\boldsymbol{k}\cdot( \boldsymbol{y}-\boldsymbol{y}^{\prime})}\,\widehat{\boldsymbol{R}}(\boldsymbol {k})\mathrm{d}\boldsymbol{k}\,, \tag{12}\] where \(\mathbb{E}\left\{\cdot\right\}\) stands for the mathematical expectation, or average. The second equality above stems from Bochner's theorem and the fact that \(\boldsymbol{y}\mapsto\boldsymbol{R}(\boldsymbol{y})\) is positive semi-definite. The power spectral density of that random field is the matrix: \[\widehat{\boldsymbol{R}}(\boldsymbol{k})=\frac{1}{(2\pi)^{3}}\int_{\mathbb{R }^{3}}\mathrm{e}^{-\mathrm{i}\boldsymbol{k}\cdot\boldsymbol{y}}\,\boldsymbol{ R}(\boldsymbol{y})\mathrm{d}\boldsymbol{y} \tag{13}\] which is positive. The autocorrelation tensor is normalized such that \(\int_{\mathbb{R}^{3}}\mathbf{R}(\mathbf{y})\mathrm{d}\mathbf{y}=\mathrm{O}(1)\) and \(\mathbf{R}(\mathbf{0})=\mathrm{O}(1)\). The length scale \(\ell_{c}\) is the correlation length and the dimensionless scalar \(\sigma\geq 0\) quantifies the amplitude of the fluctuations. These parameters are of the same order for all correlations of the fluctuations in the proposed model. At first, the random fluctuations of the optical response are disregarded, setting \(\sigma=0\). Let: \[\mathbf{L}_{0}(\mathbf{x},\mathbf{k}) =\mathbf{K}_{0}^{-1}(\mathbf{x})\mathbf{M}(\mathbf{k})\,,\] \[\mathbf{L}_{1}(\mathbf{x},\omega) =\mathrm{i}\omega\mathbf{K}_{0}^{-1}(\mathbf{x})\widehat{\mathbf{K}}_{d}(\mathbf{ x},\omega)\,, \tag{14}\] where \(\widehat{\mathbf{K}}_{d}(\mathbf{x},\omega)\) is the Fourier transform of \(\mathbf{K}_{d}(\mathbf{x},t)\) with respect to \(t\). \(\mathbf{L}_{0}\) being symmetric for the scalar product \(\langle\cdot\rangle\), let us introduce the eigen-expansion: \[\mathbf{L}_{0}=\sum_{\alpha}\omega_{\alpha}\mathbf{\Pi}_{\alpha} \tag{15}\] where \(\mathbf{\Pi}_{\alpha}\) is the projector on the \(\alpha\)-th eigen-subspace with associated real eigenvalue \(\omega_{\alpha}\), such that \(\sum_{\alpha}\mathbf{\Pi}_{\alpha}=\mathbf{I}\) and \(\mathbf{K}_{0}\mathbf{\Pi}_{\alpha}=\mathbf{\Pi}_{\alpha}^{*}\mathbf{K}_{0}\). 
The right eigenvectors \(\mathbf{b}_{\alpha}=(\mathbf{b}_{\alpha}^{1},\ldots\mathbf{b}_{\alpha}^{A})\) are such that \(\langle\mathbf{b}_{\alpha},\mathbf{b}_{\beta}\rangle=(\mathbf{K}_{0}\mathbf{b}_{\alpha})^{*} \mathbf{b}_{\beta}=\delta_{\alpha\beta}\mathbf{I}_{A}\), where \(A\) is the order of multiplicity of the \(\alpha\)-th eigenvalue and \(\mathbf{I}_{A}\) is the \(A\times A\) identity matrix; also \(\sum A=6\). Letting \(\mathbf{c}_{\alpha}=\mathbf{K}_{0}\mathbf{b}_{\alpha}\) be the left eigenvectors, one has therefore: \[\mathbf{L}_{0}\mathbf{b}_{\alpha} =\omega_{\alpha}\mathbf{b}_{\alpha}\,,\] \[\mathbf{c}_{\alpha}^{*}\mathbf{L}_{0} =\omega_{\alpha}\mathbf{c}_{\alpha}^{*}\,,\] \[\mathbf{\Pi}_{\alpha} =\mathbf{b}_{\alpha}\mathbf{c}_{\alpha}^{*}\,. \tag{16}\] Then it is shown that: \[\mathbf{W}=\sum_{\alpha}\delta(\omega+\omega_{\alpha})\mathbf{W}_{\alpha}\,, \tag{17}\] where \(\mathbf{W}_{\alpha}=\mathbf{\Pi}_{\alpha}\mathbf{W}=\mathbf{W}\mathbf{\Pi}_{\alpha}^{*}=\mathbf{\Pi} _{\alpha}\mathbf{W}\mathbf{\Pi}_{\alpha}^{*}\) are \(6\times 6\) matrix-valued measures for each mode of propagation \(\alpha\). Alternatively \(\mathbf{W}\) may be written: \[\mathbf{W}=\sum_{\alpha}\delta(\omega+\omega_{\alpha})\mathbf{b}_{\alpha}\mathbf{w}_{ \alpha}\mathbf{b}_{\alpha}^{*}\,, \tag{18}\] where \(\mathbf{w}_{\alpha}=\mathbf{c}_{\alpha}^{*}\mathbf{W}_{\alpha}\mathbf{c}_{\alpha}\) are \(A\times A\) matrix-valued measures. In Sect. 3, the latter are shown to satisfy the transport equations: \[\partial_{t}\mathbf{w}_{\alpha}+\{\omega_{\alpha},\mathbf{w}_{\alpha}\}+\mathbf{ \ell}_{\alpha}\mathbf{w}_{\alpha}+\mathbf{w}_{\alpha}\mathbf{\ell}_{\alpha}^{*}+ \mathbf{n}_{\alpha}\mathbf{w}_{\alpha}-\mathbf{w}_{\alpha}\mathbf{n}_{\alpha}=\mathbf{0}\,, \tag{19}\] where \(\{\omega_{\alpha},\mathbf{w}_{\alpha}\}=\mathbf{\nabla}_{\mathbf{k}}\omega_{\alpha} \cdot\mathbf{\nabla}_{\mathbf{x}}\mathbf{w}_{\alpha}-\mathbf{\nabla}_{\mathbf{x}}\omega_{ \alpha}\cdot\mathbf{\nabla}_{\mathbf{k}}\mathbf{w}_{\alpha}\) stands for the usual Poisson bracket, \(\mathbf{n}_{\alpha}\) is a \(A\times A\) skew-symmetric matrix given below by Eq. (44), and \(\mathbf{\ell}_{\alpha}=\mathbf{c}_{\alpha}^{*}\mathbf{L}_{1}\mathbf{b}_{\alpha}\). Taking the trace of Eq. (19) yields: \[\partial_{t}w_{\alpha}+\{\omega_{\alpha},w_{\alpha}\}+2\operatorname{Tr}(\mathbf{ \ell}_{\alpha}^{s}\mathbf{w}_{\alpha})=0\,,\] where \(w_{\alpha}=\operatorname{Tr}\mathbf{w}_{\alpha}=\operatorname{Tr}(\mathbf{K}_{0} \mathbf{W}_{\alpha})\), and \(\mathbf{\ell}_{\alpha}^{s}=\frac{1}{2}(\mathbf{\ell}_{\alpha}+\mathbf{\ell}_{\alpha}^{*})\) is the symmetric part of \(\mathbf{\ell}_{\alpha}\). The high-frequency energy density and Poynting vector (7) are then: \[\begin{split}\mathcal{E}(\mathbf{x},t)&=\frac{1}{2}\int_ {\mathbb{R}^{3}}\operatorname{Tr}(\mathbf{K}_{0}(\mathbf{x})\mathbf{W}(\mathbf{x},\mathbf{k},t)) \mathrm{d}\mathbf{k}=\frac{1}{2}\sum_{\alpha}\int_{\mathbb{R}^{3}}w_{\alpha}(\mathbf{ x},\mathbf{k},t)\mathrm{d}\mathbf{k}\,,\\ \mathbf{\mathcal{F}}(\mathbf{x},t)&=\frac{1}{2}\int_{\mathbb{R} ^{3}}\operatorname{Tr}(\mathbf{\nabla}_{\mathbf{k}}\mathbf{M}(\mathbf{k})\mathbf{W}(\mathbf{x},\mathbf{k},t ))\mathrm{d}\mathbf{k}=\frac{1}{2}\sum_{\alpha}\int_{\mathbb{R}^{3}}w_{\alpha}(\mathbf{ x},\mathbf{k},t)\mathbf{\nabla}_{\mathbf{k}}\omega_{\alpha}\mathrm{d}\mathbf{k}\,.\end{split} \tag{20}\] The evolution of the energy density and flow in phase space are thus described by \(\sum_{\alpha}w_{\alpha}\) which can also select the different modes of propagation. 
We observe however from Eq. (14) with Eq. (11), that \(\omega_{0}=0\) (\(\alpha="0"\)) is always an eigenvalue of \(\mathbf{L}_{0}\) with multiplicity \(A=2\) and eigenvectors (\(\hat{\mathbf{k}},\mathbf{0}\)) and (\(\mathbf{0},\hat{\mathbf{k}}\)) for \(\mathbf{k}\neq\mathbf{0}\). This mode is non propagative, though. Secondly, we analyze the influence of random fluctuations of the optical response \(\mathbf{K}_{0}\) of the background medium by setting \(\sigma>0\). We consider the scattering regime where \(\ell_{c}\) is of the order of the typical wavelength \(\lambda\), which is the scale of variation of the strongly oscillating initial data, _i.e._ it is small with respect to the typical propagation distance \(L\): \[\frac{\ell_{c}}{L}\approx\frac{\lambda}{L}=\varepsilon\,. \tag{21}\] Besides, the amplitude of fluctuations is also small with the scaling: \[\sigma^{2}\approx\frac{\ell_{c}}{L}\,. \tag{22}\] For a large distance of propagation \(L\) the electromagnetic waves are multiply scattered by the fluctuations of the background medium and their energy is spread over many directions of propagation. Considering these fluctuations, it is shown in Sect. 4 that the transport equations (19) are modified to the coupled radiative transfer equations: \[\partial_{t}\mathbf{w}_{\alpha}+\{\omega_{\alpha},\mathbf{w}_{ \alpha}\}+(\mathbf{\ell}_{\alpha}+\mathbf{\Sigma}_{\alpha})\mathbf{w}_{\alpha}+ \mathbf{w}_{\alpha}(\mathbf{\ell}_{\alpha}+\mathbf{\Sigma}_{\alpha})^{*}+\mathbf{n}_{ \alpha}\mathbf{w}_{\alpha}-\mathbf{w}_{\alpha}\mathbf{n}_{\alpha}\\ =\sum_{\beta}\int_{\mathbb{R}^{3}}\delta(\omega_{\beta}(\mathbf{x}, \mathbf{p})-\omega_{\alpha}(\mathbf{x},\mathbf{k}))\mathbf{\sigma}_{\alpha\beta}(\mathbf{x},\mathbf{k },\mathbf{p}):\mathbf{w}_{\beta}(\mathbf{x},\mathbf{p},t)\mathrm{d}\mathbf{p}\,, \tag{23}\] where \(\mathbf{\sigma}_{\alpha\beta}\) is a linear operator called differential scattering cross-section, and \(\mathbf{\Sigma}_{\alpha}\) is a \(A\times A\) matrix called total scattering cross-section. They are explicitly given in terms of the power spectral density matrix (13) of the random fluctuations \(\mathbf{V}\) (see Eq. (72) and Eq. (73), respectively). The \(A\times A\times B\times B\) tensor \(\mathbf{\sigma}_{\alpha\beta}(\mathbf{x},\mathbf{k},\mathbf{p})\) describes how, in the multiple scattering process, the \(B\times B\) matrix \(\mathbf{w}_{\beta}\) for the mode \(\beta\) with multiplicity \(B\) in the direction \(\mathbf{p}\) is locally converted to the \(A\times A\) matrix \(\mathbf{w}_{\alpha}\) for the mode \(\alpha\) with multiplicity \(A\) in the direction \(\mathbf{k}\): \[[\mathbf{\sigma}_{\alpha\beta}(\mathbf{x},\mathbf{k},\mathbf{p}):\mathbf{w}_{\beta}(\mathbf{x}, \mathbf{p},t)]_{aa^{\prime}}=\sum_{1\leq b,b^{\prime}\leq B}[\mathbf{\sigma}_{\alpha \beta}(\mathbf{x},\mathbf{k},\mathbf{p})]_{aa^{\prime}bb^{\prime}}[\mathbf{w}_{\beta}(\bm {x},\mathbf{p},t)]_{bb^{\prime}}\] for \(1\leq a,a^{\prime}\leq A\). The \(A\times A\) matrix \(\mathbf{\Sigma}_{\alpha}(\mathbf{x},\mathbf{k})\) describes how the matrix \(\mathbf{w}_{\alpha}\) in the direction \(\mathbf{k}\) is locally converted to all other modes \(\beta\) and directions. Lastly, the Dirac measure in the integral indicates that scattering occurs with possible mode conversion whenever for some directions \(\mathbf{p}\) and \(\mathbf{k}\) one has \(\omega_{\beta}(\mathbf{x},\mathbf{p})=\omega_{\alpha}(\mathbf{x},\mathbf{k})\). 
This includes the situation when no mode conversion occurs, \(\alpha=\beta\), and \(\omega_{\alpha}\) has locally the same value for possibly two different directions. For an isotropic, lossless background medium with \(\mathbf{K}_{0}(\mathbf{x})=\mathrm{diag}(\epsilon_{0}(\mathbf{x})\mathbf{I},\mu_{0}(\mathbf{x}) \mathbf{I})\) for example, one has three modes each of multiplicity 2 with eigenvalues \(\omega_{0}=0\) (non propagative), \(\omega_{1}(\mathbf{x},\mathbf{k})=+c_{0}(\mathbf{x})|\mathbf{k}|\), and \(\omega_{2}(\mathbf{x},\mathbf{k})=-c_{0}(\mathbf{x})|\mathbf{k}|\), where \(c_{0}=1/\sqrt{\epsilon_{0}\mu_{0}}\)[47]. Our results (19) and (23) generalize [47] to general bianisotropic dielectric media, and [18, 19] to the situation where these media are in addition heterogeneous and dissipative. We now turn to the detailed analysis of the high-frequency limit \(\varepsilon\to 0\) of Eq. (10) and how these results are derived. For that purpose, we use a classical pseudo-differential calculus, which also contributes to significantly simplify these derivations. ## 3. Transport of high-frequency electromagnetic waves in Bianisotropic media In this section the transport equations (19) for bianisotropic dielectric media are first derived. They describe the evolution of the high-frequency electromagnetic energy density without random fluctuations of the optical response. As illustrated in Sect. 2.2, the high-frequency limit \(\varepsilon\to 0\) in the previous setting shall be derived for quadratic observables of the electromagnetic field \(\boldsymbol{u}\). More particularly, we introduce the spatio-temporal Wigner transform of that field and its high-frequency limit, _i.e._ its Wigner measure, as in _e.g._[2]. This is because the spatial and temporal scales in Maxwell's system (10) play symmetric roles, and their oscillations should be accounted for altogether. The main objective here is to outline the (formal) mathematical tools used for the derivation of the Wigner measure's properties. They will prove useful in the subsequent Sect. 4 where the effects of the random fluctuations of the optical response will be considered. The analysis is derived from [23], where first-order hyperbolic systems with constant and slowly varying coefficients are addressed, [1], where arbitrary order hyperbolic systems with slowly varying coefficients are addressed, and [2], where time-varying random media and consistent pseudo-differential calculus and Wigner transforms with a single quantization are used. In this way we first recall in Sect. 3.1 how Wigner transforms are linked to semi-classical pseudo-differential operators [38, 54]. Then writing (10) as a semi-classical operator applied to a rescaled version of the electromagnetic field \(\boldsymbol{u}\) in Sect. 3.2, the dispersion properties of the Wigner measure (related to the Stokes parameters of the electromagnetic waves) are derived in Sect. 3.3, and the transport equations (19) are obtained Sect. 3.4. ### Semi-classical operators and Wigner measure From now on let us introduce the space-time variable \(\boldsymbol{s}=(\boldsymbol{x},t)\in\mathcal{O}\times\mathbb{R}\) and its dual variable \(\boldsymbol{\xi}=(\boldsymbol{k},\omega)\in\mathbb{R}^{4}\) in the wave vector-frequency Fourier domain. Let \(\boldsymbol{L}\) be a smooth, compactly supported real matrix-valued function of both the space-time variable \(\boldsymbol{s}\) and impulse variable \(\boldsymbol{\xi}\). 
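For the isotropic, lossless medium quoted at the end of Sect. 2.3, the mode structure of \(\mathbf{L}_{0}(\mathbf{x},\mathbf{k})=\mathbf{K}_{0}^{-1}(\mathbf{x})\mathbf{M}(\mathbf{k})\) can be checked numerically before carrying out the general derivation. The sketch below (with arbitrary values of \(\epsilon_{0}\), \(\mu_{0}\) and \(\mathbf{k}\)) recovers the eigenvalues \(\omega_{0}=0\) and \(\omega_{1,2}=\pm c_{0}|\mathbf{k}|\), each with multiplicity 2.

```python
import numpy as np

def Omega(k):
    """Skew-symmetric matrix such that Omega(k) a = k x a."""
    kx, ky, kz = k
    return np.array([[0.0, -kz, ky], [kz, 0.0, -kx], [-ky, kx, 0.0]])

def L0(k, eps0, mu0):
    """Dispersion matrix L0 = K0^{-1} M(k) of Eq. (14) for an isotropic medium."""
    M = np.block([[np.zeros((3, 3)), -Omega(k)], [Omega(k), np.zeros((3, 3))]])
    K0_inv = np.diag([1.0 / eps0] * 3 + [1.0 / mu0] * 3)
    return K0_inv @ M

eps0, mu0 = 2.0, 1.5                      # arbitrary positive constants
k = np.array([0.3, -1.1, 0.7])
c0 = 1.0 / np.sqrt(eps0 * mu0)

w = np.sort(np.linalg.eigvals(L0(k, eps0, mu0)).real)
expected = np.sort([0.0, 0.0] + [s * c0 * np.linalg.norm(k) for s in (1, 1, -1, -1)])
print("computed eigenvalues:", np.round(w, 6))
print("expected eigenvalues:", np.round(expected, 6))
```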
For a vector field \(\boldsymbol{u}\) in \(L^{2}(\mathbb{R}^{4})\) endowed with the scalar product \(\left(\boldsymbol{u},\boldsymbol{v}\right)_{L^{2}}=\int_{\mathbb{R}^{4}} \boldsymbol{u}(\boldsymbol{s})\cdot\overline{\boldsymbol{v}}(\boldsymbol{s}) \,\mathrm{d}\boldsymbol{s}\) where \(\overline{\boldsymbol{v}}\) stands for complex conjugation, consider the (semiclassical) pseudo-differential operator: \[\boldsymbol{L}^{\vartheta}(\boldsymbol{s},\varepsilon\mathbf{D})\boldsymbol{u }(\boldsymbol{s})=\frac{1}{(2\pi)^{4}}\int_{\mathbb{R}^{4}\times\mathbb{R}^{4 }}\mathrm{e}^{\mathrm{i}\boldsymbol{\xi}\cdot(\boldsymbol{s}-\boldsymbol{ \tau})}\,\boldsymbol{L}((1-\vartheta)\boldsymbol{s}+\vartheta\boldsymbol{\tau },\varepsilon\boldsymbol{\xi})\boldsymbol{u}(\boldsymbol{\tau})\,\mathrm{d} \boldsymbol{\tau}\mathrm{d}\boldsymbol{\xi}\,, \tag{24}\] for \(\vartheta\in[0,1]\). This parameter defines the so-called quantization of the operator \(\boldsymbol{L}^{\vartheta}\). The case \(\vartheta=0\) corresponds to the standard quantization. It is simply denoted by \(\boldsymbol{L}(\boldsymbol{s},\varepsilon\mathbf{D})\) such that: \[\boldsymbol{L}(\boldsymbol{s},\varepsilon\mathbf{D})\boldsymbol{u}(\boldsymbol {s})=\frac{1}{(2\pi)^{4}}\int_{\mathbb{R}^{4}}\mathrm{e}^{\mathrm{i} \boldsymbol{\xi}\cdot\boldsymbol{s}}\,\boldsymbol{L}(\boldsymbol{s}, \varepsilon\boldsymbol{\xi})\widehat{\boldsymbol{u}}(\boldsymbol{\xi})\, \mathrm{d}\boldsymbol{\xi}\,, \tag{25}\] where: \[\widehat{\boldsymbol{u}}(\boldsymbol{\xi})=\int_{\mathbb{R}^{4}}\mathrm{e}^{- \mathrm{i}\boldsymbol{\xi}\cdot\boldsymbol{s}}\,\boldsymbol{u}(\boldsymbol{s} )\mathrm{d}\boldsymbol{s} \tag{26}\] stands for the space-time Fourier transform of \(\boldsymbol{u}(\boldsymbol{s})\). The case \(\vartheta=1/2\) corresponds to Weyl quantization, which is usually denoted by \(\boldsymbol{L}^{W}(\boldsymbol{s},\varepsilon\mathbf{D})\). Now let \(\mathbf{u},\mathbf{v}\) be temperate distributions defined on \(\mathbb{R}^{4}\) and let \(\mathbf{W}^{\vartheta}_{\varepsilon}[\mathbf{u},\mathbf{v}](\mathbf{s},\mathbf{\xi})\) be their Wigner transform (9) (with \(m=4\) here). Then one has the trace formula [23]: \[\left(\mathbf{L}^{\vartheta}(\mathbf{s},\varepsilon\mathbf{D})\mathbf{u},\mathbf{v}\right)_{ L^{2}}=\operatorname{Tr}\int_{\mathbb{R}^{4}\times\mathbb{R}^{4}}\mathbf{L}(\mathbf{s}, \mathbf{\xi})\mathbf{W}^{\vartheta}_{\varepsilon}[\mathbf{u},\mathbf{v}](\mathrm{d}\mathbf{s}, \mathrm{d}\mathbf{\xi})\,. \tag{27}\] For a sequence \((\mathbf{u}_{\varepsilon})\) uniformly bounded in \(L^{2}(\mathbb{R}^{4})\), one can establish in particular that there exists a positive measure \(\mathbf{W}[\mathbf{u}_{\varepsilon}]\) such that, up to extracting a subsequence if need be (see _e.g._[54, Theorem 5.2]): \[\lim_{\varepsilon\to 0}\left(\mathbf{L}^{\vartheta}(\mathbf{s},\varepsilon\mathbf{D}) \mathbf{u}_{\varepsilon},\mathbf{u}_{\varepsilon}\right)_{L^{2}}=\operatorname{Tr} \int_{\mathbb{R}^{4}\times\mathbb{R}^{4}}\mathbf{L}(\mathbf{s},\mathbf{\xi})\mathbf{W}[\mathbf{u}_ {\varepsilon}](\mathrm{d}\mathbf{s},\mathrm{d}\mathbf{\xi})\,,\quad\forall\mathbf{L}\,, \tag{28}\] independently of the quantization \(\vartheta\). We thus recover the Wigner measure \(\mathbf{W}[\mathbf{u}_{\varepsilon}]\) of \((\mathbf{u}_{\varepsilon})\) invoked in Sect. 
2.2, which can also be interpreted as the weak limit of its self Wigner transform \(\mathbf{W}^{\vartheta}_{\varepsilon}[\mathbf{u}_{\varepsilon},\mathbf{u}_{\varepsilon}]:=\mathbf{W}^{\vartheta}_{\varepsilon}[\mathbf{u}_{\varepsilon}]\). It describes the limit energy density of the sequence \((\mathbf{u}_{\varepsilon})\) in the phase space \(\mathbb{R}^{4}_{\mathbf{s}}\times\mathbb{R}^{4}_{\mathbf{\xi}}\). The observable \(\mathbf{L}(\mathbf{s},\mathbf{\xi})\) is used to select any quadratic observable or quantity of interest associated to this energy: the kinetic energy, or the free energy, or the Poynting vector as in Eq. (20) for example. Next, we recall some formulas of pseudo-differential calculus which involve Wigner transforms and semiclassical pseudo-differential operators. In [8] for example, various formal rules of pseudo-differential calculus were given for quantities like \(\mathbf{W}^{\frac{1}{2}}_{\varepsilon}[\mathbf{L}(\mathbf{x},\varepsilon\mathbf{D}_{\mathbf{x}})\mathbf{u}_{\varepsilon},\mathbf{v}_{\varepsilon}]\), mixing the quantization chosen for the Wigner transform (9) (\(\vartheta=\frac{1}{2}\)) and the standard quantization of the semiclassical operator \(\mathbf{L}(\mathbf{x},\varepsilon\mathbf{D}_{\mathbf{x}})\) (\(\vartheta=0\)). These formulas have been revisited in [2] using the same standard quantization for both the Wigner transforms and pseudo-differential operators, which seems more natural in view of the trace formula (27). They also significantly simplify the calculations below. They are recalled here, extending slightly the results of [2] for scalar-valued Wigner transforms to matrix-valued Wigner transforms. The proofs are straightforward applications of the proofs in [2, Appendix B] for the scalar case. For the chosen standard quantization \(\vartheta=0\), the Wigner transform (9) reads: \[\mathbf{W}_{\varepsilon}[\mathbf{u}_{\varepsilon},\mathbf{v}_{\varepsilon}](\mathbf{s},\mathbf{\xi}):=\frac{1}{(2\pi)^{4}}\int_{\mathbb{R}^{4}}\mathrm{e}^{\mathrm{i}\mathbf{\xi}\cdot\mathbf{\tau}}\,\mathbf{u}_{\varepsilon}(\mathbf{s}-\varepsilon\mathbf{\tau})\mathbf{v}_{\varepsilon}^{*}(\mathbf{s})\,\mathrm{d}\mathbf{\tau}\,, \tag{29}\] dropping the superscript \(\vartheta\) from now on. The self Wigner transform \(\mathbf{W}_{\varepsilon}[\mathbf{u}_{\varepsilon},\mathbf{u}_{\varepsilon}]\) is denoted by \(\mathbf{W}_{\varepsilon}[\mathbf{u}_{\varepsilon}]\), as implicitly done in Eq. (28). Let \(\mathbf{L}(\mathbf{s},\mathbf{\xi})\) be a smooth matrix-valued function defined on \(\mathbb{R}^{4}_{\mathbf{s}}\times\mathbb{R}^{4}_{\mathbf{\xi}}\) whose derivatives are slowly increasing,1 and recall the notation of Eq. (25) for the operator \(\mathbf{L}(\mathbf{s},\varepsilon\mathbf{D})\) with \(\vartheta=0\). Footnote 1: That is, \(\mathbf{L}\) satisfies for some \(m\geq 0\) [23, Prop. 1.8]: \[|\partial_{\mathbf{s}}^{\alpha}\partial_{\mathbf{\xi}}^{\beta}\mathbf{L}(\mathbf{s},\mathbf{\xi})|\leq C_{\alpha,\beta}(1+|\mathbf{\xi}|)^{m}\,,\quad\forall\alpha,\beta\in\mathbb{N}^{4}\,.\]
Then one has: \[\mathbf{W}_{\varepsilon}[\mathbf{L}(\mathbf{s},\varepsilon\mathbf{D})\mathbf{u}_{ \varepsilon},\mathbf{v}_{\varepsilon}]=\mathbf{L}\left(\mathbf{s},\mathbf{\xi}\right)\mathbf{W}_{ \varepsilon}[\mathbf{u}_{\varepsilon},\mathbf{v}_{\varepsilon}]-\frac{\varepsilon}{ \mathrm{i}}\mathbf{\nabla}_{\mathbf{s}}\mathbf{L}\cdot\mathbf{\nabla}_{\mathbf{\xi}}\mathbf{W}_{ \varepsilon}[\mathbf{u}_{\varepsilon},\mathbf{v}_{\varepsilon}]\\ -\frac{\varepsilon}{\mathrm{i}}\left(\mathbf{\nabla}_{\mathbf{s}}\cdot \mathbf{\nabla}_{\mathbf{\xi}}\mathbf{L}\right)\mathbf{W}_{\varepsilon}[\mathbf{u}_{\varepsilon}, \mathbf{v}_{\varepsilon}]+\mathrm{O}(\varepsilon^{2})\,, \tag{30}\] and: \[\boldsymbol{W}_{\varepsilon}[\boldsymbol{u}_{\varepsilon},\boldsymbol{L}( \boldsymbol{s},\varepsilon\mathbf{D})\boldsymbol{v}_{\varepsilon}] =\left(\boldsymbol{L}(\boldsymbol{s},\boldsymbol{\xi}- \varepsilon\mathbf{D})\boldsymbol{W}_{\varepsilon}[\boldsymbol{u}_{ \varepsilon},\boldsymbol{v}_{\varepsilon}]\right)^{*}\] \[=\boldsymbol{W}_{\varepsilon}[\boldsymbol{u}_{\varepsilon}, \boldsymbol{v}_{\varepsilon}]\boldsymbol{L}^{*}(\boldsymbol{s},\boldsymbol{ \xi})-\frac{\varepsilon}{\mathrm{i}}\boldsymbol{\nabla}_{\boldsymbol{s}} \boldsymbol{W}_{\varepsilon}[\boldsymbol{u}_{\varepsilon},\boldsymbol{v}_{ \varepsilon}]\cdot\boldsymbol{\nabla}_{\boldsymbol{\xi}}\boldsymbol{L}^{*}+ \mathrm{O}(\varepsilon^{2})\,. \tag{31}\] Here \(\boldsymbol{\nabla}_{\boldsymbol{s}}\boldsymbol{A}\cdot\boldsymbol{\nabla}_{ \boldsymbol{\xi}}\boldsymbol{B}:=\boldsymbol{\nabla}_{\boldsymbol{x}} \boldsymbol{A}\cdot\boldsymbol{\nabla}_{\boldsymbol{k}}\boldsymbol{B}+ \partial_{t}\boldsymbol{A}\partial_{\omega}\boldsymbol{B}\), and the differential operator \(\varepsilon\mathbf{D}\) within the observable \(\boldsymbol{L}\) acts on \(\boldsymbol{W}_{\varepsilon}[\boldsymbol{u}_{\varepsilon},\boldsymbol{v}_{ \varepsilon}]\) so that: \[\boldsymbol{L}(\boldsymbol{s},\boldsymbol{\xi}-\varepsilon\mathbf{D}) \boldsymbol{W}_{\varepsilon}[\boldsymbol{u}_{\varepsilon},\boldsymbol{v}_{ \varepsilon}](\boldsymbol{s},\boldsymbol{\xi})=\frac{1}{(2\pi)^{4}}\int_{ \mathbb{R}^{4}}\mathrm{e}^{\mathrm{i}\boldsymbol{\eta}\cdot\boldsymbol{s}} \,\boldsymbol{L}(\boldsymbol{s},\boldsymbol{\xi}-\varepsilon\boldsymbol{\eta}) \widehat{\boldsymbol{W}}_{\varepsilon}[\boldsymbol{u}_{\varepsilon}, \boldsymbol{v}_{\varepsilon}](\boldsymbol{\eta},\boldsymbol{\xi})\mathrm{d} \boldsymbol{\eta}\,,\] where \(\widehat{\boldsymbol{W}}_{\varepsilon}[\boldsymbol{u}_{\varepsilon}, \boldsymbol{v}_{\varepsilon}](\boldsymbol{\eta},\boldsymbol{\xi})\) is the Fourier transform (26) of \(\boldsymbol{W}_{\varepsilon}[\boldsymbol{u}_{\varepsilon},\boldsymbol{v}_{ \varepsilon}](\boldsymbol{s},\boldsymbol{\xi})\) with respect to the space-time variable \(\boldsymbol{s}\). ### Maxwell's equations as a semiclassical operator Here Eq. (10) is conveniently written as a pseudo-differential operator for the derivation of the high-frequency regime \(\varepsilon\to 0\). We consider slowly varying materials characterized by the Hermitian optical response \(\boldsymbol{K}_{0}(\boldsymbol{x})\) which is independent of the small parameter \(\varepsilon\) and we ignore its random fluctuations for the time being (\(\sigma=0\)). 
Premultiplying it by \(\frac{\varepsilon}{\mathrm{i}}\), Maxwell's system (4) reads: \[\boldsymbol{L}_{\varepsilon}(\boldsymbol{s},\varepsilon\mathbf{D}_{\boldsymbol{s}})\boldsymbol{u}_{\varepsilon}=\mathbf{0}\,,\quad\boldsymbol{s}=(\boldsymbol{x},t)\in\mathcal{O}\times\mathbb{R}_{+}^{*}\,, \tag{32}\] where \(\mathbf{D}_{\boldsymbol{s}}=(\mathbf{D}_{\boldsymbol{x}},\mathbf{D}_{t})\) for \(\mathbf{D}_{\boldsymbol{x}}=\frac{1}{\mathrm{i}}\boldsymbol{\nabla}_{\boldsymbol{x}}\), \(\mathrm{D}_{t}=\frac{1}{\mathrm{i}}\partial_{t}\). The pseudo-differential operator \(\boldsymbol{L}_{\varepsilon}\) is given by: \[\boldsymbol{L}_{\varepsilon}(\boldsymbol{s},\boldsymbol{\xi})=\omega\boldsymbol{I}+\boldsymbol{L}_{0}(\boldsymbol{x},\boldsymbol{k})+\frac{\varepsilon}{\mathrm{i}}\boldsymbol{L}_{1}(\boldsymbol{x},\omega)\,, \tag{33}\] where the \(6\times 6\) matrices \(\boldsymbol{L}_{0}(\boldsymbol{x},\boldsymbol{k})\) and \(\boldsymbol{L}_{1}(\boldsymbol{x},\omega)\) are given by Eq. (14), and the symmetric Maxwell operator \(\boldsymbol{M}(\boldsymbol{k})\) is given by Eq. (11). In Eq. (33) it is assumed that the rescaled susceptibility kernel \(\boldsymbol{K}_{d}\) acts as: \[(\boldsymbol{K}_{d}\star\boldsymbol{u}_{\varepsilon})(\boldsymbol{x},t)=\int_{0}^{t}\boldsymbol{K}_{d}\left(\boldsymbol{x},\frac{\tau}{\varepsilon}\right)\boldsymbol{u}_{\varepsilon}(\boldsymbol{x},t-\tau)\mathrm{d}\tau\,.\] A similar model is adopted in _e.g._ [35]. Also \(\boldsymbol{u}_{\varepsilon}(\boldsymbol{x},t)\) stands for the electromagnetic field satisfying Eq. (32) for highly oscillating initial conditions characterized by the small scale \(\varepsilon\). We derive its Wigner measure and its evolution properties inside \(\mathcal{O}\) in the subsequent subsections, ignoring boundary effects for now. They can be addressed as in [1] for example, at least for the hyperbolic-elliptic set. **Remark 1**.: _All derivations involving the Wigner transforms and Wigner measures of \((\boldsymbol{u}_{\varepsilon})\) in the subsequent Sect. 3.3 and Sect. 3.4 are formal. They can be made rigorous as follows. For all \(\boldsymbol{s}\in\mathcal{O}\times\mathbb{R}_{+}^{*}\), let \(\mathcal{U}\) be an open set such that \(\overline{\mathcal{U}}\subset\mathcal{O}\times\mathbb{R}_{+}^{*}\), \(\boldsymbol{s}\in\mathcal{U}\), and let \(\varphi\in\mathcal{C}_{0}^{\infty}(\mathcal{O}\times\mathbb{R}_{+}^{*})\) with \(\varphi\equiv 1\) on \(\overline{\mathcal{U}}\). Multiplying Eq. (32) by \(\varphi\), we obtain an alternative equation where \(\boldsymbol{u}_{\varepsilon}\) is replaced by \(\varphi\boldsymbol{u}_{\varepsilon}\), with additional terms. The Wigner measures related to these additional terms are concentrated outside \(\overline{\mathcal{U}}\) because of the semiclassical pseudo-differential calculus [54, Theorem 4.24]. Besides, it can be shown that \(\boldsymbol{W}[\varphi\boldsymbol{u}_{\varepsilon}](\boldsymbol{s},\boldsymbol{\xi})\) is actually independent of \(\varphi\) chosen as at the beginning of this remark.
In the sequel we shall define the Wigner measure of \((\boldsymbol{u}_{\varepsilon})\), \(\boldsymbol{W}[\boldsymbol{u}_{\varepsilon}](\boldsymbol{s},\boldsymbol{\xi})\) for \(\boldsymbol{s}\in\mathcal{O}\times\mathbb{R}_{+}^{*}\), \(\boldsymbol{\xi}\in\mathbb{R}^{4}\setminus\{\boldsymbol{k}=\mathbf{0}\}\), as \(\boldsymbol{W}[\varphi\boldsymbol{u}_{\varepsilon}](\boldsymbol{s},\boldsymbol{ \xi})\), with this choice of \(\varphi\)._ ### Dispersion properties The foregoing pseudo-differential calculus and space-time Wigner transform of Sect. 3.1 are now used with Maxwell's system (32). Computing the space-time Wigner transform (29) of \(\mathbf{u}_{\varepsilon}\) from Eq. (32), yields: \[\mathbf{W}_{\varepsilon}\left[\mathbf{L}_{\varepsilon}(\mathbf{s},\varepsilon\mathbf{D}_{ \mathbf{s}})\mathbf{u}_{\varepsilon},\mathbf{u}_{\varepsilon}\right]-\mathbf{W}_{\varepsilon} \left[\mathbf{u}_{\varepsilon},\mathbf{L}_{\varepsilon}(\mathbf{s},\varepsilon\mathbf{D}_{ \mathbf{s}})\mathbf{u}_{\varepsilon}\right]=\mathbf{0}\,. \tag{34}\] However, invoking the rules (30) and (31) we get: \[\mathbf{W}_{\varepsilon}[\mathbf{L}_{\varepsilon}(\mathbf{s},\varepsilon\mathbf{D}_{ \mathbf{s}})\mathbf{u}_{\varepsilon},\mathbf{u}_{\varepsilon}]=\left(\omega\mathbf{I}+\mathbf{L}_{ 0}+\frac{\varepsilon}{\mathrm{i}}\mathbf{L}_{1}\right)\mathbf{W}_{\varepsilon}[\mathbf{u} _{\varepsilon}]-\frac{\varepsilon}{\mathrm{i}}\mathbf{\nabla}_{\mathbf{x}}\mathbf{L}_{0} \cdot\mathbf{\nabla}_{\mathbf{k}}\mathbf{W}_{\varepsilon}[\mathbf{u}_{\varepsilon}]\\ -\frac{\varepsilon}{\mathrm{i}}(\mathbf{\nabla}_{\mathbf{x}}\cdot\mathbf{ \nabla}_{\mathbf{k}}\mathbf{L}_{0})\mathbf{W}_{\varepsilon}[\mathbf{u}_{\varepsilon}]+ \mathrm{O}(\varepsilon^{2})\,, \tag{35}\] and: \[\mathbf{W}_{\varepsilon}[\mathbf{u}_{\varepsilon},\mathbf{L}_{\varepsilon}( \mathbf{s},\varepsilon\mathbf{D}_{\mathbf{s}})\mathbf{u}_{\varepsilon}]=\mathbf{W}_{ \varepsilon}[\mathbf{u}_{\varepsilon}]\left(\omega\mathbf{I}+\mathbf{L}_{0}^{*}-\frac{ \varepsilon}{\mathrm{i}}\mathbf{L}_{1}^{*}\right)-\frac{\varepsilon}{\mathrm{i} }\partial_{t}\mathbf{W}_{\varepsilon}[\mathbf{u}_{\varepsilon}]\\ -\frac{\varepsilon}{\mathrm{i}}\mathbf{\nabla}_{\mathbf{x}}\mathbf{W}_{ \varepsilon}[\mathbf{u}_{\varepsilon}]\cdot\mathbf{\nabla}_{\mathbf{k}}\mathbf{L}_{0}^{*}+ \mathrm{O}(\varepsilon^{2})\,. \tag{36}\] Considering the leading-order term, one obtains as \(\varepsilon\to 0\) (see also for instance [54, Theorem 5.3]): \[\mathbf{L}_{0}(\mathbf{x},\mathbf{k})\mathbf{W}(\mathbf{s},\mathbf{\xi})-\mathbf{W}(\mathbf{s},\mathbf{\xi})\mathbf{ L}_{0}^{*}(\mathbf{x},\mathbf{k})=\mathbf{0} \tag{37}\] for the Wigner measure \(\mathbf{W}:=\mathbf{W}[\mathbf{u}_{\varepsilon}]\) (independent of \(\varepsilon\)) of the sequence \((\mathbf{u}_{\varepsilon})\) (dependent of \(\varepsilon\)) given by Eq. (28) in phase space \((\mathbf{s},\mathbf{\xi})\in\mathcal{O}\times\mathbb{R}_{+}^{*}\times\mathbb{R}_{\mathbf{ \xi}}^{4}\setminus\{\mathbf{k}=\mathbf{0}\}\). Again, the notation of [23, p. 330] is used and the left-hand side in Eq. (37) above is clearly independent of \(\varepsilon\). The Wigner measure is a non-negative, Hermitian matrix. Besides, reminding Eq. (15) one deduces from Eqs. (35)-(37): \[\mathbf{W}=\sum_{\alpha}\delta(\omega+\omega_{\alpha})\mathbf{W}_{\alpha}\,, \tag{38}\] where \(\mathbf{W}_{\alpha}=\mathbf{\Pi}_{\alpha}\mathbf{W}=\mathbf{W}\mathbf{\Pi}_{\alpha}^{*}=\mathbf{\Pi} _{\alpha}\mathbf{W}\mathbf{\Pi}_{\alpha}^{*}\); this is Eq. (17). 
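The dispersion relation (37)-(38) can be checked numerically in the isotropic, lossless case mentioned above. The following minimal sketch (in Python) assumes the usual curl structure of the symmetric Maxwell operator, namely \(\boldsymbol{M}(\boldsymbol{k})\boldsymbol{u}=(-\boldsymbol{k}\times\boldsymbol{H},\boldsymbol{k}\times\boldsymbol{E})\) for \(\boldsymbol{u}=(\boldsymbol{E},\boldsymbol{H})\) -- an assumption, since Eq. (11) is not reproduced here, although the spectrum below is unchanged under the usual sign conventions -- and verifies that \(\boldsymbol{L}_{0}=\boldsymbol{K}_{0}^{-1}\boldsymbol{M}(\boldsymbol{k})\) has the three eigenvalues \(0\) and \(\pm c_{0}|\boldsymbol{k}|\), each of multiplicity two.

```python
import numpy as np

def cross_matrix(k):
    # [k]_x such that [k]_x v = k x v
    kx, ky, kz = k
    return np.array([[0.0, -kz,  ky],
                     [ kz, 0.0, -kx],
                     [-ky,  kx, 0.0]])

def maxwell_symbol(k):
    # Assumed curl structure of the symmetric 6x6 Maxwell operator M(k)
    # acting on u = (E, H).
    Z = np.zeros((3, 3))
    C = cross_matrix(k)
    return np.block([[Z, -C], [C, Z]])

# Isotropic, lossless background: K_0 = diag(eps0 * I, mu0 * I)
eps0, mu0 = 2.0, 1.5
c0 = 1.0 / np.sqrt(eps0 * mu0)
K0 = np.diag([eps0] * 3 + [mu0] * 3)

k = np.array([0.3, -1.2, 0.7])
L0 = np.linalg.inv(K0) @ maxwell_symbol(k)

# L0 is self-adjoint for the K0-weighted scalar product, so its spectrum is real:
# {0, 0, +c0|k|, +c0|k|, -c0|k|, -c0|k|}, i.e. three modes of multiplicity two.
omega = np.sort(np.linalg.eigvals(L0).real)
ck = c0 * np.linalg.norm(k)
print(omega)
print("expected:", np.sort([0.0, 0.0, ck, ck, -ck, -ck]))
```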
We note that here we do not consider eigenvalues crossing, when their multiplicities possibly change. We leave this case for future works. It is considered for example in [20] for the Schrodinger equation. **Remark 2**.: _As noticed in Sect. 2.3, the non propagative mode \(\omega_{0}=0\) (for \(\alpha=\) "0") is always an eigenvalue of \(\mathbf{L}_{0}\) with multiplicity \(2\), with the eigenvectors \(\mathbf{a}_{1}=(\hat{\mathbf{k}},\mathbf{0})\) and \(\mathbf{a}_{2}=(\mathbf{0},\hat{\mathbf{k}})\) for \(\mathbf{k}\neq\mathbf{0}\). Recall that the optical response \(\mathbf{K}_{0}\) is positive definite and let \(\mathbf{\mathcal{K}}\) be the \(2\times 2\) matrix with elements \(\mathcal{K}_{jk}=\langle\mathbf{a}_{j},\mathbf{a}_{k}\rangle\), \(j,k=1,2\). One has \(\mathbf{\mathcal{K}}=\mathbf{Q}^{*}\mathbf{Q}\) where:_ \[\mathbf{Q}=\begin{bmatrix}\sqrt{\hat{\epsilon}_{0}}&\frac{\hat{\xi}_{0}}{\sqrt{ \hat{\epsilon}_{0}}}\\ 0&\sqrt{\hat{\mu}_{0}-\frac{|\hat{\xi}_{0}|^{2}}{\hat{\epsilon}_{0}}}\end{bmatrix}\,,\] _with \(\hat{\epsilon}_{0}(\mathbf{x},\mathbf{k})=\hat{\mathbf{k}}^{*}\mathbf{\epsilon}_{0}(\mathbf{x})\hat {\mathbf{k}}\), \(\hat{\mu}_{0}(\mathbf{x},\mathbf{k})=\hat{\mathbf{k}}^{*}\mathbf{\mu}_{0}(\mathbf{x})\hat{\mathbf{k}}\), and \(\hat{\xi}_{0}(\mathbf{x},\mathbf{k})=\hat{\mathbf{k}}^{*}\mathbf{\xi}_{0}(\mathbf{x})\hat{\mathbf{k}}\) for \(\mathbf{k}\neq\mathbf{0}\). Then the orthonormal eigenvectors with respect to the scalar product \(\langle\cdot,\cdot\rangle\) are \(\mathbf{b}_{0}^{j}=\mathbf{Q}^{-1}\mathbf{a}_{j}\) since \(\mathbf{a}_{j}^{*}\mathbf{a}_{k}=\delta_{jk}\), yielding:_ \[\mathbf{b}_{0}^{(1)}(\hat{\mathbf{k}})=\frac{1}{\sqrt{\hat{\epsilon}_{0}}}\begin{pmatrix} \hat{\mathbf{k}}\\ \mathbf{0}\end{pmatrix}\,,\quad\mathbf{b}_{0}^{(2)}(\hat{\mathbf{k}})=\frac{1}{\sqrt{\hat{ \epsilon}_{0}\hat{\mu}_{0}-|\hat{\xi}_{0}|^{2}}}\begin{pmatrix}-\frac{\hat{\xi}_{ 0}}{\sqrt{\hat{\epsilon}_{0}}}\hat{\mathbf{k}}\\ \sqrt{\hat{\epsilon}_{0}}\hat{\mathbf{k}}\end{pmatrix}\,. \tag{39}\] ### Evolution properties Gathering the first order terms proportional to \(\varepsilon\) in Eq. (34) one obtains in the limit \(\varepsilon\to 0\): \[\partial_{t}\boldsymbol{W}+\boldsymbol{\nabla}_{\boldsymbol{x}} \boldsymbol{W}\cdot\boldsymbol{\nabla}_{\boldsymbol{k}}\boldsymbol{L}_{0}^{*}- \boldsymbol{\nabla}_{\boldsymbol{x}}\boldsymbol{L}_{0}\cdot\boldsymbol{ \nabla}_{\boldsymbol{k}}\boldsymbol{W}-(\boldsymbol{\nabla}_{\boldsymbol{x} }\cdot\boldsymbol{\nabla}_{\boldsymbol{k}}\boldsymbol{L}_{0})\boldsymbol{W}+ \boldsymbol{L}_{1}\boldsymbol{W}+\boldsymbol{W}\boldsymbol{L}_{1}^{*}= \boldsymbol{0}\,. \tag{40}\] Next we multiply Eq. (40) on the left by \(\boldsymbol{\Pi}_{\alpha}\) and on the right by \(\boldsymbol{\Pi}_{\alpha}^{*}\). From the property of projectors \(\boldsymbol{\Pi}_{\alpha}\boldsymbol{\Pi}_{\beta}=\delta_{\alpha\beta} \boldsymbol{\Pi}_{\alpha}\) we first derive: \[\boldsymbol{\Pi}_{\alpha}\partial_{x_{j}}\boldsymbol{L}_{0} =\boldsymbol{\Pi}_{\alpha}\sum_{\beta}\big{(}(\partial_{x_{j}} \omega_{\beta})\boldsymbol{\Pi}_{\beta}+\omega_{\beta}(\partial_{x_{j}} \boldsymbol{\Pi}_{\beta})\big{)}\] \[=(\partial_{x_{j}}\omega_{\alpha})\boldsymbol{\Pi}_{\alpha}+( \partial_{x_{j}}\boldsymbol{\Pi}_{\alpha})(\omega_{\alpha}-\boldsymbol{L}_{0})\,.\] The same holds for partial derivatives with respect to \(k_{j}\). 
Besides: \[(\partial_{k_{j}}\boldsymbol{W})\boldsymbol{\Pi}_{\alpha}^{*} =\partial_{k_{j}}(\boldsymbol{W}\boldsymbol{\Pi}_{\alpha}^{*})- \boldsymbol{W}(\partial_{k_{j}}\boldsymbol{\Pi}_{\alpha}^{*})\] \[=\partial_{k_{j}}\boldsymbol{W}_{\alpha}-\boldsymbol{W}(\partial _{k_{j}}\boldsymbol{\Pi}_{\alpha})^{*}\] where again the same holds for partial derivatives with respect to \(x_{j}\). Therefore (with summation over the repeated index \(j\)): \[\boldsymbol{\Pi}_{\alpha}(\partial_{x_{j}}\boldsymbol{L}_{0})( \partial_{k_{j}}\boldsymbol{W})\boldsymbol{\Pi}_{\alpha}^{*} =(\partial_{x_{j}}\omega_{\alpha})(\partial_{k_{j}}(\boldsymbol{ \Pi}_{\alpha}\boldsymbol{W}_{\alpha})-(\partial_{k_{j}}\boldsymbol{\Pi}_{ \alpha})\boldsymbol{W}_{\alpha}-\boldsymbol{W}_{\alpha}(\partial_{k_{j}} \boldsymbol{\Pi}_{\alpha})^{*})\] \[\quad+(\partial_{x_{j}}\boldsymbol{\Pi}_{\alpha})(\omega_{ \alpha}-\boldsymbol{L}_{0})(\partial_{k_{j}}\boldsymbol{W}_{\alpha}- \boldsymbol{W}(\partial_{k_{j}}\boldsymbol{\Pi}_{\alpha})^{*})\] \[=(\partial_{x_{j}}\omega_{\alpha})(\partial_{k_{j}}\boldsymbol{W }_{\alpha}-(\partial_{k_{j}}\boldsymbol{\Pi}_{\alpha})\boldsymbol{W}_{\alpha} -\boldsymbol{W}_{\alpha}(\partial_{k_{j}}\boldsymbol{\Pi}_{\alpha})^{*})\] \[\quad-(\partial_{x_{j}}\boldsymbol{\Pi}_{\alpha})((\partial_{k_{j }}(\omega_{\alpha}-\boldsymbol{L}_{0}))\boldsymbol{W}_{\alpha}+(\omega_{ \alpha}-\boldsymbol{L}_{0})\boldsymbol{W}(\partial_{k_{j}}\boldsymbol{\Pi}_{ \alpha})^{*})\] since \((\omega_{\alpha}-\boldsymbol{L}_{0})\boldsymbol{W}_{\alpha}=\boldsymbol{0}\). Because \(\partial_{k_{j}}\) and \(\partial_{x_{j}}\) play symmetric roles, one also obtains: \[\boldsymbol{\Pi}_{\alpha}(\partial_{x_{j}}\boldsymbol{W})( \partial_{k_{j}}\boldsymbol{L}_{0}^{*})\boldsymbol{\Pi}_{\alpha}^{*} =(\boldsymbol{\Pi}_{\alpha}(\partial_{k_{j}}\boldsymbol{L}_{0})( \partial_{x_{j}}\boldsymbol{W})\boldsymbol{\Pi}_{\alpha}^{*})^{*}\] \[=(\partial_{k_{j}}\omega_{\alpha})(\partial_{x_{j}}\boldsymbol{W }_{\alpha}-(\partial_{x_{j}}\boldsymbol{\Pi}_{\alpha})\boldsymbol{W}_{\alpha}- \boldsymbol{W}_{\alpha}(\partial_{x_{j}}\boldsymbol{\Pi}_{\alpha})^{*})\] \[\quad-(\boldsymbol{W}_{\alpha}\partial_{x_{j}}(\omega_{\alpha}- \boldsymbol{L}_{0}^{*})+(\partial_{x_{j}}\boldsymbol{\Pi}_{\alpha})\boldsymbol{W }(\omega_{\alpha}-\boldsymbol{L}_{0}^{*}))(\partial_{k_{j}}\boldsymbol{\Pi}_{ \alpha})^{*}\,.\] Gathering these expressions, Eq. 
(40) yields: \[\partial_{t}\boldsymbol{W}_{\alpha}+\{\omega_{\alpha},\boldsymbol{W }_{\alpha}\}+(\boldsymbol{\nabla}_{\boldsymbol{x}}\omega_{\alpha}\cdot \boldsymbol{\nabla}_{\boldsymbol{k}}\boldsymbol{\Pi}_{\alpha}-\boldsymbol{ \nabla}_{\boldsymbol{x}}\boldsymbol{\Pi}_{\alpha}\cdot\boldsymbol{\nabla}_{ \boldsymbol{k}}\boldsymbol{L}_{0}-\boldsymbol{\Pi}_{\alpha}\boldsymbol{\nabla}_{ \boldsymbol{x}}\cdot\boldsymbol{\nabla}_{\boldsymbol{k}}\boldsymbol{L}_{0}) \boldsymbol{W}_{\alpha}\] \[\quad-\boldsymbol{W}_{\alpha}(\boldsymbol{\nabla}_{\boldsymbol{k} }\omega_{\alpha}\cdot\boldsymbol{\nabla}_{\boldsymbol{x}}\boldsymbol{\Pi}_{ \alpha}-\boldsymbol{\nabla}_{\boldsymbol{k}}\boldsymbol{\Pi}_{\alpha}\cdot \boldsymbol{\nabla}_{\boldsymbol{x}}\boldsymbol{L}_{0})^{*}+\boldsymbol{L}_{ \alpha}\boldsymbol{W}_{\alpha}+\boldsymbol{W}_{\alpha}\boldsymbol{L}_{\alpha}^{*}= \boldsymbol{0}\,, \tag{41}\] where \(\{\omega_{\alpha},\boldsymbol{W}_{\alpha}\}=\boldsymbol{\nabla}_{\boldsymbol{k} }\omega_{\alpha}\cdot\boldsymbol{\nabla}_{\boldsymbol{x}}\boldsymbol{W}_{\alpha }-\boldsymbol{\nabla}_{\boldsymbol{x}}\omega_{\alpha}\cdot\boldsymbol{\nabla}_ {\boldsymbol{k}}\boldsymbol{W}_{\alpha}\) and \(\boldsymbol{L}_{\alpha}=\boldsymbol{\Pi}_{\alpha}\boldsymbol{L}_{1}\). Next consider the matrix \(\boldsymbol{M}_{\alpha}=\partial_{x_{j}}\omega_{\alpha}\partial_{k_{j}} \boldsymbol{\Pi}_{\alpha}-\partial_{x_{j}}\boldsymbol{\Pi}_{\alpha}\partial_{k _{j}}\boldsymbol{L}_{0}-\boldsymbol{\Pi}_{\alpha}(\partial_{x_{j}}\partial_{k_{j }}\boldsymbol{L}_{0})\); then: \[\boldsymbol{M}_{\alpha} =\partial_{k_{j}}[(\partial_{x_{j}}\omega_{\alpha})\boldsymbol{ \Pi}_{\alpha}]-(\partial_{x_{j}}\partial_{k_{j}}\omega_{\alpha})\boldsymbol{ \Pi}_{\alpha}-\partial_{x_{j}}(\boldsymbol{\Pi}_{\alpha}\partial_{k_{j}} \boldsymbol{L}_{0})\] \[=\partial_{k_{j}}[\partial_{x_{j}}(\omega_{\alpha}\boldsymbol{ \Pi}_{\alpha})-\omega_{\alpha}\partial_{x_{j}}\boldsymbol{\Pi}_{\alpha}]-( \partial_{x_{j}}\partial_{k_{j}}\omega_{\alpha})\boldsymbol{\Pi}_{\alpha}- \partial_{x_{j}}[\partial_{k_{j}}(\boldsymbol{\Pi}_{\alpha}\boldsymbol{L}_{0}) -(\partial_{k_{j}}\boldsymbol{\Pi}_{\alpha})\boldsymbol{L}_{0}]\] \[=-(\partial_{k_{j}}\omega_{\alpha})(\partial_{x_{j}}\boldsymbol{ \Pi}_{\alpha})-(\partial_{k_{j}}\partial_{x_{j}}\boldsymbol{\Pi}_{\alpha})( \omega_{\alpha}-\boldsymbol{L}_{0})-(\partial_{x_{j}}\partial_{k_{j}}\omega_{ \alpha})\boldsymbol{\Pi}_{\alpha}+(\partial_{k_{j}}\boldsymbol{\Pi}_{\alpha})( \partial_{x_{j}}\boldsymbol{L}_{0})\,.\] Letting (because again \((\omega_{\alpha}-\boldsymbol{L}_{0})\boldsymbol{W}_{\alpha}=\boldsymbol{0}\)): \[\boldsymbol{N}_{\alpha}=\boldsymbol{\nabla}_{\boldsymbol{k}}\boldsymbol{\Pi}_{ \alpha}\cdot\boldsymbol{\nabla}_{\boldsymbol{x}}\boldsymbol{L}_{0}-\boldsymbol{ \nabla}_{\boldsymbol{k}}\omega_{\alpha}\cdot\boldsymbol{\nabla}_{ \boldsymbol{x}}\boldsymbol{\Pi}_{\alpha}-\frac{1}{2}(\boldsymbol{\nabla}_{ \boldsymbol{x}}\cdot\boldsymbol{\nabla}_{\boldsymbol{k}}\omega_{\alpha}) \boldsymbol{I}\,, \tag{42}\] Eq. (41) finally reads: \[\partial_{t}\boldsymbol{W}_{\alpha}+\{\omega_{\alpha},\boldsymbol{W}_{\alpha} \}+(\boldsymbol{L}_{\alpha}+\boldsymbol{N}_{\alpha})\boldsymbol{W}_{\alpha}+ \boldsymbol{W}_{\alpha}(\boldsymbol{L}_{\alpha}+\boldsymbol{N}_{\alpha})^{*}= \boldsymbol{0}\,. \tag{43}\] Alternatively, one may wish to expand the Wigner measure \(\mathbf{W}\) of Eq. 
(17) onto the eigenvectors \(\mathbf{b}_{\alpha}\) as: \[\mathbf{W}=\sum_{\alpha}\delta(\omega+\omega_{\alpha})\mathbf{W}_{\alpha}=\sum_{\alpha} \delta(\omega+\omega_{\alpha})\mathbf{b}_{\alpha}\mathbf{w}_{\alpha}\mathbf{b}_{\alpha }^{*}\,,\] where \(\mathbf{w}_{\alpha}=\mathbf{c}_{\alpha}^{*}\mathbf{W}_{\alpha}\mathbf{c}_{\alpha}\). The evolution properties of the so-called specific intensity, or coherence matrices \(\mathbf{w}_{\alpha}\) are derived multiplying Eq. (43) by \(\mathbf{c}_{\alpha}^{*}\) on the left-hand side, and by \(\mathbf{c}_{\alpha}\) on the right-hand side. Firstly: \[\mathbf{c}_{\alpha}^{*}(\partial_{k_{j}}\omega_{\alpha})(\partial_{x_ {j}}\mathbf{W}_{\alpha})\mathbf{c}_{\alpha} =(\partial_{k_{j}}\omega_{\alpha})[\partial_{x_{j}}\mathbf{w}_{ \alpha}-(\partial_{x_{j}}\mathbf{c}_{\alpha}^{*})\mathbf{W}_{\alpha}\mathbf{c}_{\alpha}- \mathbf{c}_{\alpha}^{*}\mathbf{W}_{\alpha}\partial_{x_{j}}\mathbf{c}_{\alpha}]\] \[=(\partial_{k_{j}}\omega_{\alpha})[\partial_{x_{j}}\mathbf{w}_{ \alpha}-(\partial_{x_{j}}\mathbf{c}_{\alpha}^{*})\mathbf{b}_{\alpha}\mathbf{w}_{ \alpha}-\mathbf{w}_{\alpha}\mathbf{b}_{\alpha}^{*}\partial_{x_{j}}\mathbf{c}_{\alpha}]\] \[=(\partial_{k_{j}}\omega_{\alpha})[\partial_{x_{j}}\mathbf{w}_{ \alpha}+\mathbf{c}_{\alpha}^{*}(\partial_{x_{j}}\mathbf{b}_{\alpha})\mathbf{w}_{ \alpha}+\mathbf{w}_{\alpha}(\partial_{x_{j}}\mathbf{b}_{\alpha}^{*})\mathbf{c}_{\alpha}]\] owing to the orthonormality condition \(\mathbf{c}_{\alpha}^{*}\mathbf{b}_{\beta}=\delta_{\alpha\beta}\mathbf{I}_{A}\) which yields \((\partial_{x_{j}}\mathbf{c}_{\alpha}^{*})\mathbf{b}_{\beta}=-\mathbf{c}_{\alpha}^{*}( \partial_{x_{j}}\mathbf{b}_{\beta})\) (and a similar property with \(\partial_{k_{j}}\)). Likewise: \[\mathbf{c}_{\alpha}^{*}(\partial_{x_{j}}\omega_{\alpha})(\partial_{k_{j}}\mathbf{W}_{ \alpha})\mathbf{c}_{\alpha}=(\partial_{x_{j}}\omega_{\alpha})[\partial_{k_{j}} \mathbf{w}_{\alpha}+\mathbf{c}_{\alpha}^{*}(\partial_{k_{j}}\mathbf{b}_{\alpha}) \mathbf{w}_{\alpha}+\mathbf{w}_{\alpha}(\partial_{k_{j}}\mathbf{b}_{\alpha}^{*}) \mathbf{c}_{\alpha}]\,,\] and consequently: \[\mathbf{c}_{\alpha}^{*}\{\omega_{\alpha},\mathbf{W}_{\alpha}\}\mathbf{c}_{ \alpha}=\{\omega_{\alpha},\mathbf{w}_{\alpha}\}+\mathbf{c}_{\alpha}^{*}\{\omega_{ \alpha},\mathbf{b}_{\alpha}\}\mathbf{w}_{\alpha}+\mathbf{w}_{\alpha}\{\omega_{ \alpha},\mathbf{b}_{\alpha}^{*}\}\mathbf{c}_{\alpha}\,.\] Secondly: \[\mathbf{c}_{\alpha}^{*}(\partial_{k_{j}}\mathbf{\Pi}_{\alpha})(\partial_{ x_{j}}\mathbf{L}_{0})\mathbf{W}_{\alpha}\mathbf{c}_{\alpha} =[\partial_{k_{j}}(\mathbf{c}_{\alpha}^{*}\mathbf{\Pi}_{\alpha})-(\partial _{k_{j}}\mathbf{c}_{\alpha}^{*})\mathbf{\Pi}_{\alpha}](\partial_{x_{j}}\mathbf{L}_{0}) \mathbf{b}_{\alpha}\mathbf{w}_{\alpha}\] \[=(\partial_{k_{j}}\mathbf{c}_{\alpha}^{*})(\mathbf{I}-\mathbf{\Pi}_{\alpha})[ \partial_{x_{j}}(\mathbf{L}_{0}\mathbf{b}_{\alpha})-\mathbf{L}_{0}\partial_{x_{j}}\mathbf{b}_ {\alpha}]\mathbf{w}_{\alpha}\] \[=(\partial_{k_{j}}\mathbf{c}_{\alpha}^{*})(\mathbf{I}-\mathbf{\Pi}_{\alpha})[ \partial_{x_{j}}(\omega_{\alpha}\mathbf{b}_{\alpha})-\mathbf{L}_{0}\partial_{x_{j}} \mathbf{b}_{\alpha}]\mathbf{w}_{\alpha}\] \[=(\partial_{k_{j}}\mathbf{c}_{\alpha}^{*})(\mathbf{I}-\mathbf{\Pi}_{\alpha})( \omega_{\alpha}-\mathbf{L}_{0})(\partial_{x_{j}}\mathbf{b}_{\alpha})\mathbf{w}_{\alpha}\] \[=(\partial_{k_{j}}\mathbf{c}_{\alpha}^{*})(\omega_{\alpha}-\mathbf{L}_{0} )(\partial_{x_{j}}\mathbf{b}_{\alpha})\mathbf{w}_{\alpha}\] \[=-\mathbf{c}_{\alpha}^{*}[\partial_{k_{j}}(\omega_{\alpha}-\mathbf{L}_{0} 
)](\partial_{x_{j}}\mathbf{b}_{\alpha})\mathbf{w}_{\alpha}\,;\] and: \[\mathbf{c}_{\alpha}^{*}(\partial_{k_{j}}\omega_{\alpha})(\partial_{ x_{j}}\mathbf{\Pi}_{\alpha})\mathbf{W}_{\alpha}\mathbf{c}_{\alpha} =(\partial_{k_{j}}\omega_{\alpha})[\partial_{x_{j}}(\mathbf{c}_{ \alpha}^{*}\mathbf{\Pi}_{\alpha})-(\partial_{x_{j}}\mathbf{c}_{\alpha}^{*})\mathbf{\Pi}_{ \alpha}]\mathbf{b}_{\alpha}\mathbf{w}_{\alpha}\] \[=(\partial_{k_{j}}\omega_{\alpha})(\partial_{x_{j}}\mathbf{c}_{\alpha}^ {*})(\mathbf{I}-\mathbf{\Pi}_{\alpha})\mathbf{b}_{\alpha}\mathbf{w}_{\alpha}\] \[=\mathbf{0}\,.\] Thirdly: \[\mathbf{c}_{\alpha}^{*}\{\omega_{\alpha},\mathbf{b}_{\alpha}\}-\mathbf{c}_{ \alpha}^{*}[\partial_{k_{j}}(\omega_{\alpha}-\mathbf{L}_{0})]\partial_{x_{j}}\mathbf{b} _{\alpha}=\mathbf{c}_{\alpha}^{*}[(\partial_{k_{j}}\mathbf{L}_{0})(\partial_{x_{j}}\mathbf{b} _{\alpha})-(\partial_{x_{j}}\omega_{\alpha})(\partial_{k_{j}}\mathbf{b}_{\alpha}) ]\,.\] Letting: \[\mathbf{n}_{\alpha} =\mathbf{c}_{\alpha}^{*}(\{\omega_{\alpha},\mathbf{b}_{\alpha}\}+\mathbf{N}_{ \alpha}\mathbf{b}_{\alpha})\] \[=\mathbf{c}_{\alpha}^{*}(\mathbf{\nabla}_{\mathbf{k}}\mathbf{L}_{0}\cdot\mathbf{ \nabla}_{\mathbf{x}}\mathbf{b}_{\alpha}-\mathbf{\nabla}_{\mathbf{x}}\omega_{\alpha}\cdot\mathbf{ \nabla}_{\mathbf{k}}\mathbf{b}_{\alpha})-\frac{1}{2}(\mathbf{\nabla}_{\mathbf{x}}\cdot\mathbf{ \nabla}_{\mathbf{k}}\omega_{\alpha})\mathbf{I}_{A}\,, \tag{44}\] one can show that \(\mathbf{n}_{\alpha}\) is skew-symmetric owing to Eq. (14). Indeed: \[\mathbf{c}_{\alpha}^{*}(\partial_{k_{j}}\mathbf{L}_{0})\mathbf{b}_{\alpha} =\mathbf{c}_{\alpha}^{*}\mathbf{K}_{0}^{-1}(\mathbf{x})(\partial_{k_{j}}\mathbf{ M}(\mathbf{k}))\mathbf{b}_{\alpha}\] \[=\mathbf{b}_{\alpha}^{*}(\partial_{k_{j}}\mathbf{M})\mathbf{b}_{\alpha}\] \[=(\partial_{k_{j}}\omega_{\alpha})\mathbf{I}_{A}\,.\] Then \((\partial_{x_{j}}\mathbf{b}_{\alpha}^{*})(\partial_{k_{j}}\mathbf{M})\mathbf{b}_{\alpha}+\mathbf{b }_{\alpha}^{*}(\partial_{k_{j}}\mathbf{M})(\partial_{x_{j}}\mathbf{b}_{\alpha})=( \partial_{x_{j}}\partial_{k_{j}}\omega_{\alpha})\mathbf{I}_{A}\), or: \[\mathbf{c}_{\alpha}^{*}(\partial_{k_{j}}\mathbf{L}_{0})(\partial_{x_{j}} \mathbf{b}_{\alpha}) =(\partial_{x_{j}}\partial_{k_{j}}\omega_{\alpha})\mathbf{I}_{A}-(\bm {b}_{\alpha}^{*}(\partial_{k_{j}}\mathbf{M})(\partial_{x_{j}}\mathbf{b}_{\alpha}))^{*} \tag{45}\] \[=(\partial_{x_{j}}\partial_{k_{j}}\omega_{\alpha})\mathbf{I}_{A}-(\bm {c}_{\alpha}^{*}(\partial_{k_{j}}\mathbf{L}_{0})(\partial_{x_{j}}\mathbf{b}_{\alpha}) )^{*}\,;\] besides: \[(\partial_{x_{j}}\omega_{\alpha})\mathbf{c}_{\alpha}^{*}\partial_{k_ {j}}\mathbf{b}_{\alpha} =-(\partial_{x_{j}}\omega_{\alpha})(\partial_{k_{j}}\mathbf{c}_{ \alpha}^{*})\mathbf{b}_{\alpha} \tag{46}\] \[=-(\partial_{x_{j}}\omega_{\alpha})(\partial_{k_{j}}\mathbf{b}_{ \alpha}^{*})\mathbf{c}_{\alpha}\,;\] thus one concludes with (45) and (46) that \(\mathbf{n}_{\alpha}=-\mathbf{n}_{\alpha}^{*}\). Therefore, setting \(\mathbf{\ell}_{\alpha}=\mathbf{c}_{\alpha}^{*}\mathbf{L}_{1}\mathbf{b}_{\alpha}\), Eq. (43) for the specific intensities \(\mathbf{w}_{\alpha}\) finally reads: \[\partial_{t}\mathbf{w}_{\alpha}+\{\omega_{\alpha},\mathbf{w}_{\alpha}\}+\mathbf{ \ell}_{\alpha}\mathbf{w}_{\alpha}+\mathbf{w}_{\alpha}\mathbf{\ell}_{\alpha}^{*}+ \mathbf{n}_{\alpha}\mathbf{w}_{\alpha}-\mathbf{w}_{\alpha}\mathbf{n}_{\alpha}=\mathbf{0}\,. 
\tag{47}\] One can check in particular that for the eigenspace (39) associated with the null eigenvalue (\(\alpha="0"\)), one has \(\mathbf{n}_{0}=\mathbf{c}_{0}^{*}\mathbf{\nabla}_{\mathbf{k}}\mathbf{L}_{0}\cdot\mathbf{\nabla}_{\mathbf{ x}}\mathbf{b}_{0}=\mathbf{b}_{0}^{*}\mathbf{\nabla}_{\mathbf{k}}\mathbf{M}(\mathbf{k})\cdot\mathbf{\nabla}_{ \mathbf{x}}\mathbf{b}_{0}=\mathbf{0}\). The geometrical structure of transport equations (47) is analyzed in _e.g._[34]. Here we show how solving Eq. (47) amounts to track \(\mathbf{w}_{\alpha}\) on its bicharacteristic rays in \(\mathcal{O}\times\mathbb{R}^{3}\). Consider the bicharacteristic (Hamiltonian) equations associated to \(\omega_{\alpha}(\mathbf{x},\mathbf{k})\): \[\frac{\mathrm{d}\mathbf{x}_{\alpha}}{\mathrm{d}t} =\mathbf{\nabla}_{\mathbf{k}}\omega_{\alpha}(\mathbf{x}_{\alpha}(t),\mathbf{k}_{ \alpha}(t))\,,\] \[\frac{\mathrm{d}\mathbf{k}_{\alpha}}{\mathrm{d}t} =-\mathbf{\nabla}_{\mathbf{x}}\omega_{\alpha}(\mathbf{x}_{\alpha}(t),\mathbf{k}_{ \alpha}(t))\,, \tag{48}\] with initial conditions \(\mathbf{x}_{\alpha}(0)=\mathbf{x}_{0}\) and \(\mathbf{k}_{\alpha}(0)=\mathbf{k}_{0}\) lying in the support of the Wigner measure \(\mathbf{W}_{I}\) of the initial conditions \(\mathbf{u}_{\varepsilon}(\cdot,t=0)\). Also introduce \(\mathbf{\Omega}_{\alpha}=\mathbf{\ell}_{\alpha}+\mathbf{n}_{\alpha}\) and the \(A\times A\) matrix \(\mathbf{R}_{\alpha}\) which is such that: \[\frac{\mathrm{d}\mathbf{R}_{\alpha}}{\mathrm{d}t}=-\mathbf{\Omega}_{\alpha}(\mathbf{x}_{ \alpha}(t),\mathbf{k}_{\alpha}(t))\mathbf{R}_{\alpha}(t)\,,\quad\mathbf{R}_{\alpha}(0)= \mathbf{I}_{A}\,. \tag{49}\] Then: \[\mathbf{\Omega}_{\alpha}\mathbf{w}_{\alpha}+\mathbf{w}_{\alpha}\mathbf{ \Omega}_{\alpha}^{*} =-\frac{\mathrm{d}\mathbf{R}_{\alpha}}{\mathrm{d}t}\mathbf{R}_{\alpha}^{-1} \mathbf{w}_{\alpha}-\mathbf{w}_{\alpha}(\mathbf{R}_{\alpha}^{*})^{-1}\frac{ \mathrm{d}\mathbf{R}_{\alpha}^{*}}{\mathrm{d}t}\] \[=-\frac{\mathrm{d}\mathbf{R}_{\alpha}}{\mathrm{d}t}\tilde{\mathbf{w}} _{\alpha}\mathbf{R}_{\alpha}^{*}-\mathbf{R}_{\alpha}\tilde{\mathbf{w}}_{\alpha}\frac{ \mathrm{d}\mathbf{R}_{\alpha}^{*}}{\mathrm{d}t}\] if one lets \(\tilde{\mathbf{w}}_{\alpha}=\mathbf{R}_{\alpha}^{-1}\mathbf{w}_{\alpha}(\mathbf{R}_{ \alpha}^{*})^{-1}\), and owing to Eq. (48) the transport equation (19) reads: \[\frac{\mathrm{d}\mathbf{w}_{\alpha}}{\mathrm{d}t}-\frac{\mathrm{d}\mathbf{R}_{ \alpha}}{\mathrm{d}t}\tilde{\mathbf{w}}_{\alpha}\mathbf{R}_{\alpha}^{*}-\mathbf{R}_{ \alpha}\tilde{\mathbf{w}}_{\alpha}\frac{\mathrm{d}\mathbf{R}_{\alpha}^{*}}{\mathrm{ d}t}=\mathbf{0}\,,\] or: \[\frac{\mathrm{d}}{\mathrm{d}t}\tilde{\mathbf{w}}_{\alpha}(\mathbf{x}_{\alpha}(t), \mathbf{k}_{\alpha}(t),t)=\mathbf{0}\,. \tag{50}\] Consequently: \[\mathbf{w}_{\alpha}(\mathbf{x}_{\alpha}(t),\mathbf{k}_{\alpha}(t),t)=\mathbf{R}_{\alpha}(t )\mathbf{w}_{\alpha}^{I}(\mathbf{x}_{0},\mathbf{k}_{0})\mathbf{R}_{\alpha}^{*}(t) \tag{51}\] where \(\mathbf{w}_{\alpha}^{I}=\mathbf{c}_{\alpha}^{*}\mathbf{W}_{I}\mathbf{c}_{\alpha}\), and one comes down to solving Eqs. (48)-(49) in order to track \(\mathbf{w}_{\alpha}\) on its bicharacteristics \(t\mapsto(\mathbf{x}_{\alpha}(t),\mathbf{k}_{\alpha}(t))\). 
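As a rough numerical illustration of Eqs. (48)-(51), the following minimal sketch (in Python, using `scipy`) integrates the bicharacteristic rays for a scalar dispersion relation \(\omega(\boldsymbol{x},\boldsymbol{k})=c_{0}(\boldsymbol{x})|\boldsymbol{k}|\) with a smoothly varying speed \(c_{0}\), and propagates a specific intensity along the ray with a constant scalar absorption standing in for \(\boldsymbol{\ell}_{\alpha}+\boldsymbol{n}_{\alpha}\); the speed profile and the absorption value are illustrative choices, not taken from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy, slowly varying propagation speed c0(x) and its gradient.
def c0(x):
    return 1.0 + 0.2 * np.sin(x[0])

def grad_c0(x):
    return np.array([0.2 * np.cos(x[0]), 0.0, 0.0])

# Hamiltonian omega(x, k) = c0(x) |k| and Hamilton's equations (48).
def rhs(t, y):
    x, k = y[:3], y[3:]
    nk = np.linalg.norm(k)
    dxdt = c0(x) * k / nk        # grad_k omega
    dkdt = -grad_c0(x) * nk      # -grad_x omega
    return np.concatenate([dxdt, dkdt])

x0 = np.zeros(3)
k0 = np.array([1.0, 0.5, 0.0])
sol = solve_ivp(rhs, (0.0, 5.0), np.concatenate([x0, k0]), max_step=0.01)

# Eq. (49)-(51): with a constant scalar absorption ell >= 0 standing in for
# (l_alpha + n_alpha), the resolvent R(t) reduces to exp(-ell * t) and the
# specific intensity along the ray is w(x(t), k(t), t) = R(t) w_I R(t).
ell = 0.05
R = np.exp(-ell * sol.t)
w_I = 1.0
w_along_ray = R * w_I * R

print(sol.y[:3, -1], w_along_ray[-1])
```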
**Example 2**.: _For the Lorentz model with damping (6) one has three eigenvalues \(\omega_{0}=0\) (\(\alpha=\)"0"), \(\omega_{+}(\mathbf{k})=+c_{0}|\mathbf{k}|\) (\(\alpha=\)"+"), and \(\omega_{-}(\mathbf{k})=-c_{0}|\mathbf{k}|\) (\(\alpha=\)"-"), each of multiplicity two (\(A=2\)), \(c_{0}=1/\sqrt{\epsilon_{0}\mu_{0}}\), with associated eigenvectors [47]:_ \[\mathbf{b}_{0}^{(1)}(\hat{\mathbf{k}})=\frac{1}{\sqrt{\epsilon_{0}}}\begin{pmatrix}\hat{\mathbf{k}}\\ \mathbf{0}\end{pmatrix}\,,\quad\mathbf{b}_{0}^{(2)}(\hat{\mathbf{k}})=\frac{1}{\sqrt{\mu_{0}}}\begin{pmatrix}\mathbf{0}\\ \hat{\mathbf{k}}\end{pmatrix}\,,\] _and for \(a=1,2\):_ \[\mathbf{b}_{\pm}^{a}(\hat{\mathbf{k}})=\begin{pmatrix}\frac{\hat{\mathbf{e}}_{a}(\hat{\mathbf{k}})}{\sqrt{2\epsilon_{0}}}\\ \pm\frac{\hat{\mathbf{k}}\times\hat{\mathbf{e}}_{a}(\hat{\mathbf{k}})}{\sqrt{2\mu_{0}}}\end{pmatrix}\,,\quad\hat{\mathbf{e}}_{1}(\hat{\mathbf{k}})\perp\hat{\mathbf{e}}_{2}(\hat{\mathbf{k}})\in\hat{\mathbf{k}}^{\perp}\,, \tag{52}\] _such that \((\hat{\mathbf{k}},\hat{\mathbf{e}}_{1}(\hat{\mathbf{k}}),\hat{\mathbf{e}}_{2}(\hat{\mathbf{k}}))\) form a right-handed orthonormal system. Besides \(\mathbf{\ell}_{0}\mathbf{w}_{0}=\mathbf{0}\) since \(\omega=-\omega_{0}=0\) on the support of \(\mathbf{w}_{0}\), and:_ \[\mathbf{\ell}_{\pm}=\frac{\mathrm{i}\omega\widehat{\epsilon}_{1}(\omega)}{2}\mathbf{I}_{2}\,,\] _such that for the propagating modes \(\omega_{\pm}(\mathbf{k})=\pm c_{0}|\mathbf{k}|\):_ \[\begin{split}\mathbf{\ell}_{\pm}\mathbf{w}_{\pm}+\mathbf{w}_{\pm}\mathbf{\ell}_{\pm}^{*}&=\Re\mathrm{e}\{\mathrm{i}\omega\widehat{\epsilon}_{1}(\omega)\}\mathbf{w}_{\pm}\\ &=\frac{\omega_{p}^{2}c_{0}^{2}|\mathbf{k}|^{2}\Gamma}{(\omega_{0}^{2}-c_{0}^{2}|\mathbf{k}|^{2})^{2}+c_{0}^{2}|\mathbf{k}|^{2}\Gamma^{2}}\mathbf{w}_{\pm}\\ &=\tilde{\Gamma}(c_{0}|\mathbf{k}|)\mathbf{w}_{\pm}\,.\end{split} \tag{53}\] _Also \(\mathbf{n}_{\pm}=\mathbf{0}\) for a homogeneous material. Therefore the solution of Eq. (47) reads:_ \[\mathbf{w}_{\pm}(\mathbf{x},\mathbf{k},t)=\mathrm{e}^{-\tilde{\Gamma}(c_{0}|\mathbf{k}|)t}\,\mathbf{w}_{I}(\mathbf{x}\mp c_{0}\hat{\mathbf{k}}t,\mathbf{k}) \tag{54}\] _where \(\mathbf{w}_{I}(\mathbf{x},\mathbf{k})\), \((\mathbf{x},\mathbf{k})\in\mathcal{O}\times\mathbb{R}^{3}\), are the initial (incident) specific intensities supported in some subset of the physical domain \(\mathcal{O}\). Note that similar results were obtained in [5, 43] for memoryless viscoelastic media. Recently a semi-linear radiative transfer model has been derived in media with nonlinear absorption [33]._

## 4. Radiative transfer

In this section the radiative transfer equations (23) for a bianisotropic dielectric medium are now derived. They describe the evolution of the high-frequency electromagnetic energy density accounting for random fluctuations of the optical response. We resort to the same formal pseudo-differential calculus as recalled in Sect. 3.1. However, additional rules are needed to account for the dependency of the optical response on both the slow scale of variation of the background medium, and the fast scale of variation of the random fluctuations. In Sect. 4.1 below we start by rewriting Maxwell's system as a pseudo-differential operator when these random fluctuations are considered. In Sect. 4.2 we give the additional rules of pseudo-differential calculus for this situation, and we finally obtain in Sect. 4.3 the radiative transfer equations by a multiscale analysis.
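Before turning to the random perturbations of the optical response, the following minimal sketch illustrates how a real, homogeneous (stationary) fluctuation field with a prescribed power spectral density -- a scalar stand-in for one entry of \(\boldsymbol{V}\) -- can be synthesized on a periodic grid by filtering white noise in the Fourier domain. The Gaussian spectrum, correlation length and amplitude below are assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Periodic grid of size Lbox^3 with n^3 points; correlation length ell_c.
n, Lbox, ell_c, sigma = 64, 10.0, 0.5, 0.1
dx = Lbox / n
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
K2 = KX**2 + KY**2 + KZ**2

# Assumed Gaussian power spectral density, normalized so that the variance
# of the synthesized field equals sigma^2.
psd = np.exp(-0.5 * ell_c**2 * K2)
psd *= sigma**2 / (psd.sum() / n**3)

# Spectral synthesis: hat(V) = sqrt(PSD) * hat(white noise).
white = rng.standard_normal((n, n, n))
V = np.fft.ifftn(np.sqrt(psd) * np.fft.fftn(white)).real

print(V.std())  # close to sigma, the relative fluctuation amplitude
```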
### Maxwell's equations with random perturbations

In view of the scattering regime of Eqs. (21)-(22), the rescaled optical response \(\mathbf{K}(\mathbf{x})\) in Eq. (10) is written: \[\mathbf{K}(\mathbf{x})=\mathbf{K}_{0}(\mathbf{x})\left[\mathbf{I}+\sqrt{\varepsilon}\mathbf{V}\left(\frac{\mathbf{x}}{\varepsilon}\right)\right]\,.\] The power spectral density matrix \(\widehat{\mathbf{R}}\) of the homogeneous random field (\(\mathbf{V}(\mathbf{y})\), \(\mathbf{y}\in\mathbb{R}^{3}\)) is given by Eq. (13) and satisfies: \[\begin{split}\mathbb{E}\left\{\widehat{\mathbf{V}}(\mathbf{p})\otimes\widehat{\mathbf{V}}(\mathbf{q})\right\}&=(2\pi)^{6}\widehat{\mathbf{R}}(\mathbf{p})\delta(\mathbf{p}+\mathbf{q})\,,\\ \mathbb{E}\left\{\widehat{\mathbf{V}}(\mathbf{p})\otimes\overline{\widehat{\mathbf{V}}(\mathbf{q})}\right\}&=(2\pi)^{6}\widehat{\mathbf{S}}(\mathbf{p})\delta(\mathbf{p}-\mathbf{q})\,,\end{split} \tag{55}\] for the Fourier transform \(\widehat{\mathbf{V}}\) of \(\mathbf{V}\); see Eq. (26). Besides, homogeneity (spatial stationarity) also implies that: \[\widehat{R}_{jklm}(\mathbf{p})=\widehat{R}_{lmjk}(-\mathbf{p})\,,\quad 1\leq j,k,l,m\leq 6\,. \tag{56}\] We recall that in order to preserve the Hermiticity of the actual optical response \(\mathbf{K}\), it is assumed that \(\mathbf{K}_{0}(\mathbf{x})\mathbf{V}(\mathbf{y})=\mathbf{V}(\mathbf{y})^{*}\mathbf{K}_{0}(\mathbf{x})\) for all \(\mathbf{x}\in\mathcal{O}\), \(\mathbf{y}=\frac{\mathbf{x}}{\varepsilon}\). The pseudo-differential operator \(\mathbf{L}_{\varepsilon}\) of Eq. (33) is then modified to: \[\mathcal{L}_{\varepsilon}\left(\mathbf{s},\frac{\mathbf{x}}{\varepsilon},\mathbf{\xi}\right)=\mathbf{L}_{\varepsilon}(\mathbf{s},\mathbf{\xi})+\sqrt{\varepsilon}\mathbf{L}_{0.5}\left(\frac{\mathbf{x}}{\varepsilon},\omega\right)\,, \tag{57}\] where again \(\mathbf{s}=(\mathbf{x},t)\), \(\mathbf{\xi}=(\mathbf{k},\omega)\), and \(\mathbf{L}_{0.5}(\mathbf{y},\omega)=\omega\mathbf{V}(\mathbf{y})\). This additional term arises from the random fluctuations of the optical response. Since it depends on the fast variable \(\mathbf{y}=\frac{\mathbf{x}}{\varepsilon}\), the rules (30) and (31) have to be adapted.

### Rules of pseudo-differential calculus with oscillating coefficients

The rules (30) and (31) hold true if the Wigner transform \(\mathbf{W}_{\varepsilon}[\mathbf{u}_{\varepsilon},\mathbf{v}_{\varepsilon}]\) does not depend on \(\frac{\mathbf{s}}{\varepsilon}\). If, however, it depends on this fast variable, they have to be modified accordingly. Let \(\mathbf{\tau}\mapsto\mathbf{A}(\mathbf{\tau})\) be a smooth matrix-valued function. Then we have: \[\begin{split}\mathbf{W}_{\varepsilon}\left[\mathbf{A}\left(\frac{\mathbf{s}}{\varepsilon}\right)\mathbf{u}_{\varepsilon},\mathbf{v}_{\varepsilon}\right]&=\frac{1}{(2\pi)^{4}}\int_{\mathbb{R}^{4}}\mathrm{e}^{\mathrm{i}\frac{\mathbf{s}}{\varepsilon}\cdot\mathbf{\eta}}\,\widehat{\mathbf{A}}(\mathbf{\eta})\mathbf{W}_{\varepsilon}[\mathbf{u}_{\varepsilon},\mathbf{v}_{\varepsilon}]\left(\mathbf{s},\mathbf{\xi}-\mathbf{\eta}\right)\mathrm{d}\mathbf{\eta}\,,\\ \mathbf{W}_{\varepsilon}\left[\mathbf{u}_{\varepsilon},\mathbf{A}\left(\frac{\mathbf{s}}{\varepsilon}\right)\mathbf{v}_{\varepsilon}\right]&=\mathbf{W}_{\varepsilon}\left[\mathbf{u}_{\varepsilon},\mathbf{v}_{\varepsilon}\right]\mathbf{A}\left(\frac{\mathbf{s}}{\varepsilon}\right)^{*}\,,\end{split} \tag{58}\] where \(\widehat{\mathbf{A}}(\mathbf{\eta})\) is the Fourier transform (26) of \(\mathbf{A}(\mathbf{\tau})\).
Applying these formulas for a matrix-valued observable \(\mathbf{A}(\frac{\mathbf{s}}{\varepsilon})\mathbf{L}(\mathbf{s},\mathbf{\xi})\) yields: \[\mathbf{W}_{\varepsilon}\left[\mathbf{A}\left(\frac{\mathbf{s}}{\varepsilon} \right)\mathbf{L}(\mathbf{s},\varepsilon\mathrm{D})\mathbf{u}_{\varepsilon},\mathbf{v}_{ \varepsilon}\right]\\ =\int\frac{\mathrm{e}^{\mathrm{i}\frac{\mathbf{s}}{\varepsilon}\cdot \mathbf{\eta}}\,\mathrm{d}\mathbf{\eta}}{(2\pi)^{4}}\widehat{\mathbf{A}}(\mathbf{\eta})\mathbf{L}( \mathbf{s},\mathbf{\xi}-\mathbf{\eta})W_{\varepsilon}[\mathbf{u}_{\varepsilon},\mathbf{v}_{ \varepsilon}]\left(\mathbf{s},\mathbf{\xi}-\mathbf{\eta}\right)+\mathrm{O}(\varepsilon)\,, \tag{59}\] and: \[\mathbf{W}_{\varepsilon}\left[\mathbf{u}_{\varepsilon},\mathbf{A}\left(\frac{\mathbf{s}}{ \varepsilon}\right)\mathbf{L}(\mathbf{s},\varepsilon\mathrm{D})\mathbf{v}_{\varepsilon} \right]=\left(\mathbf{A}\left(\frac{\mathbf{s}}{\varepsilon}\right)\mathbf{L}(\mathbf{s}, \mathbf{\xi}-\varepsilon\mathrm{D})\mathbf{W}_{\varepsilon}\left[\mathbf{u}_{\varepsilon },\mathbf{v}_{\varepsilon}\right]\right)^{*}, \tag{60}\] owing to Eq. (30) and Eq. (31). The proofs of these formulas are given in [2, Appendix C] for scalar fields. Their extensions to vector fields \(\mathbf{u}_{\varepsilon}\) and \(\mathbf{v}_{\varepsilon}\) are straightforward. Consequently one has: \[\begin{split}\mathbf{W}_{\varepsilon}\left[\mathbf{L}_{0.5}\left(\frac{ \mathbf{x}}{\varepsilon},\varepsilon\mathrm{D}_{t}\right)\mathbf{u}_{\varepsilon},\mathbf{ u}_{\varepsilon}\right]&=\frac{\omega}{(2\pi)^{3}}\int_{\mathbb{R}^{3}} \mathrm{e}^{\mathrm{i}\frac{\mathbf{s}}{\varepsilon}\cdot\mathbf{p}}\,\widehat{\mathbf{V}}( \mathbf{p})\mathbf{W}_{\varepsilon}[\mathbf{u}_{\varepsilon}](\mathbf{s},\mathbf{k}-\mathbf{p},\omega) \mathrm{d}\mathbf{p}\,,\\ \mathbf{W}_{\varepsilon}\left[\mathbf{u}_{\varepsilon},\mathbf{L}_{0.5}\left( \frac{\mathbf{x}}{\varepsilon},\varepsilon\mathrm{D}_{t}\right)\mathbf{u}_{ \varepsilon}\right]&=(\omega-\varepsilon\mathrm{D}_{t})\mathbf{W}_{ \varepsilon}[\mathbf{u}_{\varepsilon}](\mathbf{s},\mathbf{\xi})\mathbf{V}\left(\frac{\mathbf{x}} {\varepsilon}\right)^{*}.\end{split} \tag{61}\] Considering again the rules (30) and (31) one obtains that the Wigner transform \(\mathbf{W}_{\varepsilon}[\mathbf{u}_{\varepsilon}]\) satisfies now: \[\mathbf{L}_{0}(\mathbf{x},\mathbf{k})\mathbf{W}_{\varepsilon}[\mathbf{u}_{\varepsilon} ]-(\mathbf{L}_{0}(\mathbf{x},\mathbf{k}-\varepsilon\mathbf{D}_{\mathbf{x}})\mathbf{W}_{\varepsilon }[\mathbf{u}_{\varepsilon}])^{*}\\ +\sqrt{\varepsilon}\left(\frac{\omega}{(2\pi)^{3}}\int_{\mathbb{R }^{3}}\mathrm{e}^{\mathrm{i}\frac{\mathbf{n}}{\varepsilon}\cdot\mathbf{p}}\,\widehat{ \mathbf{V}}(\mathbf{p})\mathbf{W}_{\varepsilon}[\mathbf{u}_{\varepsilon}](\mathbf{s},\mathbf{k}-\mathbf{p},\omega)\mathrm{d}\mathbf{p}-\omega\mathbf{W}_{\varepsilon}[\mathbf{u}_{\varepsilon}]\mathbf{V }\left(\frac{\mathbf{x}}{\varepsilon}\right)^{*}\right)\\ +\frac{\varepsilon}{\mathrm{i}}\left(\partial_{t}\mathbf{W}_{ \varepsilon}[\mathbf{u}_{\varepsilon}]-\mathbf{\nabla}_{\mathbf{x}}\mathbf{L}_{0}\cdot\mathbf{ \nabla}_{\mathbf{k}}\mathbf{W}_{\varepsilon}[\mathbf{u}_{\varepsilon}]-(\mathbf{\nabla}_{\mathbf{ x}}\cdot\mathbf{\nabla}_{\mathbf{k}}\mathbf{L}_{0})\mathbf{W}_{\varepsilon}[\mathbf{u}_{ \varepsilon}]+\mathbf{\nabla}_{\mathbf{x}}\mathbf{W}_{\varepsilon}[\mathbf{u}_{\varepsilon}] \cdot\mathbf{\nabla}_{\mathbf{k}}\mathbf{L}_{0}^{*}\right)\\ +\frac{\varepsilon}{\mathrm{i}}\mathbf{L}_{1}\mathbf{W}_{\varepsilon}[ 
\mathbf{u}_{\varepsilon}]+\frac{\varepsilon}{\mathrm{i}}\mathbf{W}_{\varepsilon}[ \mathbf{u}_{\varepsilon}]\mathbf{L}_{1}^{*}=\mathrm{O}(\varepsilon^{\frac{3}{2}})\,. \tag{62}\] ### Multiscale expansion Because of the two-scale dependency of \(\mathcal{L}_{\varepsilon}\) in terms of the slow variable \(\mathbf{x}\) and the fast variable \(\mathbf{y}=\frac{\mathbf{x}}{\varepsilon}\), we expand \(\mathbf{W}_{\varepsilon}[\mathbf{u}_{\varepsilon}]\) as: \[\mathbf{W}_{\varepsilon}[\mathbf{u}_{\varepsilon}]\left(\mathbf{s},\mathbf{\xi}\right)=\mathbf{W} _{0}(\mathbf{s},\mathbf{\xi})+\sqrt{\varepsilon}\mathbf{W}_{0.5}(\mathbf{s},\mathbf{y},\mathbf{\xi})+ \varepsilon\mathbf{W}_{1}(\mathbf{s},\mathbf{y},\mathbf{\xi})+\mathrm{o}(\varepsilon)\,, \tag{63}\] and replace \(\varepsilon\mathbf{D}_{\mathbf{x}}\) by \(\varepsilon\mathbf{D}_{\mathbf{x}}+\mathbf{D}_{\mathbf{y}}\) in Eq. (62). The \(0\)-th order equation (37) is recovered for \(\mathbf{W}_{0}\) which is independent of the fast variable \(\mathbf{y}\), so that it is given by Eq. (17) alike: \[\mathbf{W}_{0}(\mathbf{s},\mathbf{k},\omega)=\sum_{\alpha}\delta(\omega+\omega_{\alpha}( \mathbf{x},\mathbf{k}))\mathbf{W}_{\alpha}(\mathbf{s},\mathbf{k})\,. \tag{64}\] The \(\mathrm{O}(\varepsilon^{\frac{1}{2}})\) half-order terms yield: \[\mathbf{L}_{0}(\mathbf{x},\mathbf{k})\mathbf{W}_{0.5}(\mathbf{s},\mathbf{y},\mathbf{\xi})-( \mathbf{L}_{0}(\mathbf{x},\mathbf{k}-\mathbf{D}_{\mathbf{y}})\mathbf{W}_{0.5})^{*}(\mathbf{s},\mathbf{y}, \mathbf{\xi})\\ +\frac{\omega}{(2\pi)^{3}}\int_{\mathbb{R}^{3}}\mathrm{e}^{ \mathrm{i}\mathbf{y}\cdot\mathbf{p}}\,\widehat{\mathbf{V}}(\mathbf{p})\mathbf{W}_{0}(\mathbf{s},\mathbf{k }-\mathbf{p},\omega)\mathrm{d}\mathbf{p}-\omega\mathbf{W}_{0}(\mathbf{s},\mathbf{k},\omega)\mathbf{V }(\mathbf{y})^{*}=\mathbf{0}\,. \tag{65}\] Taking the Fourier transform with respect to \(\mathbf{y}\) of the above equation yields: \[\mathbf{L}_{0}(\mathbf{x},\mathbf{k})\widehat{\mathbf{W}}_{0.5}(\mathbf{s},\mathbf{p},\bm {k},\omega)-\widehat{\mathbf{W}}_{0.5}(\mathbf{s},\mathbf{p},\mathbf{k},\omega)\mathbf{L}_{0}^{*}( \mathbf{x},\mathbf{k}-\mathbf{p})\\ +\omega\widehat{\mathbf{V}}(\mathbf{p})\mathbf{W}_{0}(\mathbf{s},\mathbf{k}-\mathbf{p}, \omega)-\omega\mathbf{W}_{0}(\mathbf{s},\mathbf{k},\omega)\widehat{\mathbf{V}}(-\mathbf{p})^{*} =\mathbf{0}\,. \tag{66}\] The corrector \(\widehat{\mathbf{W}}_{0.5}\) is then expanded as (we remove the \(\mathbf{s}\) and \(\omega\) dependences from now on for clarity purposes): \[\widehat{\mathbf{W}}_{0.5}(\mathbf{p},\mathbf{k})=\sum_{\alpha,\beta}\mathbf{b}_{\alpha}(\mathbf{ k})\widehat{\mathbf{w}}_{\alpha\beta}(\mathbf{p},\mathbf{k})\mathbf{b}_{\beta}^{*}(\mathbf{k}-\mathbf{p} )\,,\] where from Eq. (66) one deduces that: \[\widehat{\mathbf{w}}_{\alpha\beta}(\mathbf{p},\mathbf{k})=\mathbf{c}_{\alpha}^{*}(\mathbf{k}) \frac{\omega_{\beta}(\mathbf{k}-\mathbf{p})\widehat{\mathbf{V}}(\mathbf{p})\mathbf{W}_{\beta}(\mathbf{ k}-\mathbf{p})-\omega_{\alpha}(\mathbf{k})\mathbf{W}_{\alpha}(\mathbf{k})\widehat{\mathbf{V}}(-\mathbf{p})^{*} }{\omega_{\alpha}(\mathbf{k})-\omega_{\beta}(\mathbf{k}-\mathbf{p})-\mathrm{i}\theta}\mathbf{c }_{\beta}(\mathbf{k}-\mathbf{p})\,.\] Here it is implicitly understood from Eq. (64) that \(\omega=-\omega_{\beta}(\mathbf{k}-\mathbf{p})\) on the support of \(\mathbf{W}_{\beta}(\mathbf{k}-\mathbf{p})\) and that \(\omega=-\omega_{\alpha}(\mathbf{k})\) on the support of \(\mathbf{W}_{\alpha}(\mathbf{k})\), while these terms vanish away from these supports. 
Also the regularization parameter \(\theta\in\mathbb{R}\) is introduced to avoid the case \(\omega_{\alpha}(\mathbf{k})=\omega_{\beta}(\mathbf{k}-\mathbf{p})\) for the time being. It will eventually be sent to \(0\) at the end of the derivation. Therefore one has:2 \[\widehat{\mathbf{W}}_{0.5}(\mathbf{p},\mathbf{k})=\\ \sum_{\alpha,\beta}\frac{\omega_{\beta}(\mathbf{k}-\mathbf{p})\mathbf{\Pi}_{\alpha}(\mathbf{k})\widehat{\mathbf{V}}(\mathbf{p})\mathbf{W}_{\beta}(\mathbf{k}-\mathbf{p})-\omega_{\alpha}(\mathbf{k})\mathbf{W}_{\alpha}(\mathbf{k})\widehat{\mathbf{V}}(-\mathbf{p})^{*}\mathbf{\Pi}_{\beta}(\mathbf{k}-\mathbf{p})^{*}}{\omega_{\alpha}(\mathbf{k})-\omega_{\beta}(\mathbf{k}-\mathbf{p})-\mathrm{i}\theta}\,. \tag{67}\] At last, the \(\mathrm{O}(\varepsilon)\) terms yield: \[\partial_{t}\mathbf{W}_{0}-\mathbf{\nabla}_{\mathbf{x}}\mathbf{L}_{0}\cdot\mathbf{\nabla}_{\mathbf{k}}\mathbf{W}_{0}-(\mathbf{\nabla}_{\mathbf{x}}\cdot\mathbf{\nabla}_{\mathbf{k}}\mathbf{L}_{0})\mathbf{W}_{0}+\mathbf{\nabla}_{\mathbf{x}}\mathbf{W}_{0}\cdot\mathbf{\nabla}_{\mathbf{k}}\mathbf{L}_{0}^{*}+\mathbf{L}_{1}\mathbf{W}_{0}+\mathbf{W}_{0}\mathbf{L}_{1}^{*}\\ +\frac{\mathrm{i}\omega}{(2\pi)^{3}}\int_{\mathbb{R}^{3}}\mathrm{e}^{\mathrm{i}\mathbf{y}\cdot\mathbf{p}}\,\widehat{\mathbf{V}}(\mathbf{p})\mathbf{W}_{0.5}(\mathbf{y},\mathbf{k}-\mathbf{p})\mathrm{d}\mathbf{p}-\mathrm{i}\omega\mathbf{W}_{0.5}(\mathbf{y},\mathbf{k})\mathbf{V}(\mathbf{y})^{*}\\ +\mathrm{i}\mathbf{L}_{0}(\mathbf{k})\mathbf{W}_{1}+\left(\mathrm{i}\mathbf{L}_{0}(\mathbf{k}-\mathbf{D}_{\mathbf{y}})\mathbf{W}_{1}\right)^{*}=\mathbf{0}\,. \tag{68}\] Averaging Eq. (68) with respect to the fast variable \(\mathbf{y}\), projecting it onto the mode \(\alpha\), and inserting the expression (67) of the corrector \(\widehat{\mathbf{W}}_{0.5}\), the terms stemming from the random fluctuations give rise to the two contributions \(\mathcal{I}_{1}=\mathcal{I}_{11}+\mathcal{I}_{12}\) and \(\mathcal{I}_{2}\) of Eq. (69) and Eq. (70), respectively, whose expectations are now computed. Using the change of variable \(\mathbf{k}-\mathbf{p}\to\mathbf{p}\), the average of the first sum in Eq.
(69) reads: \[\mathbb{E}\left\{\mathcal{I}_{11}\right\} =\sum_{\beta}\iint\mathrm{e}^{\mathrm{i}\mathbf{y}\cdot(\mathbf{p}+\mathbf{q})} \,\frac{\omega_{\beta}^{2}(\mathbf{k}-\mathbf{p})\mathbb{E}\left\{\mathbf{s}_{\alpha \beta}(\mathbf{k},\mathbf{p},\mathbf{k}-\mathbf{p})\mathbf{w}_{\beta}(\mathbf{k}-\mathbf{p})\mathbf{s}_ {\beta\alpha}(\mathbf{k}-\mathbf{p},\mathbf{q},\mathbf{k})\right\}}{\mathrm{i}(\omega_{\alpha} (\mathbf{k})-\omega_{\beta}(\mathbf{k}-\mathbf{p}))+\theta}\frac{\mathrm{d}\mathbf{p}\mathrm{ d}\mathbf{q}}{(2\pi)^{6}}\] \[=\sum_{\beta}\int\frac{\omega_{\beta}^{2}(\mathbf{k}-\mathbf{p})\widehat{ \mathbf{R}}_{\alpha\beta}(\mathbf{k},\mathbf{k}-\mathbf{p}):\mathbb{E}\left\{\mathbf{w}_{ \beta}(\mathbf{k}-\mathbf{p})\right\}}{\mathrm{i}(\omega_{\alpha}(\mathbf{k})-\omega_{ \beta}(\mathbf{k}-\mathbf{p}))+\theta}\mathrm{d}\mathbf{p}\] \[=\sum_{\beta}\int\frac{\omega_{\beta}^{2}(\mathbf{p})\widehat{\mathbf{R} }_{\alpha\beta}(\mathbf{k},\mathbf{p}):\mathbb{E}\left\{\mathbf{w}_{\beta}(\mathbf{p}) \right\}}{\mathrm{i}(\omega_{\alpha}(\mathbf{k})-\omega_{\beta}(\mathbf{p}))+\theta} \mathrm{d}\mathbf{p}\] where the linear operator \(\widehat{\mathbf{R}}_{\alpha\beta}\) is: \[[\widehat{\mathbf{R}}_{\alpha\beta}(\mathbf{k},\mathbf{p})]_{aa^{\prime}bb^{\prime}}:= \overline{\mathbf{c}_{\alpha}^{a}(\mathbf{k})}\otimes\mathbf{b}_{\beta}^{b}(\mathbf{p}): \widehat{\mathbf{R}}(\mathbf{k}-\mathbf{p}):\overline{\mathbf{c}_{\beta}^{b^{\prime}}(\mathbf{p}) }\otimes\mathbf{b}_{\alpha}^{a^{\prime}}(\mathbf{k})\,,\] for \(1\leq a,a^{\prime}\leq A\) and \(1\leq b,b^{\prime}\leq B\), \(A\) and \(B\) being the orders of multiplicity of the modes \(\alpha\) and \(\beta\), respectively.3 Here we have used a mixing assumption by which expectations over products of \(\widehat{\mathbf{V}}\) and over \(\mathbf{w}_{\beta}\) are independent and get decoupled, because both quantities vary on different scales. Now using the changes of variable \(-\mathbf{p}\to\mathbf{p}\), and then \(\mathbf{k}+\mathbf{p}\to\mathbf{p}\), the average of the second sum in Eq. (69) reads: Footnote 3: This result should be compared with [18, Eq. (42)] or [19, Eq. 
(42)] where no coupling between modes occurs and this operator rather reads: \[[\widehat{\mathbf{R}}_{\alpha}(\mathbf{k},\mathbf{p})]_{aa^{\prime}bb^{\prime}}:= \overline{\mathbf{c}_{\alpha}^{a}(\mathbf{k})}\otimes\mathbf{b}_{\alpha}^{b}(\mathbf{p}): \widehat{\mathbf{R}}(\mathbf{k}-\mathbf{p}):\mathbf{c}_{\alpha}^{a^{\prime}}(\mathbf{k})\otimes \overline{\mathbf{b}_{\alpha}^{b^{\prime}}(\mathbf{p})}\,.\] \[\mathbb{E}\left\{\mathcal{I}_{12}\right\} =\omega_{\alpha}^{2}(\mathbf{k})\sum_{\beta}\iint\mathrm{e}^{\mathrm{ i}\mathbf{y}\cdot(\mathbf{q}-\mathbf{p})}\,\frac{\mathbb{E}\left\{\mathbf{s}_{\alpha \beta}(\mathbf{k},\mathbf{q},\mathbf{k}-\mathbf{p})\mathbf{s}_{\beta\alpha}(\mathbf{k}-\mathbf{p},- \mathbf{p},\mathbf{k})\right\}}{\mathrm{i}(\omega_{\beta}(\mathbf{k}-\mathbf{p})-\omega_{ \alpha}(\mathbf{k}))+\theta}\frac{\mathrm{d}\mathbf{p}\mathrm{d}\mathbf{q}}{(2\pi)^{6}}\] \[=\omega_{\alpha}^{2}(\mathbf{k})\sum_{\beta}\iint\mathrm{e}^{\mathrm{ i}\mathbf{y}\cdot(\mathbf{q}+\mathbf{p})}\,\frac{\mathbb{E}\left\{\mathbf{s}_{\alpha \beta}(\mathbf{k},\mathbf{q},\mathbf{k}+\mathbf{p})\mathbf{s}_{\beta\alpha}(\mathbf{k}+\mathbf{p},\bm {p},\mathbf{k})\right\}}{\mathrm{i}(\omega_{\beta}(\mathbf{k}+\mathbf{p})-\omega_{\alpha}( \mathbf{k}))+\theta}\frac{\mathrm{d}\mathbf{p}\mathrm{d}\mathbf{q}}{(2\pi)^{6}}\] \[=\sum_{\beta}\int\frac{\omega_{\alpha}^{2}(\mathbf{k})\widehat{\mathbf{R} }_{\alpha\beta}(\mathbf{k},\mathbf{k}+\mathbf{p}):\mathbf{I}_{B}}{\mathrm{i}(\omega_{\beta}( \mathbf{k}+\mathbf{p})-\omega_{\alpha}(\mathbf{k}))+\theta}\mathrm{d}\mathbf{p}\] \[=\sum_{\beta}\int\frac{\omega_{\alpha}^{2}(\mathbf{k})\widehat{\mathbf{R} }_{\alpha\beta}(\mathbf{k},\mathbf{p}):\mathbf{I}_{B}}{\mathrm{i}(\omega_{\beta}(\mathbf{p})- \omega_{\alpha}(\mathbf{k}))+\theta}\mathrm{d}\mathbf{p}\,. \tag{71}\] Besides, taking the average of Eq. (70), one has with Eq. (55): \[\mathbb{E}\left\{\mathcal{I}_{2}\right\} =\left(\sum_{\gamma}\int\frac{\omega_{\alpha}^{2}(\mathbf{k})\widehat{ \mathbf{R}}_{\alpha\gamma}(\mathbf{k},\mathbf{k}-\mathbf{p}):\mathbf{I}_{G}}{\mathrm{i}(\omega_{ \gamma}(\mathbf{k}-\mathbf{p})-\omega_{\alpha}(\mathbf{k}))+\theta}\mathrm{d}\mathbf{p}\right) \mathbb{E}\left\{\mathbf{w}_{\alpha}(\mathbf{k})\right\}\] \[\qquad-\sum_{\gamma}\int\frac{\omega_{\gamma}^{2}(\mathbf{k}-\mathbf{p}) \widehat{\mathbf{R}}_{\alpha\gamma}(\mathbf{k},\mathbf{k}-\mathbf{p}):\mathbb{E}\left\{ \mathbf{w}_{\gamma}(\mathbf{k}-\mathbf{p})\right\}}{\mathrm{i}(\omega_{\gamma}(\mathbf{k}- \mathbf{p})-\omega_{\alpha}(\mathbf{k}))+\theta}\mathrm{d}\mathbf{p}\] \[=\left(\sum_{\gamma}\int\frac{\omega_{\alpha}^{2}(\mathbf{k})\widehat{ \mathbf{R}}_{\alpha\gamma}(\mathbf{k},\mathbf{p}):\mathbf{I}_{G}}{\mathrm{i}(\omega_{\gamma}(\bm {p})-\omega_{\alpha}(\mathbf{k}))+\theta}\mathrm{d}\mathbf{p}\right)\mathbb{E}\left\{ \mathbf{w}_{\alpha}(\mathbf{k})\right\}\] \[\qquad-\sum_{\gamma}\int\frac{\omega_{\gamma}^{2}(\mathbf{p})\widehat{ \mathbf{R}}_{\alpha\gamma}(\mathbf{k},\mathbf{p}):\mathbb{E}\left\{\mathbf{w}_{\gamma}(\mathbf{p}) \right\}}{\mathrm{i}(\omega_{\gamma}(\mathbf{p})-\omega_{\alpha}(\mathbf{k}))+\theta} \mathrm{d}\mathbf{p}\,,\] where \(G\) stands for the order of multiplicity of the mode \(\gamma\). The last step is to let \(\theta\to 0\), which is done thank to the (formal) identity \(\frac{1}{\mathrm{i}x+\theta}\to\frac{1}{\mathrm{i}x}+\hat{\theta}\pi\delta(x)\), for \(\hat{\theta}\) being the sign of \(\theta\). 
This yields: \[\mathbb{E}\left\{\mathcal{I}_{1}\right\}-\mathbb{E}\left\{\mathcal{I}_{2} \right\}=\sum_{\beta}\int 2\pi\hat{\theta}\delta(\omega_{\alpha}(\mathbf{k})- \omega_{\beta}(\mathbf{p}))\omega_{\beta}^{2}(\mathbf{p})\widehat{\mathbf{R}}_{\alpha\beta }(\mathbf{k},\mathbf{p}):\mathbb{E}\left\{\mathbf{w}_{\beta}(\mathbf{p})\right\}\mathrm{d} \mathbf{p}\] \[-\mathbb{E}\left\{\mathbf{w}_{\alpha}(\mathbf{k})\right\}\left[\sum_{\beta}\int \left(\frac{1}{\mathrm{i}(\omega_{\beta}(\mathbf{p})-\omega_{\alpha}(\mathbf{k}))}+\pi \hat{\theta}\delta(\omega_{\beta}(\mathbf{p})-\omega_{\alpha}(\mathbf{k}))\right) \omega_{\alpha}^{2}(\mathbf{k})\widehat{\mathbf{R}}_{\alpha\beta}(\mathbf{k},\mathbf{p}):\mathbf{ I}_{B}\mathrm{d}\mathbf{p}\right]^{*}\] \[-\left[\sum_{\beta}\int\left(\frac{1}{\mathrm{i}(\omega_{\beta}(\mathbf{p})- \omega_{\alpha}(\mathbf{k}))}+\pi\hat{\theta}\delta(\omega_{\beta}(\mathbf{p})-\omega_ {\alpha}(\mathbf{k}))\right)\omega_{\alpha}^{2}(\mathbf{k})\widehat{\mathbf{R}}_{\alpha \beta}(\mathbf{k},\mathbf{p}):\mathbf{I}_{B}\mathrm{d}\mathbf{p}\right]\mathbb{E}\left\{ \mathbf{w}_{\alpha}(\mathbf{k})\right\}\,.\] Consider \(\hat{\theta}=1\) (for causality purposes) and let: \[\mathbf{\sigma}_{\alpha\beta}(\mathbf{k},\mathbf{p})=2\pi\omega_{\alpha}(\mathbf{k})\omega_{ \beta}(\mathbf{p})\widehat{\mathbf{R}}_{\alpha\beta}(\mathbf{k},\mathbf{p})\,, \tag{72}\] and: \[\mathbf{\Sigma}_{\alpha}(\mathbf{k})=\frac{1}{2}\sum_{\beta}\int\delta(\omega_{\beta }(\mathbf{p})-\omega_{\alpha}(\mathbf{k}))\mathbf{\sigma}_{\alpha\beta}(\mathbf{k},\mathbf{p}): \mathbf{I}_{B}\mathrm{d}\mathbf{p}-\frac{\mathrm{i}}{2\pi}\sum_{\beta}\oint\frac{\mathbf{ \sigma}_{\alpha\beta}(\mathbf{k},\mathbf{p}):\mathbf{I}_{B}}{\omega_{\beta}(\mathbf{p})-\omega _{\alpha}(\mathbf{k})}\mathrm{d}\mathbf{p}\,, \tag{73}\] where the second integral holds as a Cauchy principal value; then Eq. (68) finally reads as the radiative transfer equations: \[\partial_{t}\mathbb{E}\left\{\mathbf{w}_{\alpha}\right\}+\left\{ \omega_{\alpha},\mathbb{E}\left\{\mathbf{w}_{\alpha}\right\}\right\}+\mathbf{\ell }_{\alpha}\mathbb{E}\left\{\mathbf{w}_{\alpha}\right\}+\mathbb{E}\left\{ \mathbf{w}_{\alpha}\right\}\mathbf{\ell}_{\alpha}^{*}+\mathbf{n}_{\alpha}\mathbb{E} \left\{\mathbf{w}_{\alpha}\right\}-\mathbb{E}\left\{\mathbf{w}_{\alpha} \right\}\mathbf{n}_{\alpha}\] \[=\sum_{\beta}\int\delta(\omega_{\beta}(\mathbf{p})-\omega_{\alpha}( \mathbf{k}))\mathbf{\sigma}_{\alpha\beta}(\mathbf{k},\mathbf{p}):\mathbb{E}\left\{\mathbf{w} _{\beta}(\mathbf{p})\right\}\mathrm{d}\mathbf{p}\] \[-\mathbf{\Sigma}_{\alpha}(\mathbf{k})\mathbb{E}\left\{\mathbf{w}_{\alpha} (\mathbf{k})\right\}-\mathbb{E}\left\{\mathbf{w}_{\alpha}(\mathbf{k})\right\}\mathbf{ \Sigma}_{\alpha}(\mathbf{k})^{*}\,, \tag{74}\] possibly coupling all modes of propagation while keeping the frequency constant: \(\omega_{\beta}(\mathbf{p})=\omega_{\alpha}(\mathbf{k})\) in the scattering processes. This result extends [47, Eq. (4.32)] for isotropic dielectric media to the case of general bianisotropic dielectric media.4 It revisits the derivation in [18, 19] by considering the time-dependent case with damping effects, and possible coupling between the modes. Moreover, it uses Wigner transforms and pseudo-differential calculus in the same standard quantization rather than mixing it with the Weyl quantization. 
Matrix-valued radiative transfer equations of the form (74) can be solved numerically by the Monte-Carlo method, which is based on their probabilistic representation in terms of a jump Markov process as outlined in [41]. It extends ray tracing to scattering media. Footnote 4: The sole difference lies in the sign of the imaginary part of \(\mathbf{\Sigma}_{\alpha}\) and the \(1/\pi\) factor, to be compared with [47, Eq. (4.34)]. **Remark 3**.: _In [12] the derivation of a radiative transfer equation of the form of Eq. (74) is proved rigorously for the scalar wave equation with a random speed of sound. Here it is shown that the average Wigner transform of the wave field converges (weakly) to the solution of a linear Boltzmann equation similar to (74), provided that the initial conditions satisfy some adhoc boundedness and tightness conditions and that the random fluctuations fulfill some regularity assumptions as well. A self-averaging property is also demonstrated, by which the deviations of the Wigner transform from its average are shown to vanish in the high-frequency limit. This means that Eq. (74) would also hold for \(\mathbf{w}_{\alpha}\) and not only \(\mathbb{E}\left\{\mathbf{w}_{\alpha}\right\}\), a result that is implicitly acknowledged in the statement of Eq. (23). The proofs in [12] are however quite involved and very technical so that their extension to the present case is much beyond the scope of this research. Radiative transfer equations can also be rigorously derived for some particular configurations as in [16, 37]._ **Example 3**.: _For the Lorentz model with damping (6) the scattering cross-sections (72) are the ones derived in [47] for isotropic media with eigenvalues \(\omega_{\pm}(\mathbf{k})=\pm c_{0}|\mathbf{k}|\), each of multiplicity two, for the propagating modes. Couplings in Eq. (74) occur only for those eigenvalues with the same wave numbers \(|\mathbf{p}|=|\mathbf{k}|\) and signs. The corresponding right eigenvectors \(\mathbf{b}_{\pm}^{a}(\hat{\mathbf{k}})\) are given by Eq. (52) and the left eigenvectors are for \(a=1,2\):_ \[\mathbf{c}_{\pm}^{a}(\hat{\mathbf{k}})=\begin{pmatrix}\sqrt{\frac{\epsilon_{0}}{2}} \hat{\mathbf{e}}_{a}(\hat{\mathbf{k}})\\ \pm\sqrt{\frac{\mu_{0}}{2}}\hat{\mathbf{k}}\times\hat{\mathbf{e}}_{a}(\hat{ \mathbf{k}})\end{pmatrix}\,.\] _Besides, the dimensionless random fluctuations \(\mathbf{V}(\mathbf{y})\) read:_ \[\mathbf{V}(\mathbf{y})=\begin{bmatrix}\epsilon_{1}(\mathbf{y})\mathbf{I}&\mathbf{0}\\ \mathbf{0}&\mu_{1}(\mathbf{y})\mathbf{I}\end{bmatrix}\,,\] _where \((\epsilon_{1}(\mathbf{y}),\,\mathbf{y}\in\mathbb{R}^{3})\) is the dimensionless random fluctuation of the permittivity \(\epsilon_{0}\), and \((\mu_{1}(\mathbf{y}),\,\mathbf{y}\in\mathbb{R}^{3})\) is the dimensionless random fluctuation of the permeability \(\mu_{0}\). Both are real-valued, second-order, mean-square homogeneous random processes with zero means. Their power spectral density functions are denoted by \(\widehat{R}_{\epsilon}(\mathbf{k})\) and \(\widehat{R}_{\mu}(\mathbf{k})\), respectively, and their cross-power spectral density function is denoted by \(\widehat{R}_{\epsilon\mu}(\mathbf{k})=\widehat{R}_{\mu\epsilon}(-\mathbf{k})\). 
Introducing the \(2\times 2\) matrices \(\mathbf{T}(\hat{\mathbf{k}},\hat{\mathbf{p}})\) and \(\mathbf{X}(\hat{\mathbf{k}},\hat{\mathbf{p}})\) such that:_ \[T_{ab}(\hat{\mathbf{k}},\hat{\mathbf{p}})=\hat{\mathbf{e}}_{a}(\hat{\mathbf{k}})\cdot\hat{\bm {e}}_{b}(\hat{\mathbf{p}})\,,\quad X_{ab}(\hat{\mathbf{k}},\hat{\mathbf{p}})=(\hat{\mathbf{k}} \times\hat{\mathbf{e}}_{a}(\hat{\mathbf{k}}))\cdot(\hat{\mathbf{p}}\times\hat{\mathbf{e}}_{b} (\hat{\mathbf{p}}))\,,\] _we have \(\mathbf{T}(\hat{\mathbf{k}},\hat{\mathbf{p}})^{*}=\mathbf{T}(\hat{\mathbf{p}},\hat{\mathbf{k}})\), \(\mathbf{X}(\hat{\mathbf{k}},\hat{\mathbf{p}})^{*}=\mathbf{X}(\hat{\mathbf{p}},\hat{\mathbf{k}})\), and \(\mathbf{T}(\hat{\mathbf{k}},\hat{\mathbf{p}})\mathbf{X}(\hat{\mathbf{p}},\hat{\mathbf{k}})=(\hat{\mathbf{k }}\cdot\hat{\mathbf{p}})\mathbf{I}\). The differential scattering kernels in Eq. (74) then read:_ \[\mathbf{\sigma}_{\alpha\alpha}(\mathbf{k},\mathbf{p}):\mathbf{w}_{\alpha}( \mathbf{p})=\frac{\pi}{2}c_{0}^{2}|\mathbf{k}||\mathbf{p}| \Big{[} \widehat{R}_{\epsilon}(\mathbf{k}-\mathbf{p})\mathbf{T}(\hat{\mathbf{k}},\hat{\bm {p}})\mathbf{w}_{\alpha}(\mathbf{p})\mathbf{T}(\hat{\mathbf{p}},\hat{\mathbf{k}})\] \[+\widehat{R}_{\mu}(\mathbf{k}-\mathbf{p})\mathbf{X}(\hat{\mathbf{k}},\hat{\mathbf{p}} )\mathbf{w}_{\alpha}(\mathbf{p})\mathbf{X}(\hat{\mathbf{p}},\hat{\mathbf{k}})\] \[+\widehat{R}_{\epsilon\mu}(\mathbf{k}-\mathbf{p})\mathbf{T}(\hat{\mathbf{k}},\hat{ \mathbf{p}})\mathbf{w}_{\alpha}(\mathbf{p})\mathbf{X}(\hat{\mathbf{p}},\hat{\mathbf{k}})\] \[+\widehat{R}_{\mu\epsilon}(\mathbf{k}-\mathbf{p})\mathbf{X}(\hat{\mathbf{k}},\hat{ \mathbf{p}})\mathbf{w}_{\alpha}(\mathbf{p})\mathbf{T}(\hat{\mathbf{p}},\hat{\mathbf{k}})\Big{]}\,,\] _and \(\mathbf{\sigma}_{\alpha\beta}:\mathbf{w}_{\beta}=\mathbf{0}\) whenever \(\beta\neq\alpha\), \(\alpha,\beta\in\{+,-\}\). If in addition the random fluctuations \(\epsilon_{1}\) and \(\mu_{1}\) are isotropic, which means that \(\widehat{R}_{\epsilon}(\mathbf{k})\), \(\widehat{R}_{\mu}(\mathbf{k})\), and \(\widehat{R}_{\epsilon\mu}(\mathbf{k})\) depend on \(|\mathbf{k}|\) solely, it can be shown that \(\int_{S^{2}}\mathbf{\sigma}_{\alpha\alpha}(\mathbf{k},\mathbf{p}):\mathbf{I}_{A}\mathrm{d} \Omega(\hat{\mathbf{p}})\) is proportional to the identity matrix. Then the total scattering cross-sections (73) are diagonal and real, and the formula of [47, Eq. (4.46)] is recovered; see Appendix A._ **Example 4**.: _A chiral medium has optical response:_ _where \(\chi\in\mathbb{R}\) is the magnetoelectric constant and \(\kappa=c_{0}\chi\) is the chirality parameter, which is such that \(|\kappa|<1\) to preserve positivity of \(\boldsymbol{K}_{0}\). We consider biisotropic perturbations as in [18]:_ \[\boldsymbol{V}(\boldsymbol{y})=\begin{bmatrix}a(\boldsymbol{y})\boldsymbol{I}& \mathrm{i}Z_{0}b(\boldsymbol{y})\boldsymbol{I}\\ \frac{b(\boldsymbol{y})}{\mathrm{i}Z_{0}}\boldsymbol{I}&a(\boldsymbol{y}) \boldsymbol{I}\end{bmatrix}\,,\] _where \(Z_{0}=\sqrt{\mu_{0}/\epsilon_{0}}\) is the impedance, and \((a(\boldsymbol{y}),\,\boldsymbol{y}\in\mathbb{R}^{3})\) and \((b(\boldsymbol{y}),\,\boldsymbol{y}\in\mathbb{R}^{3})\) are real-valued, second-order, mean-square homogeneous random processes with zero means. Their power spectral density functions are denoted by \(\widehat{R}_{a}(\boldsymbol{k})\) and \(\widehat{R}_{b}(\boldsymbol{k})\), respectively, and their cross-power spectral density function is denoted by \(\widehat{R}_{ab}(\boldsymbol{k})=\widehat{R}_{ba}(-\boldsymbol{k})\). 
Then \(\boldsymbol{L}_{0}\) has four non-zero simple eigenvalues:_ \[\omega_{1}(\boldsymbol{k})=\frac{c_{0}|\boldsymbol{k}|}{1+\kappa}\,,\quad\omega_{2}(\boldsymbol{k})=-\frac{c_{0}|\boldsymbol{k}|}{1+\kappa}\,,\quad\omega_{3}(\boldsymbol{k})=\frac{c_{0}|\boldsymbol{k}|}{1-\kappa}\,,\quad\omega_{4}(\boldsymbol{k})=-\frac{c_{0}|\boldsymbol{k}|}{1-\kappa}\,,\] _associated to the right eigenvectors:5_ Footnote 5: Note that for \(\kappa=0\), the eigenvectors (52) are recovered taking the summation and difference of \(\boldsymbol{b}_{1}\) and \(\boldsymbol{b}_{3}\) on one hand, yielding \(\boldsymbol{b}_{+}^{1}\) and \(\boldsymbol{b}_{+}^{2}\), and the summation and difference of \(\boldsymbol{b}_{2}\) and \(\boldsymbol{b}_{4}\) on the other hand, yielding \(\boldsymbol{b}_{-}^{1}\) and \(\boldsymbol{b}_{-}^{2}\). \[\boldsymbol{b}_{1}(\hat{\boldsymbol{k}})=\frac{1}{\sqrt{1+\kappa}}\begin{pmatrix}\frac{\hat{\boldsymbol{e}}_{1}^{\prime}(\hat{\boldsymbol{k}})}{\sqrt{2\epsilon_{0}}}\\ \frac{\hat{\boldsymbol{k}}\times\hat{\boldsymbol{e}}_{1}^{\prime}(\hat{\boldsymbol{k}})}{\sqrt{2\mu_{0}}}\end{pmatrix}\,,\quad\boldsymbol{b}_{2}(\hat{\boldsymbol{k}})=\frac{1}{\sqrt{1+\kappa}}\begin{pmatrix}\frac{\hat{\boldsymbol{e}}_{2}^{\prime}(\hat{\boldsymbol{k}})}{\sqrt{2\epsilon_{0}}}\\ -\frac{\hat{\boldsymbol{k}}\times\hat{\boldsymbol{e}}_{2}^{\prime}(\hat{\boldsymbol{k}})}{\sqrt{2\mu_{0}}}\end{pmatrix}\,,\] _and:_ \[\boldsymbol{b}_{3}(\hat{\boldsymbol{k}})=\frac{1}{\sqrt{1-\kappa}}\begin{pmatrix}\frac{\hat{\boldsymbol{e}}_{2}^{\prime}(\hat{\boldsymbol{k}})}{\sqrt{2\epsilon_{0}}}\\ \frac{\hat{\boldsymbol{k}}\times\hat{\boldsymbol{e}}_{2}^{\prime}(\hat{\boldsymbol{k}})}{\sqrt{2\mu_{0}}}\end{pmatrix}\,,\quad\boldsymbol{b}_{4}(\hat{\boldsymbol{k}})=\frac{1}{\sqrt{1-\kappa}}\begin{pmatrix}\frac{\hat{\boldsymbol{e}}_{1}^{\prime}(\hat{\boldsymbol{k}})}{\sqrt{2\epsilon_{0}}}\\ -\frac{\hat{\boldsymbol{k}}\times\hat{\boldsymbol{e}}_{1}^{\prime}(\hat{\boldsymbol{k}})}{\sqrt{2\mu_{0}}}\end{pmatrix}\,,\] _letting \(\hat{\boldsymbol{e}}_{1}^{\prime}(\hat{\boldsymbol{k}})=\frac{1}{\sqrt{2}}(\mathrm{i}\hat{\boldsymbol{e}}_{1}(\hat{\boldsymbol{k}})-\hat{\boldsymbol{e}}_{2}(\hat{\boldsymbol{k}}))\) and \(\hat{\boldsymbol{e}}_{2}^{\prime}(\hat{\boldsymbol{k}})=\frac{1}{\sqrt{2}}(\mathrm{i}\hat{\boldsymbol{e}}_{1}(\hat{\boldsymbol{k}})+\hat{\boldsymbol{e}}_{2}(\hat{\boldsymbol{k}}))\).
The left eigenvectors are then:_ \[\boldsymbol{c}_{1}(\hat{\boldsymbol{k}})=\sqrt{1+\kappa}\begin{pmatrix}\sqrt{\frac{\epsilon_{0}}{2}}\hat{\boldsymbol{e}}_{1}^{\prime}(\hat{\boldsymbol{k}})\\ \sqrt{\frac{\mu_{0}}{2}}\hat{\boldsymbol{k}}\times\hat{\boldsymbol{e}}_{1}^{\prime}(\hat{\boldsymbol{k}})\end{pmatrix}\,,\quad\boldsymbol{c}_{2}(\hat{\boldsymbol{k}})=\sqrt{1+\kappa}\begin{pmatrix}\sqrt{\frac{\epsilon_{0}}{2}}\hat{\boldsymbol{e}}_{2}^{\prime}(\hat{\boldsymbol{k}})\\ -\sqrt{\frac{\mu_{0}}{2}}\hat{\boldsymbol{k}}\times\hat{\boldsymbol{e}}_{2}^{\prime}(\hat{\boldsymbol{k}})\end{pmatrix}\,,\] _and:_ \[\boldsymbol{c}_{3}(\hat{\boldsymbol{k}})=\sqrt{1-\kappa}\begin{pmatrix}\sqrt{\frac{\epsilon_{0}}{2}}\hat{\boldsymbol{e}}_{2}^{\prime}(\hat{\boldsymbol{k}})\\ \sqrt{\frac{\mu_{0}}{2}}\hat{\boldsymbol{k}}\times\hat{\boldsymbol{e}}_{2}^{\prime}(\hat{\boldsymbol{k}})\end{pmatrix}\,,\quad\boldsymbol{c}_{4}(\hat{\boldsymbol{k}})=\sqrt{1-\kappa}\begin{pmatrix}\sqrt{\frac{\epsilon_{0}}{2}}\hat{\boldsymbol{e}}_{1}^{\prime}(\hat{\boldsymbol{k}})\\ -\sqrt{\frac{\mu_{0}}{2}}\hat{\boldsymbol{k}}\times\hat{\boldsymbol{e}}_{1}^{\prime}(\hat{\boldsymbol{k}})\end{pmatrix}\,.\] _All scattering cross-sections are scalars and couplings in Eq. (74) possibly occur for those eigenvalues with the same signs and \(|\boldsymbol{p}|=|\boldsymbol{k}|\), or \((1\pm\kappa)|\boldsymbol{p}|=(1\mp\kappa)|\boldsymbol{k}|\). However one obtains \(\sigma_{13}(\boldsymbol{k},\boldsymbol{p})=\sigma_{24}(\boldsymbol{k},\boldsymbol{p})=0\) and only self-couplings occur with scattering cross-sections:_ \[\sigma_{11}(\mathbf{k},\mathbf{p})=\frac{2\pi c_{0}^{2}|\mathbf{k}|\,|\mathbf{p}|}{(1+\kappa)^{2}}\left|\overline{\hat{\mathbf{e}}_{1}^{\prime}(\hat{\mathbf{k}})}\cdot\hat{\mathbf{e}}_{1}^{\prime}(\hat{\mathbf{p}})\right|^{2}\left(\widehat{R}_{a}(\mathbf{k}-\mathbf{p})+\widehat{R}_{b}(\mathbf{k}-\mathbf{p})+2\widehat{R}_{ab}^{s}(\mathbf{k}-\mathbf{p})\right)\,,\] \[\sigma_{22}(\mathbf{k},\mathbf{p})=\frac{2\pi c_{0}^{2}|\mathbf{k}|\,|\mathbf{p}|}{(1+\kappa)^{2}}\left|\overline{\hat{\mathbf{e}}_{2}^{\prime}(\hat{\mathbf{k}})}\cdot\hat{\mathbf{e}}_{2}^{\prime}(\hat{\mathbf{p}})\right|^{2}\left(\widehat{R}_{a}(\mathbf{k}-\mathbf{p})+\widehat{R}_{b}(\mathbf{k}-\mathbf{p})+2\widehat{R}_{ab}^{s}(\mathbf{k}-\mathbf{p})\right)\,,\] \[\sigma_{33}(\mathbf{k},\mathbf{p})=\frac{2\pi c_{0}^{2}|\mathbf{k}|\,|\mathbf{p}|}{(1-\kappa)^{2}}\left|\overline{\hat{\mathbf{e}}_{2}^{\prime}(\hat{\mathbf{k}})}\cdot\hat{\mathbf{e}}_{2}^{\prime}(\hat{\mathbf{p}})\right|^{2}\left(\widehat{R}_{a}(\mathbf{k}-\mathbf{p})+\widehat{R}_{b}(\mathbf{k}-\mathbf{p})-2\widehat{R}_{ab}^{s}(\mathbf{k}-\mathbf{p})\right)\,,\] \[\sigma_{44}(\mathbf{k},\mathbf{p})=\frac{2\pi c_{0}^{2}|\mathbf{k}|\,|\mathbf{p}|}{(1-\kappa)^{2}}\left|\overline{\hat{\mathbf{e}}_{1}^{\prime}(\hat{\mathbf{k}})}\cdot\hat{\mathbf{e}}_{1}^{\prime}(\hat{\mathbf{p}})\right|^{2}\left(\widehat{R}_{a}(\mathbf{k}-\mathbf{p})+\widehat{R}_{b}(\mathbf{k}-\mathbf{p})-2\widehat{R}_{ab}^{s}(\mathbf{k}-\mathbf{p})\right)\,,\] _for \(\widehat{R}_{ab}^{s}(\mathbf{k})=\frac{1}{2}(\widehat{R}_{ab}(\mathbf{k})+\widehat{R}_{ba}(\mathbf{k}))\). The total scattering cross-sections for statistically isotropic perturbations of the optical response are then:_ \[\Sigma_{1}(|\mathbf{k}|)=\Sigma_{2}(|\mathbf{k}|)=\frac{\pi^{2}c_{0}|\mathbf{k}|^{4}}{1+\kappa}\int_{-1}^{1}(1+\Theta)^{2}\Big{[}\widehat{R}_{a}\left(|\mathbf{k}|\sqrt{2(1-\Theta)}\right)+\widehat{R}_{b}\left(|\mathbf{k}|\sqrt{2(1-\Theta)}\right)\\ +2\widehat{R}_{ab}\left(|\mathbf{k}|\sqrt{2(1-\Theta)}\right)\Big{]}\mathrm{d}\Theta\,,\] _since \(\widehat{R}_{ab}(|\mathbf{k}|)=\widehat{R}_{ba}(|\mathbf{k}|)\) in the isotropic case, and:_ \[\Sigma_{3}(|\mathbf{k}|)=\Sigma_{4}(|\mathbf{k}|)=\frac{\pi^{2}c_{0}|\mathbf{k}|^{4}}{1-\kappa}\int_{-1}^{1}(1+\Theta)^{2}\Big{[}\widehat{R}_{a}\left(|\mathbf{k}|\sqrt{2(1-\Theta)}\right)+\widehat{R}_{b}\left(|\mathbf{k}|\sqrt{2(1-\Theta)}\right)\\ -2\widehat{R}_{ab}\left(|\mathbf{k}|\sqrt{2(1-\Theta)}\right)\Big{]}\mathrm{d}\Theta\,.\]

## 5. Summary and outlook

We have derived a system of coupled radiative transfer equations to describe the propagation of high-frequency electromagnetic waves in randomly fluctuating bianisotropic dielectric media including dispersive and dissipative effects. The fluctuations are weak and vary at the same length scales as the typical wavelength. We have used a Wigner functional approach and its interpretation in terms of semiclassical pseudo-differential operators to derive the transfer equations from a multiscale analysis. Radiative transfer equations have a geometrical interpretation in terms of bicharacteristic rays that is well suited for their numerical integration by ray methods and ray tracing solvers. Our results extend this approach to heterogeneous _bianisotropic_ media or plasmas beyond the classical applications of ray tracing in homogeneous or piecewise homogeneous _isotropic_ media. In future works one aims to derive the diffusion limit of the radiative transfer model, which describes the evolution of the energy density in physical space solely after the waves have travelled several scattering mean free paths and have lost the memory of their angular distribution at earlier times. The issue of deriving boundary conditions adapted to the Wigner distribution from the boundary conditions applied to the electromagnetic fields is also worth considering, for applications in radar imaging or remote sensing with scenes exhibiting sharp interfaces for example.

## Acknowledgement

We thank Thomas Lepetit at ONERA for valuable discussions.
2302.00130
Splitting probabilities for dynamics in corrugated channels: passive VS active Brownian motion
In many practically important problems which rely on particles' transport in realistic corrugated channels, one is interested to know the probability that either of the extremities, (e.g., the one containing a chemically active site, or connected to a broader channel), is reached before the other one. In mathematical literature, the latter are called the "splitting" probabilities (SPs). Here, within the Fick-Jacobs approach, we study analytically the SPs as functions of system's parameters for dynamics in three-dimensional corrugated channels, confronting standard diffusion and active Brownian motion. Our analysis reveals some similarities in the behavior and also some markedly different features, which can be seen as fingerprints of the activity of particles.
P. Malgaretti, T. Nizkaia, G. Oshanin
2023-01-31T22:39:17Z
http://arxiv.org/abs/2302.00130v1
# Splitting probabilities for dynamics in corrugated channels: passive VS active Brownian motion ###### Abstract In many practically important problems which rely on particles' transport in realistic corrugated channels, one is interested to know the probability that either of the extremities, (e.g., the one containing a chemically active site, or connected to a broader channel), is reached before the other one. In mathematical literature, the latter are called the "splitting" probabilities (SPs). Here, within the Fick-Jacobs approach, we study analytically the SPs as functions of system's parameters for dynamics in three-dimensional corrugated channels, confronting standard diffusion and active Brownian motion. Our analysis reveals some similarities in the behavior and also some markedly different features, which can be seen as fingerprints of the activity of particles. Transport of particles in narrow corrugated channels is an important area of research which has attracted a great deal of attention within the recent several decades (see e.g., Ref. [1] for a review). In part, such an interest is due to the relevance to various realistic physical, biophysical and chemical systems, as well applications in nanotechnology and nanomedicine, e.g., for manufacturing of artificial molecular nanofilters. To name just a few examples, we mention transport in porins [2; 3], in nuclear pores [4; 5; 6], in microtubules [7] and dendritic spines [8], transport of microswimmers in capillaries [9; 10], translocation of polymers in pores [11; 12; 13] and their sequencing in nanopore-based devices [14], as well as in microfluidics [15; 16]. The problem of random transport in corrugated channels is clearly also a challenge for the theoretical analysis - it is too complicated to be solved analytically in full detail and one therefore seeks approximate approaches that are justified in particular limits. Most of the available analytical descriptions rely on the so-called Fick-Jacobs approach [17; 18] and its subsequent generalizations (see, e.g., [19; 20; 21; 22; 23]). In essence, this approach amounts to a reduction of the original multidimensional problem to a one-dimensional diffusion in presence of some potential, which mimics in an effective way a spatial variation of the confining boundaries. In some cases, this approximation is physically meaningful and provides an insight into the behavior of important characteristic properties, e.g., currents across the channel, the mean first-passage times to some positions and quantifying fluctuations of the first-passage times [24]. In other systems, in which, e.g., diffusion in the direction perpendicular to the main axis of the channel is important [25], other approaches are to be developed. In many important situations one is interested in understanding the behavior of the properties which characterize a kind of a "broken symmetry" in otherwise symmetric dynamics : in particular, of the probability that a particle injected at some position within the channel reaches first its prescribed extremity without having ever reached the opposite one. This particle can be a tracer within a channel that is attached to a broader pathway to which all the channels are connected, or it can be a chemically active molecule which needs to react with a target site placed at either of the extremities. 
In mathematical and physical literature (see, e.g., [26; 27]) such probabilities - the so-called splitting probabilities - have been analyzed in details in various settings, with and without an external potential (see, e.g. [28]), providing an important complementary insight into the dynamical behavior. In the present paper, we study analytically the behavior of splitting probabilities (SPs) as functions of system's parameters for transport in narrow corrugated channels, in terms of a suitably generalized Fick-Jacobs approach. In regard to the dynamics, we confront two different transport mechanisms - standard Brownian motion and Figure 1: Top: A colloidal particle in a simple varying-section channel with fore-aft symmetry. Bottom: the effective piecewise linear potential \(A(x)\) within the Fick-Jacobs approach and the corresponding overall barrier \(\Delta A\). active Brownian motion, capitalizing for the latter case on the theoretical framework developed in recent [29; 30; 31; 32; 33; 34; 35]. For passive diffusion we obtain exact expressions for the SPs for channels of an arbitrary periodic shape. For the active case for which the dynamic equations have a much more cumbersome form [29; 30; 31; 32; 33; 34; 35], we resort to a numerical analysis. Our theoretical findings demonstrate that the SPs are quite sensitive to both the geometry of the channel and the activity of the particles. In particular, for active particles the SPs exhibit a spectacular non-monotonous dependence on the amplitude of the corrugation of the channel when the magnitude of the entropic force emerging due to a confinement becomes comparable to the propulsive force. This effect is absent for a passive Brownian motion. **Passive particles.** Consider a particle that starts at \(\mathbf{r}_{0}\) and undergoes a passive Brownian motion within an axially-symmetric three-dimensional channel with impermeable periodically-corrugated boundaries. It is convenient to use the cylindric coordinates \((r,x)\), where the \(x\)-axis coincides with the main axis of the channel, while \(r\) is the radial coordinate. A local thickness of the channel at point \(x\) is defined by \(h(x)\) and hence, \(r\leq h(x)\). In view of the symmetry, the particle's position probability density function \(\rho(\mathbf{r},t)\) and therefore all other properties derived from it are independent of the polar angle. We focus on the SPs - the probabilities that the particle first reaches either of the extremities \(x=\pm L/2\) of the channel (see Fig. 1) without ever hitting the other one. 
We first write down the advection-diffusion equation that governs the time evolution of the particle's position probability density function \(\rho(\mathbf{r},t)\) : \[\dot{\rho}(\mathbf{r},t)=\nabla\cdot[D\nabla\rho(\mathbf{r},t)+D\beta\rho( \mathbf{r},t)\nabla W(\mathbf{r})] \tag{1}\] where \(\mathbf{r}\) is the position of a particle, \(D\) is the diffusion coefficient, \(\beta^{-1}=k_{B}T\) is the inverse thermal energy, \(k_{B}\) is the Boltzmann constant, \(T\) - the absolute temperature and \(W(\mathbf{r})\) is the particle-wall interaction potential, \[W(\mathbf{r})=\begin{cases}\phi(\mathbf{r}),&r<h(x),\\ \infty,&\text{otherwise}.\end{cases} \tag{2}\] If a local thickness of the channel is a slowly varying function of the \(x\)-coordinate, such that \(\partial_{x}h(x)\ll 1\), it is possible to write down the probability density function in the following approximate factorized form (see, e.g., [36]), \[\rho(\mathbf{r},t)=p(x,t)\frac{e^{-\beta W(\mathbf{r})}}{e^{-\beta A(x)}}\,, \tag{3}\] where \[A(x)=-\frac{1}{\beta}\ln\left[\frac{1}{\pi h_{0}^{2}}\int_{0}^{\infty}e^{- \beta W(x,r)}rdr\right] \tag{4}\] is the local free energy and \(h_{0}\) is the mean cross-section of the channel. Upon integrating over the radial coordinate, we cast Eq. (3) into the form: \[\dot{p}(x,t)=\partial_{x}\left[D\partial_{x}p(x,t)+D\beta p(x,t)\partial_{x}A (x)\right] \tag{5}\] Such a reduction of the original three-dimensional problem to a one-dimensional diffusion in presence of an effective potential (which, in fact, is the local free energy defined in Eq. (4)) is called the Fick-Jacobs approach [37; 20; 38] and its range of applicability is well-understood [39; 40; 41; 42; 43; 44; 45; 46; 47; 48]. This approach has provided an insight into the behavior of quite diverse confined systems, including colloidal particles [42; 49], flow of charged fluids [50; 51; 52; 53; 54], of polymers [55; 24; 56], of rigid rods [57], systems with chemical reactions [58], and pattern-forming ones [59]. We quantify next the SP \(E_{+}\) - the probability that the particle first reaches \(x=L/2\) without ever touching \(x=-L/2\). This SP has the form (see e.g. [28]) \[E_{+}=\frac{\tau_{-}}{\tau_{+}+\tau_{-}}\,,\quad E_{-}=\frac{\tau_{+}}{\tau_{ +}+\tau_{-}}\,, \tag{6}\] where \(\tau\pm=\rho_{0}L/|J_{\pm}|\) and \(|J\pm|\) are the magnitudes of the steady-state currents from \(x_{0}\) to the extremities \(x=L/2\) and \(x=-L/2\), respectively. Note that the SP \(E_{-}\) (i.e., the probability that the particle first reaches \(x=-L/2\) without ever touching \(x=L/2\)) is simply defined by \(E_{-}=1-E_{+}\). Solving Eq. (5), we determine the steady-state currents \(J\pm\) (see appendix) and hence, the functions \(\tau_{\pm}\) to get \[\tau_{-}=\tau_{0}\int_{-L/2}^{x_{0}}e^{\beta A(x)}dx\,,\quad\tau_{+}=\tau_{0} \int_{x_{0}}^{L/2}e^{\beta A(x)}dx\,, \tag{7}\] with \(\tau_{0}=(L/D)e^{-\beta A(L)}\). Expressions (7) totally define the SPs \(E_{\pm}\). They are fairly general and hold for arbitrary \(A(x)\), i.e., confining boundaries of arbitrary (sufficiently smooth) shapes. In the trivial case \(A(x)\equiv 0\), we find from Eq. (7) that the functions \(\tau_{\mp}=L\left(L/2\pm x_{0}\right)/D\) and hence, recover the well-known result [26] \[E_{+}=\frac{1}{2}+\frac{x_{0}}{L}\,,\quad E_{-}=\frac{1}{2}-\frac{x_{0}}{L} \,,\quad-\frac{L}{2}\leq x_{0}\leq\frac{L}{2} \tag{8}\] We will use Eq. 
(8) in what follows as a point of reference - all departures from a simple linear behavior are indicative of the effects of the confining boundaries. In order to get an idea of the dependence of the SPs on the overall barrier \(\Delta A\) (see Fig. 1), consider a simple form of the free energy: \[\beta A(x)=\begin{cases}\beta\frac{2\Delta A}{L}\left(x+\frac{L}{2}\right),&-\frac{L}{2}<x\leq 0,\\ -\beta\frac{2\Delta A}{L}\left(x-\frac{L}{2}\right),&0<x<\frac{L}{2}\end{cases} \tag{9}\] We note parenthetically that such a simple piece-wise linear form has provided qualitatively reliable predictions on the behavior of the mean first-passage times through a finite channel in the case of ions in a charged confinement [36]. Fig. 2 displays the SPs \(E_{\pm}\) as functions of the starting point \(x_{0}\) in the case of a fore-aft symmetric channel. We observe that for moderate values of the barrier, \(\beta\Delta A=\pm 1\), i.e., for a mild corrugation, the SPs exhibit an almost linear dependence on \(x_{0}\) (see blue curves in Fig. 2). For larger values of \(\beta|\Delta A|\), the corrugation of the channel starts to play a major role and entails an essential departure from the linear dependence. We see that upon an increase of \(\beta\Delta A\) to larger positive values, \(E_{+}\) and \(E_{-}\) attain an \(S\)-shaped form which becomes progressively more steep in the vicinity of \(x=0\) the larger \(\beta\Delta A\) is. Recall that for \(\beta\Delta A>0\) the potential has a maximum at \(x=0\) meaning that the channel has a bottleneck at this position. In consequence, when \(x_{0}\) even only slightly exceeds \(0\), it becomes much more probable for a particle to reach the right extremity because the bottleneck does not permit it to reach the left one. Conversely, for \(\beta\Delta A<0\) the channel is widest at \(x=0\). In this case, if a particle starts in a broad part of the channel, it first diffuses there for a long time effectively "forgetting" about its actual starting point. Moreover, since in this case the channel narrows close to the extremities, there emerge effective entropic barriers (see, e.g., [60]) which the particle has to overcome in order to reach any of the extremities. As a consequence, a passage to the extremity may necessitate repeated unsuccessful attempts to overpass the entropic barrier, which attempts are interspersed with the excursions in the broad part of the channel. A combined effect of these two factors results in a very weak dependence of \(E_{\pm}\) on \(x_{0}\), which behavior we indeed observe in panel (b) of Fig. 2. Next, we examine the dependence of \(E_{\pm}\) on the magnitude of the barrier \(\beta\Delta A\) for a few values of the starting position \(x_{0}\). Figure 3 shows that for \(\beta\Delta A\gg 1\) (i.e., for a strong entropic repulsion from the bottleneck at \(x=0\)) the SPs are either (almost) equal to zero or to unity, meaning that the particle most likely reaches first the closest extremity and never gets to the opposite one. In contrast, for \(\beta\Delta A\ll-1\) (i.e., for an entropic repulsion from the extremities) the barrier to overcome becomes very high and a particle has to undertake many attempts to cross the barrier before it actually does so. As a consequence, for large negative \(\beta\Delta A\) the SPs \(E_{\pm}\simeq 0.5\).
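As a concrete numerical illustration of Eqs. (6)-(9): the prefactor \(\tau_{0}\) cancels in the ratio (6), so the SPs follow from the two quadratures in Eq. (7) alone. The short Python sketch below is not from the paper; the function names and parameter values are illustrative, and the tent-shaped potential of Eq. (9) is assumed. For \(\beta\Delta A=0\) it reproduces the linear law (8), and for large positive \(\beta\Delta A\) the \(S\)-shaped dependence on \(x_{0}\).

```python
import numpy as np
from scipy.integrate import quad

def beta_A(x, beta_dA, L=1.0):
    """Piecewise-linear (tent-shaped) free energy of Eq. (9), in units of kT."""
    return beta_dA * (1.0 - 2.0 * abs(x) / L)

def splitting_probs(x0, beta_dA, L=1.0):
    """E_+ and E_- from Eqs. (6)-(7); the common prefactor tau_0 cancels in the ratio."""
    w = lambda x: np.exp(beta_A(x, beta_dA, L))
    tau_minus, _ = quad(w, -L / 2.0, x0)   # integral entering tau_-
    tau_plus, _ = quad(w, x0, L / 2.0)     # integral entering tau_+
    e_plus = tau_minus / (tau_plus + tau_minus)
    return e_plus, 1.0 - e_plus

# Flat channel, beta*DeltaA = 0: recovers E_+ = 1/2 + x0/L of Eq. (8)
print(splitting_probs(x0=0.2, beta_dA=0.0))    # ~ (0.7, 0.3)
# Strong bottleneck at x = 0, beta*DeltaA = 10: E_+ is nearly a step function of x0
print(splitting_probs(x0=0.05, beta_dA=10.0))
```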
It is important to emphasize that within the Fick-Jacobs approach, many characteristic properties of a particle diffusing in a channel, such as charge, elastic moduli, deformability, are effectively encoded in the free energy barrier \(\Delta A\)[55; 24; 61]. For example, for uncharged particles which are much smaller than the channel bottleneck (i.e., point-like particles) we have \(\beta\Delta A\lesssim 3\) where \(\beta\Delta A=3\) implies that the maximal cross-section of the channel is \(\sim 30\) times the radius of the bottleneck. In contrast, for charged ions it is feasible to have \(\beta\Delta A\simeq 10\) when electrostatic potential at the walls is \(\beta e\zeta\simeq 10\)[53]. Finally, for deformable objects, like polymers, one may have a very large effective barrier \(\beta\Delta A\simeq 100\)[55; 24; 56].

Figure 2: \(E_{\pm}\) as functions of the starting point \(x_{0}/L\). Panel (a): \(\beta\Delta A=1.0\) (blue), \(3.0\) (cyan) and \(10.0\) (red). Panel (b): \(\beta\Delta A=-1.0\) (blue), \(-3.0\) (cyan) and \(-10.0\) (red). Solid lines stand for \(E_{+}\) whereas dashed lines for \(E_{-}\).

Figure 3: \(E_{\pm}\) as a function of the entropic barrier \(\beta\Delta A\) and \(x_{0}/L=0.1\) (blue), \(0.2\) (cyan) and \(0.4\) (red). Solid lines stand for \(E_{+}\) whereas dashed lines - for \(E_{-}\).

**Active particles.** The case of particles that propel themselves through the channels, e.g., of "active" colloids, is most challenging, because local violations of the equilibrium may lead to quite a different scenario as compared to the case of a passive Brownian motion. To set up the scene, consider first a simple situation in which the non-interacting particles move with a constant velocity either to the left or to the right in a one-dimensional system and interchange the sign of the velocity at random, at a constant rate \(\alpha\). In such a model the time evolution of the densities \(\rho_{\downarrow}(x,t)\) and \(\rho_{\uparrow}(x,t)\) of active particles moving to the left or to the right, respectively, is described by [57]: \[\dot{\rho}_{\uparrow} = D\partial_{x}\left[\partial_{x}\rho_{\uparrow}+\beta\rho_{\uparrow}\partial_{x}A(x)+\beta\rho_{\uparrow}F_{act}\right]-\alpha\rho_{\uparrow}+\alpha\rho_{\downarrow}\] \[\dot{\rho}_{\downarrow} = D\partial_{x}\left[\partial_{x}\rho_{\downarrow}+\beta\rho_{\downarrow}\partial_{x}A(x)-\beta\rho_{\downarrow}F_{act}\right]+\alpha\rho_{\uparrow}-\alpha\rho_{\downarrow}\] where \(F_{act}\) is the propulsive force. These equations are to be solved subject to the boundary conditions imposed at the extremities: \(\rho_{\uparrow,\downarrow}(x=x_{0})=\rho_{0}/2\) and \(\rho_{\uparrow,\downarrow}(x=\pm\frac{L}{2})=0.\) A straightforward analysis [57] shows that the dynamical behavior is characterized by two dimensionless parameters: the Peclet number \(\mathrm{Pe}=\beta F_{act}R\) and the reduced hopping rate \(\Gamma=\alpha R^{2}/D\). In particular, for a Janus swimmer [57] the hopping between two states stems from a rotational diffusion of the particle and \(\Gamma_{*}=3/4\). For other types of swimmers (e.g., bacteria), the hopping rates can be much smaller.
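Since the two-state dynamics is fully specified by the kinetic equations above - overdamped diffusion in the potential \(A(x)\), a propulsive force of fixed magnitude \(F_{act}\), and random reversals at rate \(\alpha\) - the SPs can also be estimated by a direct stochastic simulation, which provides an independent check on the reduced description. The sketch below is purely illustrative and is not the analysis performed in the paper; the function name, time step, particle number and the tent-shaped potential are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_E_plus(x0, beta_dA, pe, gamma, L=1.0, R=0.1, D=1.0,
                    n_part=5_000, dt=5e-4):
    """Monte-Carlo estimate of E_+ for the two-state (run-and-tumble) model.

    Each particle obeys dx = -D*beta*A'(x)*dt + s*D*beta*F_act*dt + sqrt(2*D*dt)*xi,
    with s = +/-1 reversing at rate alpha; Pe = beta*F_act*R and Gamma = alpha*R^2/D.
    """
    beta_F = pe / R               # propulsive force in units of kT / length
    alpha = gamma * D / R**2      # reversal (tumbling) rate
    x = np.full(n_part, float(x0))
    s = rng.choice([-1.0, 1.0], size=n_part)    # unpolarized start: rho_up = rho_down
    alive = np.ones(n_part, dtype=bool)
    hit_plus = np.zeros(n_part, dtype=bool)
    while alive.any():
        idx = np.flatnonzero(alive)
        # tent-shaped free energy: beta*A'(x) = -(2*beta_dA/L)*sign(x)
        drift = D * (2.0 * beta_dA / L) * np.sign(x[idx]) + s[idx] * D * beta_F
        x[idx] += drift * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(idx.size)
        flips = rng.random(idx.size) < alpha * dt
        s[idx[flips]] *= -1.0
        out_plus = x[idx] >= L / 2.0
        out_minus = x[idx] <= -L / 2.0
        hit_plus[idx[out_plus]] = True
        alive[idx[out_plus | out_minus]] = False
    return hit_plus.mean()

# Passive, flat-channel limit: should approach 1/2 + x0/L = 0.7
print(estimate_E_plus(x0=0.2, beta_dA=0.0, pe=0.0, gamma=0.75))
```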
Rewriting the equations in dimensionless form (but keeping the same notations), and setting the length scale to \(L\), we have that the particle probability densities \(\rho=\rho_{0}^{-1}(\rho_{\uparrow}+\rho_{\downarrow})\) and \(\phi=\rho_{0}^{-1}(\rho_{\uparrow}-\rho_{\downarrow})\) obey, in the steady state, \[0 =\partial_{x}[\partial_{x}\rho+\beta\Delta A\rho\partial_{x}a(x)+\frac{\mathrm{Pe}L}{R}\phi], \tag{10}\] \[0 =\partial_{x}\left[\partial_{x}\phi+\beta\Delta A\phi\partial_{x}a(x)+\frac{\mathrm{Pe}L}{R}\rho\right]-2\frac{L^{2}}{R^{2}}\Gamma\phi,\] while the boundary conditions take the form \[\rho(x_{0})=\rho_{0},\rho(\pm 1/2)=0,\,\phi(x_{0})=0,\phi(\pm 1/2)=0. \tag{11}\] Here \(a(x)\) is a piecewise-linear function such that \(a(0)=1\) and \(a(\pm 1/2)=0\). Note that the reduced channel length \(L/R\) appears in the equations only in combination with \(\mathrm{Pe}\) and \(\Gamma\). This means that the SPs for various channel lengths can be obtained by taking the solution at fixed \(L/R\) and changing \(\mathrm{Pe}\) and \(\Gamma\) accordingly. In the following we use \(L/R\)=10. Consider first a channel with a constant cross-section (\(\Delta A=0\)) - the simplest case for which, however, the SPs have not been determined as yet. Upon some straightforward algebra (see the Suppl. Mat., Eqs. (S30) to (S42)) it is possible to derive closed-form expressions for the currents \(J_{\pm}\) and hence, for the SPs which we depict in Fig. 4.

Figure 4: \(E_{+}\) (solid curves) and \(E_{-}\) (dot-dashed curves) as functions of \(x_{0}/L\). Panel (a): Fixed \(\Gamma=0.1\), \(L/R=10\) and varying Péclet number \(\mathrm{Pe}=0\) (blue), \(1\) (cyan) and \(5\) (red). Panel (b): Fixed \(\mathrm{Pe}=3\) and \(\Gamma=0.01\) (blue), \(0.1\) (cyan) and \(1\) (red).

We infer from Fig. 4(a) that upon an increase of \(\mathrm{Pe}\) the SPs \(E_{\pm}\) become (almost) independent of the starting point, except when the latter appears close to the extremities. This resembles the behavior which we observed for a passive particle in a channel with \(\Delta A<0\) for which the bottlenecks (entropic barriers) are at the extremities and the largest cross-section is at \(x=0\). Here, the origin of such a behavior is somewhat different: for low values of \(\Gamma\) the particle does not often change the direction of its motion and travels towards the extremities of the channel ballistically. Consequently, the larger the propulsive force (and hence, \(\mathrm{Pe}\)) is, the less sensitive \(E_{\pm}\) are to the starting point. In turn, in panel (b) we plot \(E_{\pm}\) as functions of \(x_{0}/L\) with fixed \(\mathrm{Pe}\) and three different values of \(\Gamma\). We realize that upon an increase of \(\Gamma\) the \(x_{0}\)-dependence of the SPs approaches the linear dependence in Eq. (8) specific to a passive Brownian motion in one-dimensional systems. This is, of course, not counter-intuitive - the larger \(\Gamma\) is, the more often the particle changes the direction of its motion and the dynamics becomes diffusive. To get an additional insight into the behavior of active particles in channels with a constant cross-section we plot in Fig. 5 the SPs \(E_{\pm}\) as functions of \(\Gamma\) and \(\mathrm{Pe}\) for fixed \(x_{0}/L=0.35\). Clearly, since the starting point is close to the right extremity of the channel, one expects that \(E_{+}>E_{-}\). We observe that \(E_{+}\) (\(E_{-}\)) is a monotonically decreasing (increasing) function of \(\mathrm{Pe}\). While
for small Pe (for which the particle's dynamics is a passive Brownian motion) \(E_{+}\) (\(E_{-}\)) is rather large (small) (\(E_{+}\approx 0.85\) and \(E_{-}\approx 0.15\)), upon an increase of Pe dynamics becomes ballistic and both \(E_{+}\) and \(E_{-}\) tend to the same universal value \(1/2\), which is rather counter-intuitive. Conversely, \(E_{+}\) (\(E_{-}\)) is a monotonically increasing (decreasing) function of the rate \(\Gamma\). Interestingly enough, in the small-\(\Gamma\) limit the values of \(E_{+}\) (\(E_{-}\)) are markedly different for small and large values of Pe: for Pe = 1 the SP \(E_{+}\) (\(E_{-}\)) is noticeably higher (lower) than 1/2 (in fact, \(E_{-}\approx 0.4\) and \(E_{+}\approx 0.6\)), while for Pe = 5 and 10 we have \(E_{-}\approx E_{+}\approx 1/2\). In the limit \(\Gamma\to\infty\) the dynamics becomes diffusive and we recover the low Peclet number behavior depicted in panel (a).

Figure 5: \(E_{+}\) (solid curves) and \(E_{-}\) (dot-dashed curves) for \(x_{0}/L=0.35\) in a channel with \(L/R=10\). Horizontal dashed line defines the level \(E_{+}=E_{-}=1/2\). Panel (a): The SPs are plotted as functions of the Péclet number for \(\Gamma=0.001\) (blue), 0.1 (cyan) and 1 (red). Panel (b): The SPs are plotted as functions of \(\Gamma\) for Pe = 1 (blue), 5 (cyan) and 10 (red).

Lastly, we consider the most difficult case - the behavior of the SPs for dynamics of active particles in a channel with a varying cross-section, encoded in the effective potential \(A(x)\). In this case, Eqs.(10) are too complicated to be solved analytically and we resort to a numerical analysis of these equations, which is done by using the standard scipy library in Python (see Suppl. Mat.). Our findings for the SP \(E_{-}\) are summarized in Figs. 6 and 7. Fig. 6 displays the dependence of \(E_{-}\) (recall that \(E_{+}=1-E_{-}\)) on the initial position \(x_{0}\) for fixed Pe = 0.1, \(\Gamma=3/4\) (Janus colloid case) and varying \(\Delta A\). In this figure, circles present the results obtained numerically for active swimmers, while solid curves - an analytical solution for passive particles.

Figure 6: Panel (a): \(E_{-}\) as a function of the initial position for \(\beta\Delta A/\)Pe = 3 (blue), 10 (cyan) and 20 (red) and fixed \(L=10R\). Solid lines indicate the respective behavior for passive particles, circles - for active particles with Pe = 1 and \(\Gamma=0.1\). Panel (b): The same for \(\beta\Delta A/\)Pe = \(-3\) (blue), \(-10\) (cyan) and \(-20\) (red). The dashed curve depicts the analytical solution for the active particles in a channel with a constant cross-section.

Fig. 6(a) demonstrates that for positive \(\Delta A\) (a bottleneck in the center), the behavior of active swimmers is very different from that of passive particles, and depends strongly on the values of both \(\Delta A\) and Pe. For sufficiently large values of the parameter \(\beta\Delta A/\)Pe (red circles), which limit is realized either for large values of the barrier \(\beta\Delta A\) or for small Pe, the SP \(E_{-}\) for the active particles in channels with a varying cross-section exhibits a characteristic \(S\)-shaped form with a very steep dependence on \(x_{0}\) close to the center of the channel. This implies that once \(x_{0}\) only slightly exceeds (or is less than) 0, the particle is (almost) certain to reach the closest extremity without ever reaching the other one. Numerically, the value of \(E_{-}\) appears to be very close to the corresponding result for passive particles, which is, of course, not a counter-intuitive behavior. In contrast, for small \(\beta\Delta A/\)Pe (blue circles), i.e., either for large values of Pe or for small values of the entropic barrier, \(E_{-}\) appears to be very close to our analytical prediction obtained for active swimmers (dashed curve) moving in constant cross-section channels, which is also physically quite plausible. Since the behavior in these limiting cases is very different, in general, there is a strong dependence of \(E_{-}\) on \(\beta\Delta A/\text{Pe}\) for the intermediate values of the system's parameters. We can therefore expect that particles with different activities can behave very differently in such a channel, especially if they start in the vicinity of the bottleneck. For negative values of \(\Delta A\) (entropic repulsion from the extremities), which case is presented in Fig. 6(b), \(E_{-}\) depends weakly on the starting point, which resembles the behavior observed earlier for passive particles. Further on, to highlight the difference between the passive and the active cases, in Fig. 7 we plot \(E_{\pm}\) as functions of the barrier \(\Delta A\) in situations when a particle (passive or active) starts either close to the middle of the channel, at \(x_{0}=0.1L\), or close to the right extremity of the channel, \(x_{0}=0.4L\). We observe that in the active particles case the behavior of the SPs is indeed very different from that of a passive one, especially when the starting point is close to either of the extremities. While in the situation when the starting point is close to the middle of the channel (i.e., for \(x_{0}=0.1L\)) all curves look very similar, with the only difference that for \(\text{Pe}>0\) they become progressively (with an increase of Pe) more shifted to the larger values of the barrier \(\beta\Delta A\), in the case when \(x_{0}=0.4L\) a remarkable non-monotonous behavior as a function of \(\beta\Delta A\) emerges for active particles, meaning that at some corrugation profiles the active particles more readily reach the right extremity. Interestingly enough, the position of the local maximum (minimum) of \(E_{+}\) (\(E_{-}\)) corresponds to \(-\beta\Delta A\simeq\text{Pe}/2\), i.e., the entropic force compensates the propulsive one. For passive particles \(E_{\pm}\) are monotonously increasing functions of \(\beta\Delta A\).

## V Conclusion

To conclude, we discussed here the behavior of the splitting probabilities as functions of system's parameters for dynamics in three-dimensional axially-symmetric channels with varying cross-sections. In a standard notation, the splitting probability is the probability that either of the extremities is reached before the opposite one. In regard to the dynamical behavior, we focused on two models of random transport - standard Brownian motion and active Brownian motion. Our analytical approach was based on a suitably generalized Fick-Jacobs approximation, which reduces an original three-dimensional model to a one-dimensional system with a spatially-varying effective potential defined as the local free energy. For standard diffusion, the latter model is exactly solvable and we derive explicit expressions for the splitting probabilities in arbitrarily shaped channels. For active Brownian motion the dynamical equations are more complicated and we find an analytical solution for constant cross-sections only.
For the more general case of a spatially-varying cross-section we resort to a numerical analysis. Our analysis reveals some similarities in the behavior of passive and active Brownian motions and also some distinctly different features, which can be seen as fingerprints of the activity of particles. A more detailed discussion of the behavior in channels with a more complicated geometry and a more elaborate analytical analysis will be presented elsewhere.
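As a purely illustrative complement to the numerical analysis mentioned above (this is not the authors' implementation; see the Supplementary Material for their analysis), the steady-state system (10) with the boundary conditions (11) can be solved with scipy's solve_bvp on the two subintervals \([-1/2,x_{0}]\) and \([x_{0},1/2]\); the SPs then follow from the ratio of the outgoing probability currents, as in Eq. (6). All parameter values below are placeholders, and a smooth stand-in replaces the piecewise-linear \(a(x)\) to avoid the kink at \(x=0\).

```python
import numpy as np
from scipy.integrate import solve_bvp

# Dimensionless parameters (length unit L = 1); all values are placeholders
PE, GAMMA, BDA, L_OVER_R = 3.0, 0.1, 5.0, 10.0
C = PE * L_OVER_R                  # coefficient of the cross terms in Eqs. (10)
K = 2.0 * L_OVER_R**2 * GAMMA      # coefficient of the sink term for phi

# Smooth stand-in for the tent-shaped a(x): a(x) = cos^2(pi x), a(0)=1, a(+-1/2)=0
a1 = lambda x: -np.pi * np.sin(2.0 * np.pi * x)            # a'(x)
a2 = lambda x: -2.0 * np.pi**2 * np.cos(2.0 * np.pi * x)   # a''(x)

def rhs(x, y):
    """First-order form of the steady-state Eqs. (10); y = (rho, rho', phi, phi')."""
    rho, drho, phi, dphi = y
    ddrho = -BDA * (drho * a1(x) + rho * a2(x)) - C * dphi
    ddphi = -BDA * (dphi * a1(x) + phi * a2(x)) - C * drho + K * phi
    return np.vstack([drho, ddrho, dphi, ddphi])

def outgoing_current(x0, x_wall):
    """|J| at the absorbing wall x_wall, solving on the subinterval between x0 and x_wall."""
    source_left = x_wall > x0                      # is the source at the left mesh end?
    lo, hi = (x0, x_wall) if source_left else (x_wall, x0)
    xs = np.linspace(lo, hi, 201)
    y0 = np.zeros((4, xs.size))
    y0[0] = np.linspace(1.0, 0.0, xs.size) if source_left else np.linspace(0.0, 1.0, xs.size)
    if source_left:   # rho = 1, phi = 0 at x0;  rho = phi = 0 at the wall  [Eq. (11)]
        bc = lambda ya, yb: np.array([ya[0] - 1.0, ya[2], yb[0], yb[2]])
    else:
        bc = lambda ya, yb: np.array([ya[0], ya[2], yb[0] - 1.0, yb[2]])
    sol = solve_bvp(rhs, bc, xs, y0, max_nodes=10000)
    # at the absorbing wall rho = phi = 0, so the probability flux reduces to -rho'
    wall_index = -1 if source_left else 0
    return abs(sol.y[1, wall_index])

def splitting_probs(x0):
    j_plus = outgoing_current(x0, 0.5)
    j_minus = outgoing_current(x0, -0.5)
    return j_plus / (j_plus + j_minus), j_minus / (j_plus + j_minus)

print(splitting_probs(x0=0.35))
```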
2309.03993
Cold Solar Flares I. Microwave Domain
We identify a set of ~100 "cold" solar flares and perform a statistical analysis of them in the microwave range. Cold flares are characterized by a weak thermal response relative to nonthermal emission. This work is a follow up of a previous statistical study of cold flares, which focused on hard X-ray emission to quantify the flare nonthermal component. Here we focus on the microwave emission. The thermal response is represented by the soft X-ray emission measured by the GOES X-ray sensors. We obtain spectral parameters of the flare gyrosynchrotron emission and investigate patterns of the temporal evolution. The main results of the previous statistical study are confirmed: as compared to a "mean" flare, the cold flares have shorter durations, higher spectral peak frequencies, and harder spectral indices above the spectral peak. Nonetheless, there are some cold flares with moderate and low peak frequencies. In a majority of cold flares, we find evidence suggesting the presence of the Razin effect in the microwave spectra, indicative of rather dense flaring loops. We discuss the results in the context of electron acceleration efficiency.
Alexandra L. Lysenko, Stephen M. White, Dmitry A. Zhdanov, Nataliia S. Meshalkina, Aleksander T. Altyntsev, Galina G. Motorina, Gregory D. Fleishman
2023-09-07T19:55:17Z
http://arxiv.org/abs/2309.03993v1
# Cold Solar Flares I. Microwave Domain

###### Abstract

We identify a set of \(\sim\)100 "cold" solar flares and perform a statistical analysis of them in the microwave range. Cold flares are characterized by a weak thermal response relative to nonthermal emission. This work is a follow up of a previous statistical study of cold flares, which focused on hard X-ray emission to quantify the flare nonthermal component. Here we focus on the microwave emission. The thermal response is represented by the soft X-ray emission measured by the _GOES_ X-ray sensors. We obtain spectral parameters of the flare gyrosynchrotron emission and investigate patterns of the temporal evolution. The main results of the previous statistical study are confirmed: as compared to a "mean" flare, the cold flares have shorter durations, higher spectral peak frequencies, and harder spectral indices above the spectral peak. Nonetheless, there are some cold flares with moderate and low peak frequencies. In a majority of cold flares, we find evidence suggesting the presence of the Razin effect in the microwave spectra, indicative of rather dense flaring loops. We discuss the results in the context of electron acceleration efficiency.

Alexandra L. Lysenko, Stephen M. White, Dmitry A. Zhdanov, Nataliia S. Meshalkina, Aleksander T. Altyntsev, Galina G. Motorina, Gregory D. Fleishman

## 1 Introduction

The magnetic energy that powers a solar flare can be distributed in a number of ways: heating of the surrounding plasma, accelerating charged particles (both electrons and ions), the kinetic energy of a coronal mass ejection, and radiation. The energy allocation between these different components varies dramatically from flare to flare (Emslie et al., 2012), and it is yet unclear what causes the energy partitioning in each case. In purely "thermal" flares, detectable particle acceleration does not occur and almost all released magnetic energy is spent on direct plasma heating (e.g., Gary & Hurford, 1989; Fleishman et al., 2015). For the majority of flares, particle acceleration coexists with direct plasma heating (Veronig et al., 2002). For other flares, particle acceleration strongly dominates over direct heating, and almost all of the plasma thermal response is caused by the energy loss of the accelerated electrons. These flares are called "cold" solar flares and are characterized by a weak thermal response relative to the nonthermal emission. For cold solar flares no significant thermal emission is observed prior to the impulsive phase where nonthermal particles dominate, thus they represent a subclass of "early impulsive flares" (Sui et al., 2006). Cold solar flares are of particular interest. First, they allow experimental exploration of the causes of energy partitioning in a solar flare. Second, in such flares emission from nonthermal electrons can be examined down to low (\(\sim\)10 keV) energies without contamination from stronger thermal emission. Third, cold solar flares allow us to study the thermal response of the plasma to accelerated particles without the admixture of direct heating. Several cold solar flares were reported in previous case studies (White et al., 1992; Bastian et al., 2007; Fleishman et al., 2011; Masuda et al., 2013; Fleishman et al., 2016; Motorina et al., 2020). The energy balance between thermal and nonthermal components calculated for cold flares in Bastian et al.
(2007), Fleishman et al. (2016) and Motorina et al. (2020) confirmed that the energy conveyed by accelerated electrons was sufficient to explain the observed plasma heating. The weak thermal response was attributed to (a) low temperatures due to high plasma density, in the flares reported by Bastian et al. (2007) and Masuda et al. (2013); or (b) low emission measure due to low plasma density (Fleishman et al., 2011) or the small volume of the main flaring loop (Fleishman et al., 2016). In Lysenko et al. (2018) (hereafter L18) a statistical study of cold solar flares in the X-ray and microwave ranges was performed for the first time. Cold flares were selected using hard X-ray (HXR) data from the Konus-_Wind_ experiment (Aptekar et al., 1995), complemented by the microwave data from several available instruments. The reference flare set in the HXR range has been selected from solar flares registered by Konus-_Wind_ and not classified as cold flares. The reference flare set in the microwave range was taken from the statistical study by Nita et al. (2004) based on observations at the Owens Valley Solar Array (OVSA). L18 found that, by comparison with the reference groups, cold flares tend to be characterized by harder spectral indices of the accelerated electron energy spectra. In the microwave range the overall peak frequency distribution of gyrosynchrotron spectra for cold flares was significantly shifted towards higher frequencies as compared to the reference flare set. At the same time, there are cold flares with very low peak frequencies. Compared to the reference groups of flares, cold flares are characterized by shorter duration in both the microwave and HXR ranges. Thus, L18 concluded that a group of cold flares characterized by high peak frequencies is associated with compact loops with strong magnetic fields, while the low-frequency group is produced by larger loops with weak field. It remains a question whether the harder spectra of accelerated electrons are related to the acceleration mechanism involved in cold flares, or whether chromospheric evaporation is reduced due to the penetration of electrons with harder spectra into the deeper layers of the Sun's atmosphere (Fisher et al., 1985; Reep et al., 2015). In the present statistical study we use microwave emission for the selection of cold solar flares instead of HXR emission in order to provide a different perspective on this phenomenon. The aims of the research are to cross-check the results with L18; to explore if the selection criteria for cold solar flares are sensitive to the choice of the nonthermal emission regime, HXR or microwave; which results are resistant to the flare selection criteria and which are not. The present study explores the thermal response of the ambient plasma to accelerated electrons, flare evolution in microwave and X-ray ranges, and draws conclusions about flare morphology and flare properties related to the acceleration mechanism. A practical goal of the research is to extend our list of well-observed cold flares for future case studies. This paper is the first part of the research and is focused on event selection and the analysis of cold flare properties in the microwave domain. In the forthcoming second part we will study X-ray emission of cold solar flares and the relationships between properties observed in the microwave and X-ray domains. 
## 2 Instrumentation ### Total Power Radio Instruments Key radio instruments used in this work are the Nobeyama Radiopolarimeters (NoRP, Torii et al., 1979) located in Japan. NoRP measures intensity and circular polarization at six frequencies (1, 2, 3.75, 9.4, 17, and 35 GHz) with a time resolution of 1 s along with intensity measurements only at 80 GHz. In addition to NoRP, we use observations from several other radio instruments: the US Air Force Radio Solar Telescope Network (RSTN, Guidice et al., 1981), the Solar Radio Spectropolarimeters (SRS, Muratov, 2011), and the Badary Broadband Microwave Spectropolarimeters (BBMS, Zhdanov & Zandanov, 2015). RSTN consists of four stations at Learmonth (Australia), San Vito (Italy), Sagamore Hill (USA) and Palehua (USA), which measure intensity at eight frequencies (245, 410, 610, 1415, 2695, 4995, 8800, and 15400 MHz) with a 1 s time resolution. In this work we use observations by Learmonth and Palehua stations overlapping in time with NoRP. SRS and BBMS spectropolarimeters are located near Irkutsk, Russia and provide integrated flux over the whole solar disk in two circular polarizations. SRS covers the 2-24 GHz frequency range at 16 frequencies with a temporal resolution of 1.6 s. BBMS performs measurements in the range 4-8 GHz at 26 frequencies with resolution of 10 ms. ### GOES data in Soft X-ray Range In the soft X-ray (SXR) range we use the data from the _GOES_ X-ray sensors (XRS) in two broad bands, 1-8 A and 0.5-4 A (White et al., 2005). Spacecraft of NOAA's _GOES_ series have been performing SXR observations since 1974 with temporal resolution varying from 3 s to 1 s. It should be noted that the XRS flux scale was changed with the transition to GOES-16 in 20201, and NOAA are in the process of converting older data to the modern scaling, but most existing flare catalogs use the legacy scaling and that is the scale used for XRS data here. ### Imaging instruments When available, the locations of cold solar flares on the solar disk were determined by microwave images from Nobeyama Radioheliograph (NoRH, Nakajima et al., 1994). In other cases we searched for HXR images from the Ramaty High Energy Solar Spectroscopic Imager (_RHESSI_, Lin et al., 2002). For the cases where neither NoRH nor _RHESSI_ data were available, we used differential images obtained by Atmospheric Imaging Assembly onboard Solar Dynamics Observatory (_SDO_/AIA, Lemen et al., 2012).2 Footnote 2: The _SDO_/AIA images are available on the website [https://www.lmsal.com/solarsoft/latest_events_archive/events_summary/](https://www.lmsal.com/solarsoft/latest_events_archive/events_summary/) or [https://helioviser.ias.u-psud.fr/](https://helioviser.ias.u-psud.fr/). ## 3 Selection of cold solar flares We searched for cold solar flares in the events seen by NoRP from 2010 to 2017 and identified matching observations by _SDO_/AIA and _RHESSI_ for future case studies of the selected flares. For this search we used a new catalog of NoRP events developed for other purposes (a study of the incidence of coherent emission; S. M. White et al., in preparation). This catalog is somewhat larger than the observatory catalog, which stopped updating in 2015 due to resource issues: the new catalog has 633 events in the period 2010-2015 relevant for this study, compared to 195 in the on-line catalog. For comparison between thermal and nonthermal components we used an approach similar to that adopted in L18. 
The nonthermal component was quantified as the background-subtracted peak flux density or time-integrated flux density of gyrosynchrotron emission observed in the microwave range during the impulsive phase. The thermal response to this nonthermal energy input was estimated as the maximum flux increase in the _GOES_ 1–8 Å channel during the flare impulsive phase, \(\Delta GOES\) (Figure 1). If no response was observed, or a high and decreasing background in the _GOES_ 1–8 Å channel was present, \(\Delta GOES\) was estimated as the flux error at the beginning of the impulsive phase in the 1–8 Å channel, i.e., 15 % of the flux (Garcia, 1994). In the cases where the thermal response was very low, i.e., less than the error of the maximum flux, we also estimated upper limits for \(\Delta GOES\) as 15 % of the maximum flux.

Figure 1: Association between thermal and nonthermal flare components. (a) Microwave dynamic spectrum of a flare on 2015 May 8; the colorbar represents the flux density in SFU; time profiles at (b) 3.75 GHz, (c) 9.4 GHz, and (d) 17 GHz; horizontal dashed lines correspond to the preflare background level, dotted lines indicate the postflare background level, dash-dotted lines mark flare peaks at each frequency, semitransparent fill corresponds to time-integrated flux density at each frequency; (e) flux in the _GOES_ 1–8 Å channel, horizontal dashed lines mark the flux increment \(\Delta GOES\) during the impulsive phase at 3.75 GHz (red), 9.4 GHz (blue), 17 GHz (green); vertical dashed lines indicate the beginning and the end of the flare impulsive phase at 3.75 GHz (red), 9.4 GHz (blue), 17 GHz (green).

L18 identified two main groups in the radio properties of cold solar flares: high-frequency flares with a maximum in the radio spectrum at frequencies \(>\)10 GHz, and low-frequency flares with spectral maximum at a few GHz. Along with these main groups, there are cold flares with spectral peaks at moderate frequencies (between \(\sim\)3 and \(\sim\)10 GHz). To account for this spectral diversity we performed a search on three NoRP frequencies: 3.75 GHz, 9.4 GHz and 17 GHz. We did not use 1 GHz and 2 GHz for the search because of the frequent occurrence of bright coherent emission there. At higher frequencies we considered narrow-band events with a short time scale to be caused by coherent emission and excluded them. A bias could well result from the use of multiple frequencies to pick out cold flares without accounting for the location of the radio spectral peak. This is because the microwave flux depends on a high power of the magnetic field strength as well as the nonthermal energy of the radio-emitting electrons, and thus the pure radio flux is, likely, not the best measure of the nonthermal energy: the higher the spectral peak, likely the higher the magnetic field strength and thus the more the radio flux might over-represent the nonthermal energy. On the other hand, regions of strong magnetic field tend to be more compact than regions with weaker field.

Figure 2: Relationship between peak flux density (left) and time-integrated flux density (right) observed by NoRP and the flux increment in the _GOES_ 1–8 Å channel during the flare impulsive phase at 3.75 GHz (top), 9.4 GHz (middle) and 17 GHz (bottom). Solid black lines are regression lines, dashed black lines represent the bounds which distinguish the cold flares. Cold flares are represented as red (3.75 GHz), blue (9.4 GHz) and green (17 GHz) filled circles, other flares are marked as grey circles.
The radio flux is proportional to the source area when optically thick; thus, the radio flux increase due to a magnetic field increase may be partly compensated by a corresponding decrease of the source area. We therefore check _a posteriori_ whether our selection introduces any bias towards events with higher microwave spectral peak frequency. There is an intrinsic uncertainty in determining the end of the impulsive phase in the microwave range because of thermal radio emission at later stages. Often, the flare-associated thermal radio emission decays very slowly and lasts for minutes or tens of minutes after the gyrosynchrotron emission is over, similar to the duration of the SXR emission. To identify and exclude this component we segmented the microwave time profiles into "Bayesian blocks", a standard technique for astronomical light-curve analysis (Scargle et al., 2013). A Bayesian block is a time interval assumed to have "constant" flux density at the selected significance level, with superposed variations that can be regarded as random fluctuations. Here we took the significance level to be equal to \(4\sigma\), where \(\sigma\) was determined for each flare at each of the three frequencies as the standard deviation of the flux density during a 30 s interval with stable emission. We determined the preflare and postflare background levels as the first Bayesian block with duration of more than 30 s before and after the flare peak, respectively. The beginning and the end of the impulsive phase were determined as the times when the flux is at a level 10 % of the peak value above the preflare and postflare background levels, respectively (Figure 1). Time profiles at frequencies with failures, significantly varying background, or a significant contribution from coherent emission were excluded.

Footnote 3: We used our own implementation for Bayesian block segmentation. The source code is available via [https://github.com/dsvinkin/b.blocks](https://github.com/dsvinkin/b.blocks)

Finally we derived power-law relations between the peak flux density and \(\Delta GOES\), and between the time-integrated flux density and \(\Delta GOES\), for more than 500 flares. These data are plotted in Figure 2. Solid lines represent linear regression lines (on a logarithmic scale) for all flares between the peak flux density (Figure 2, left) or the time-integrated flux density (Figure 2, right) at 3.75 GHz (top), 9.4 GHz (middle) and 17 GHz (bottom) and \(\Delta GOES\). Regression lines (solid black) were calculated using the Python scipy.stats.linregress procedure. Dashed lines separate outliers with low thermal response at each of the three frequencies. These outliers were selected as in L18: we built the distribution of distances between each point and the regression line for the specified frequency, with negative distances for points below and positive distances for points above the regression line. The points falling below the 16th percentile of this distribution are identified as weak-thermal-response outliers. In this manner we selected 130 flares which demonstrated a weak thermal response relative to the nonthermal peak or time-integrated emission on at least one of the three frequencies. From this candidate list we excluded flares which are not early impulsive flares, i.e., flares with preheating. We used the formal criterion proposed in Sui et al. (2006): the absence of a flux increase in the _GOES_ XRS sensors earlier than 30 s before the flare impulsive phase.
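A minimal Python sketch of the regression-based outlier selection described above is given below. It uses scipy.stats.linregress, as named in the text, but the arrays `mw_flux` and `d_goes` are placeholder synthetic data, the generating power law is arbitrary, and vertical residuals in log-log space are used as a stand-in for the point-to-line distances; the sketch illustrates the selection logic rather than reproducing the actual pipeline.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical placeholder data: microwave flux (SFU or SFU*s) and the
# GOES 1-8 A flux increment (W/m^2) for ~500 flares at one frequency.
rng = np.random.default_rng(0)
mw_flux = 10 ** rng.uniform(0, 3, 500)
d_goes = 1e-7 * mw_flux ** 0.8 * 10 ** rng.normal(0, 0.4, 500)

x, y = np.log10(mw_flux), np.log10(d_goes)
fit = linregress(x, y)                               # power law in log-log space
resid = y - (fit.intercept + fit.slope * x)          # signed residual to the regression line
threshold = np.percentile(resid, 16)                 # 16th percentile of the residuals
cold_candidates = np.flatnonzero(resid <= threshold) # weak thermal response outliers
print(f"slope = {fit.slope:.2f}, {cold_candidates.size} candidate cold flares")
```

In the paper this cut is applied independently at 3.75, 9.4 and 17 GHz and to both the peak and time-integrated flux densities, and a flare is retained as a candidate if it is an outlier in at least one of these relations; the preheating criterion of Sui et al. (2006) is then applied to the candidates.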
Here we met with a difficulty: almost half of the cold flare candidates occurred during an unstable background in the SXR range, either during the decay of a more powerful flare, or after a series of small brightenings, perhaps preflares. Such preflares are rather common phenomena (see, e.g. Farnik et al., 1996; Farnik & Savy, 1998; Battaglia et al., 2009; Wang et al., 2017); for the majority of flares the preflare sources do not coincide spatially with the main flare phase (Farnik & Savy, 1998) and correspond to a distinct energy release, such as reconnection of a small loop with a larger one (Farnik et al., 1996; Wang et al., 2017). Without spatial information it is difficult to draw a solid conclusion as to whether the activity observed before the flare belongs to the same reconnection act as the impulsive phase and thus may represent preheating. We therefore used the following formal criteria. If the flux in the _GOES_ sensors increases monotonically before the impulsive phase, we considered it to be preheating and excluded such flares. If before the impulsive phase the SXR flux increased and then decreased, we kept such flares as cold solar flare candidates. These will be further discussed in Section 5. These criteria identified 109 early impulsive solar flares without preheating and with weak thermal response. Later we excluded two flares after cross-calibration of NoRP data with BBMS and SRS, which revealed that the flux density in the microwave range for these flares showed significant discrepancies between observatories. Localization of these 107 flares on the solar disk revealed two behind-the-limb flares which were also excluded, since the thermal emission could, in fact, be strong but occulted by the solar limb. The final list contains 105 cold solar flares. Figures and tables with the relationship between nonthermal and thermal flare components for each cold flare are reported at doi:10.5281/zenodo.7775771. Cold flares are plotted in Figure 2 with red, blue and green circles referring to frequencies 3.75 GHz, 9.4 GHz and 17 GHz, respectively, while the remaining flares are shown as gray circles. As we used weak thermal response relative to either peak or time-integrated nonthermal emission, there are cold flares above the dashed lines on all relationships in Figure 2. The cold flare list presented here contains 15 solar flares registered by Konus-_Wind_ in the triggered mode. The cold solar flare list from L18 contains six cold flares which coincide in time with NoRP observations and occur in 2010-2017. Thus, the cold flare selection in L18 was more restrictive and there are only four flares in common between these two lists.

## 4 Data Analysis

### Distribution of cold solar flares on the disk

Cold solar flare locations on the solar disk are plotted in Figure 3, along with their distribution with longitude. Helioprojective coordinates for each cold flare are reported at doi:10.5281/zenodo.7775771. The flare distribution over longitude is fairly uniform taking into account the uncertainties. Kosugi (1985) showed that solar gyrosynchrotron emission at 17 GHz is apparently isotropic, and that the slight bias away from the limb obtained by him is due to the difficulty of near-limb flare identification in \(H_{\alpha}\) images. Thus, we conclude that the cold solar flare longitudinal distribution is similar to that of other flares, and the selection of cold solar flares is not affected by observational selection related to emission directivity.

Figure 3: Cold solar flare locations on the solar disk (top) and distribution by longitude (bottom).
### Spectral fitting of gyrosynchrotron microwave spectra

The gyrosynchrotron spectrum of flare-accelerated electrons depends on the magnetic field strength and direction, the spectrum of nonthermal electrons, and the density and temperature of the thermal plasma; see the online video in Fleishman et al. (2022). It has a bell shape that can be characterized by four free parameters: the peak frequency \(f_{\rm peak}\), the peak flux density \(S_{\rm peak}\), the low-frequency spectral index \(\alpha_{\rm lf}\), and the high-frequency spectral index \(\alpha_{\rm hf}\). This spectral shape can be described by a phenomenological model (Stahli et al., 1989):

\[S(f)=Af^{\alpha}\left[1-e^{-Bf^{-\beta}}\right], \tag{1}\]

Here \(\alpha_{\rm lf}=\alpha\), \(\alpha_{\rm hf}=\alpha-\beta\), and \(f_{\rm peak}\) and \(S_{\rm peak}\) can be calculated from the parameters of the function \(S(f)\). To determine the parameter values at a given time frame we use least-squares spectral fitting. Prior to the spectral fitting, we prepared dynamic spectra combining all available microwave data obtained with the various instruments. We fixed clock errors by selecting the NoRP timing as the reference. Time intervals for the background estimation were defined individually for each instrument, and the background was approximated by a constant or a polynomial. The standard deviation at each frequency was estimated as the flux density deviation from the background level during a background time interval. To reduce random fluctuations of the fitting parameters we used Bayesian block segmentation (see Section 3). The time intervals used for the spectral fitting were combined from the boundaries of the Bayesian blocks at each frequency; thus, fitting was performed on data with variable time resolution. We ignored frequencies with faults, amplitude calibration errors, and frequencies where coherent plasma emission could be present. If there were multiple spectral peaks we analyzed the most intense one. The least-squares technique was applied in Python using the scipy.optimize.minimize function. Wherever possible we fitted the spectra with the phenomenological gyrosynchrotron model (GYR, Equation 1), but for a number of flares the spectral peak was outside the observed frequency range. In these cases we used a power-law model (PL) and determined only lower limits for \(S_{\rm peak}\), lower or upper limits for \(f_{\rm peak}\), and \(\alpha_{\rm lf}\) or \(\alpha_{\rm hf}\) depending on whether \(f_{\rm peak}\) is above or below the observational range, respectively. For two flares we had to freeze \(\alpha_{\rm lf}\) at fixed values in order to achieve a fit. For some flares \(\alpha_{\rm lf}\) or \(\alpha_{\rm hf}\) was poorly defined when using the GYR model due to the small number of observed frequencies below or above the spectral peak. In these cases the appropriate spectral index was defined using the PL model. For four flares both \(\alpha_{\rm lf}\) and \(\alpha_{\rm hf}\) were rather large, such that the GYR model could not determine them correctly, and the PL model was used for the estimation of both spectral indices. We obtained successful fits for 86 flares with the GYR model and for 12 flares with the PL model. For the remaining seven flares spectral fitting failed due to an insufficient number of observed frequencies or large amplitude-calibration discrepancies between different instruments.
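To make the fitting step concrete, the sketch below fits the GYR shape of Equation (1) to a synthetic background-subtracted spectrum with scipy.optimize.minimize, which the text names as the optimizer. The frequency grid, "observed" fluxes, uncertainties and starting values are illustrative assumptions; a production fit would also need parameter bounds, multiple starts, and handling of the PL limiting cases.

```python
import numpy as np
from scipy.optimize import minimize

def gyr(p, f):
    """Phenomenological gyrosynchrotron shape, Eq. (1): S(f) = A f^alpha [1 - exp(-B f^-beta)].
    Frequencies in GHz, flux densities in SFU; p = (logA, alpha, logB, beta)."""
    logA, alpha, logB, beta = p
    return 10.0**logA * f**alpha * (1.0 - np.exp(-(10.0**logB) * f**(-beta)))

# synthetic "observed" spectrum generated from an assumed parameter set
rng = np.random.default_rng(1)
freqs = np.array([2.0, 3.75, 9.4, 17.0, 35.0])                 # GHz
true_p = [np.log10(0.8), 2.5, np.log10(6500.0), 4.0]
flux = gyr(true_p, freqs) * (1.0 + 0.05 * rng.normal(size=freqs.size))
sigma = 0.1 * flux                                             # assumed 10 % uncertainties

def chi2(p):
    return np.sum(((flux - gyr(p, freqs)) / sigma) ** 2)

res = minimize(chi2, x0=[0.0, 2.2, 3.6, 3.7], method="Nelder-Mead")

fgrid = np.geomspace(1.0, 80.0, 800)
model = gyr(res.x, fgrid)
f_peak, s_peak = fgrid[np.argmax(model)], model.max()
alpha_lf, alpha_hf = res.x[1], res.x[1] - res.x[3]             # lf = alpha, hf = alpha - beta
print(f"f_peak = {f_peak:.1f} GHz, S_peak = {s_peak:.0f} SFU, "
      f"alpha_lf = {alpha_lf:+.2f}, alpha_hf = {alpha_hf:+.2f}")
```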
An example of the time evolution of the fitted parameters is plotted in Figure 4. Fit results for individual cold solar flares are plotted at doi:10.5281/zenodo.7775771.

Figure 4: An example of fitting a cold solar flare with the GYR model (equation 1; cf. L18). (a) Microwave dynamic spectrum; the color bar shows the flux density scale in SFU; time evolution of (b) the peak flux density; (c) the low frequency spectral index; (d) the peak frequency; and (e) the high frequency spectral index. Red points mark time intervals included in histograms with fitting parameters.

### Parameter distributions of gyrosynchrotron spectra

Following L18, we compare distributions of the fitted parameters \(S_{\rm peak}\), \(f_{\rm peak}\), \(\alpha_{\rm lf}\) and \(\alpha_{\rm hf}\) obtained for cold solar flares with those from a group of reference flares obtained by Nita et al. (2004) based on OVSA data (Gary & Hurford, 1994). Unlike L18 and the present study, the reference group contains events with both gyrosynchrotron and coherent emission at the lower frequencies. To account for this slight discrepancy we included in the reference distributions only flares with peak frequencies above 2.6 GHz; this frequency was found to be a rough demarcation point between decimetric, often coherent, bursts and centimetric, mainly gyrosynchrotron, bursts. To account for the significant time evolution of the spectral parameters which can occur during microwave bursts (see, e.g., Melnikov et al., 2008; Fleishman et al., 2016), we used the same approach as in L18: the histograms of parameter distributions include values obtained for five time frames during the main temporal peak. These five time frames are selected at the beginning and the end of the burst, at the peak maximum, and in the middle of the rising and declining phases. The beginning and the end of the burst were defined in the same way as in Nita et al. (2004): as the beginning and the end of the time interval during which the flux density at the peak frequency of the event is above 80 % of the corresponding peak flux. The parameter values included in the histograms are marked in Figure 4 with red points.

Distributions of the peak flux density \(S_{\rm peak}\), peak frequency \(f_{\rm peak}\), low-frequency spectral index \(\alpha_{\rm lf}\) and high-frequency spectral index \(\alpha_{\rm hf}\) are presented in Figure 5, along with the results obtained for cold solar flares in L18 and the results for the reference group of solar flares taken from Nita et al. (2004). The peak frequency distribution for cold solar flares obtained in this work is shifted to higher frequencies as compared to the distribution of the reference flare set, but this shift is not as large as the one obtained in L18. This could be evidence that our multi-frequency cold solar flare selection process does not introduce additional bias towards events with high spectral peak frequency compared with the L18 selection based on the HXR data (see Section 3).

Figure 5: Distributions of the microwave spectral parameters for cold solar flares in the present study (blue hatching), cold solar flares from L18 (light blue), and the reference flare set (gray). Inverse hatching denotes cold flares from this work fitted with the PL model. Top left: peak frequency distribution; top right: peak flux density distribution; bottom: distribution of spectral indices in the low frequency range (right) and in the high frequency range (left).

As in L18, there is a group of cold solar flares with low peak frequencies (\(f_{\rm peak}<\)3 GHz), i.e.,
14 flares out of 98 flares with successful fits; 37 flares are characterized by moderate values of \(f_{\rm peak}\) (3 GHz\(<f_{\rm peak}<\)10 GHz); while the most numerous group among the cold solar flares, 47 out of 98, are high-frequency flares with \(f_{\rm peak}>\)10 GHz. Higher values of the peak frequency indicate a stronger magnetic field in the cold solar flares compared to the flares from the reference group (Fleishman et al., 2020). The distribution of the peak flux density shows that the cold solar flares studied in this work are more intense than those studied in L18 and those from the reference group. This can be attributed to harder electron energy spectra and, as a consequence, higher effective energies of the radiating nonthermal electrons for cold solar flares, as well as to a higher magnetic field. The low-frequency spectral index could be determined for 96 cold solar flares out of the 98 flares with successful fits. The cold solar flares studied in this work are characterized by higher values of \(\alpha_{\rm lf}\) than the cold solar flares from L18 and the flares from the reference set. The magnitude of \(\alpha_{\rm lf}\) is related to the source morphology (Fleishman et al., 2018): lower values of \(\alpha_{\rm lf}\) can be explained by source inhomogeneity, while larger values of \(\alpha_{\rm lf}\) can be related to, e.g., Razin suppression (Razin, 1960a), free-free absorption, or gyrosynchrotron self-absorption along the line of sight (Bastian et al., 1998); for further details see Section 4.5. The high-frequency spectral indices \(\alpha_{\rm hf}\) for cold solar flares both from this work and from L18 are smaller in absolute value than those for the reference flare set obtained by Nita et al. (2004). There could be several reasons for this, e.g., harder energy spectra of the accelerated electrons (Fleishman et al., 2020) or the presence of the Razin effect, which could flatten \(\alpha_{\rm hf}\) (Razin, 1960b; Melnikov et al., 2008). It should be noted that the percentage of flares with very flat spectra (\(\alpha_{\rm hf}>\)-0.5), presumably related to free-free emission of thermal electrons (White et al., 1992), is approximately the same for the cold solar flares from this work, from L18, and for the reference flare set.

### Timescales in the microwave range

Cold solar flare timescales in the microwave range were estimated as the time interval during which the flux density at the peak frequency exceeds 80 % of the peak flux (see Section 4.3). Such an approach allows a direct comparison of the present results with the results obtained in L18 and in Nita et al. (2004), but it can only be applied to the flares with successful spectral fits. For the flares fitted by the PL model a lower limit of the timescale is defined. Note that the timescale defined in this way is significantly shorter than the impulsive phase duration defined from the light curves (Section 3). The peak timescales for individual flares are reported at doi:10.5281/zenodo.7775771, and their distribution, along with the distributions for the cold solar flares from L18 and for the reference flare set from Nita et al. (2004), is presented in Figure 6. Cold solar flares from both this work and L18 are generally significantly shorter than the flares from the reference group. This could imply shorter flare loops for the cold solar flares compared to the flares from the reference group.

Figure 6: Distribution of the timescales in the microwave range for cold solar flares from the present study (blue hatching), from L18 (light blue), and for the reference flare group (gray). Inverse hatching denotes cold flares from this work fitted by the PL model.
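The timescale definition used above reduces to a simple thresholding of the background-subtracted light curve at the fitted peak frequency. A minimal sketch, with hypothetical input arrays and a synthetic Gaussian burst as the usage example, is:

```python
import numpy as np

def peak_timescale(t, flux, frac=0.8):
    """Time interval over which the background-subtracted flux density at the
    spectral peak frequency stays above `frac` (80 %) of its maximum, as used
    above for the microwave timescale.  `t` [s] and `flux` [SFU] are
    hypothetical input arrays."""
    above = np.flatnonzero(flux >= frac * flux.max())
    return t[above[-1]] - t[above[0]]

# usage with a synthetic Gaussian burst (5 s standard deviation)
t = np.arange(0.0, 60.0, 1.0)
burst = 100.0 * np.exp(-0.5 * ((t - 30.0) / 5.0) ** 2)
print(f"timescale ~ {peak_timescale(t, burst):.0f} s")
```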
### Spectral evolution in the microwave range

Evolution of the microwave spectral parameters is linked with the evolution of the flare morphology and of the nonthermal electron distribution, and thus can help to better understand dynamic processes in solar flares (e.g. Bastian et al., 2007; Melnikov et al., 2008; Fleishman et al., 2016). The peak frequency evolution is available for 86 out of 98 flares via successful fits with the GYR model (see Section 4.2). We consider two aspects of the \(f_{\rm peak}\) variations: the correlation with the peak flux density, and the overall change in peak frequency over the course of the flare.

Figure 7: Examples of microwave spectral parameter evolution. (a) Correlation between peak frequency \(f_{\rm peak}\) and peak flux density \(S_{\rm peak}\), “C+”; (b) anticorrelation between \(f_{\rm peak}\) and \(S_{\rm peak}\), “C–”; (c) increase of \(f_{\rm peak}\) during the flare, type “F+”; (d) decrease of \(f_{\rm peak}\) during the flare, “F–”; (e) multiple increases/decreases of \(f_{\rm peak}\) during the flare main peak, “F?”; (f) soft-hard-soft evolution of the high frequency index \(\alpha_{\rm hf}\), SHS; (g) hard-soft-hard evolution of \(\alpha_{\rm hf}\), HSH; (h) soft-hard-harder evolution of \(\alpha_{\rm hf}\), SHH.

Based on the correlations between \(f_{\rm peak}\) and \(S_{\rm peak}\), the flares were divided into three groups. The first group, “C+”, demonstrates a high correlation (correlation coefficient \(C>0.5\)) and contains 34 flares; an example of such a flare is presented in Figure 7(a). In the second group, “C–”, a high anticorrelation between \(f_{\rm peak}\) and \(S_{\rm peak}\) is observed (Figure 7(b)). The soft-hard-soft (SHS) evolution of \(\alpha_{\rm hf}\) (Figure 7(f)) could be related to a change of the electron pitch-angle distribution from isotropic to beam-like as seen from a transverse direction during the flare peak (Fleishman & Melnikov, 2003). Another flare group demonstrates soft-hard-harder (SHH) spectral evolution, and contains 10 flares (Figure 7(h)). This evolution type is rather common for both HXR and microwave emission (Asai et al., 2013) and could be explained by the capture of accelerated electrons in magnetic traps and the preferential scattering of low-energy electrons into the loss cone due to Coulomb collisions, leading to their precipitation into the chromosphere. This results in a steady hardening of the energy spectrum of the electrons remaining in the trap (Cliver et al., 1986). For the remaining flares we could not draw solid conclusions about the evolution of \(\alpha_{\rm hf}\), due to the insufficient number of frequencies above the spectral peak or chaotic changes in \(\alpha_{\rm hf}\) making it difficult to determine a clear evolution pattern. The patterns of parameter temporal evolution, along with the morphology types defined by \(\alpha_{\rm lf}\), for individual cold flares are reported at doi:10.5281/zenodo.7775771.

## 5 Summary and Discussion

In the present study we have analyzed the microwave properties of solar flares with an abnormally weak thermal response relative to the nonthermal emission--cold solar flares.
The selection criteria for cold solar flares were similar to those used in L18: (i) a low flux increase in the _GOES_ 1–8 Å channel during the flare impulsive phase relative to the peak flux or time-integrated flux of the nonthermal emission; and (ii) the absence of preheating before the flare impulsive phase. In this study we used microwave emission at 3.75 GHz, 9.4 GHz and 17 GHz for the estimation of the nonthermal flare component, rather than the HXR emission used in L18. Our search revealed 105 cold solar flares out of a subset of \(>\)500 solar radio bursts recorded by NoRP in the period 2010-2017; hence \(\sim\)20 % of the flares qualified as "cold." In L18, only 27 cold solar flares were found from 1994 to 2017 among \(\sim\)1000 solar flares registered by Konus-_Wind_ in the triggered mode; thus the criteria used in the current work are less restrictive. This is partly because Konus-_Wind_ detects only rather hard and spiky flares in the triggered mode, while the NoRP data are less exposed to these selection effects. Thus, as compared to L18, the present flare list improves the statistical sample for the study. Despite the differences in the choice of the nonthermal diagnostic, the main results of L18 for the parameter distributions of the gyrosynchrotron spectra were confirmed here. As compared to the reference flare set taken from Nita et al. (2004), the cold solar flares have the following features:

1. Cold flares both from L18 and the present work are characterized by higher values of the gyrosynchrotron peak frequency \(f_{\rm peak}\): in the present study \(\sim\)50 % of cold flares have \(f_{\rm peak}>\)10 GHz, while statistical studies of microwave bursts showed that for the majority of bursts \(f_{\rm peak}\) lies in the range 4-10 GHz (Guidice & Castelli, 1975; Stahli et al., 1989; Nita et al., 2004).
2. Both studies revealed a group of flares with low peak frequencies (\(f_{\rm peak}<\)3 GHz); in the present study this group contains \(\sim\)14 % of the cold flares.
3. The distribution of the low-frequency indices \(\alpha_{\rm lf}\) for cold solar flares is shifted towards higher values as compared to the corresponding distribution for the flares of the reference group, although this shift is much more pronounced in the present study than in L18.
4. Cold solar flares from the present work and L18 are characterized by harder high-frequency spectral indices \(\alpha_{\rm hf}\) compared with the reference group.
5. Cold solar flares in the present study demonstrate higher microwave intensities compared to the cold flares from L18 and the reference set.
6. Cold flares from both this work and L18 have significantly shorter timescales in the microwave range than the flares from the reference set.

In the present study, in addition to the distributions of the gyrosynchrotron spectral parameters, we examined their temporal evolution and found several evolutionary patterns allowing us to draw additional conclusions about flare morphology and flare scenarios. If the spectral peak is formed by gyrosynchrotron self-absorption, the peak frequency \(f_{\rm peak}\) increases together with the peak flux density \(S_{\rm peak}\) during the flare maximum (Melnikov et al., 2008; Fleishman et al., 2020), as the number of nonthermal electrons increases. However, for the majority of cold solar flares (\(\sim\)65 %), \(f_{\rm peak}\) and \(S_{\rm peak}\) do not correlate. This fact, along with the higher values of \(\alpha_{\rm lf}\), speaks in favor of the Razin effect playing a role (Melnikov et al., 2008).
For \(\sim\)20 % of the cold solar flares the influence of the Razin effect is so strong that the \(f_{\rm peak}\) and \(S_{\rm peak}\) even anticorrelate: as the Razin cut-off frequency is inversely proportional to the magnetic field strength, such anticorrelation could be caused by an increase in field strength during the flare peak (for alternative explanations, see Section 4.5). In many cases (one third of all cold solar flares) the peak frequency decreased during the course of the flare, which was also observed in the cold flare described by Bastian et al. (2007). Presumably, in such flares accelerated electrons lose their energy in dense flaring loops before they reach the chromosphere. The heating of the loop by nonthermal electrons reduces the free-free opacity, which may reduce the peak frequency. For the group of flares with a spectral peak formed by gyrosynchrotron self-absorption, high values of \(f_{\mathrm{peak}}\) imply a strong magnetic field, and low values of \(f_{\mathrm{peak}}\) are associated with a weak field. The presence of the Razin effect indicates high background density in the flaring loops; although the Razin cutoff frequency is inversely proportional to magnetic field, the field needs to be strong enough to produce intense microwave emission. The shorter durations of cold solar flares relative to the flares of the reference set could be explained by shorter flaring loops. Thus, many of the cold solar flares are inferred to be confined flares with high magnetic field strength and high density of the loop plasma. For such dense flares, chromospheric evaporation could be suppressed as most accelerated electrons lose their energy in the flaring loops and do not reach the chromosphere. Such dense loops could be leftovers of some earlier activity (see Section 3), such as an earlier flare, which could have initiated significant chromospheric evaporation prior to the impulsive phase of the cold solar flare (Battaglia et al., 2009). Between cold solar flares there is a group of flares with a spectral peak formed by gyrosynchrotron self-absorption and low peak frequencies and, hence, weak field. Such flares could be similar to the tenuous cold flare described by Fleishman et al. (2011), where the thermal response is reduced due to low emission measure. Recent simulations (Arnold et al., 2021; Sioulas et al., 2022) show that the key parameter defining the efficiency of electron acceleration is the ratio between reconnecting and non-reconnecting magnetic field, or "guide" field. With the increase of the reconnecting field over the guide field, and, thus, the increase of the free magnetic energy, acceleration efficiency goes up. Higher values of reconnecting magnetic field relative to the guide field give harder power-law indices for the electron distribution (Arnold et al., 2021), which can be related to the harder high-frequency indices observed for cold solar flares compared to the flares of the reference group. Arnold et al. (2021) also show that system size has little influence on the nonthermal electron production, thus cold solar flares could be examples of high acceleration efficiency, similar to that observed in the large X8.2 class solar flare of 10 September 2017 (Fleishman et al., 2022). In this flare almost all electrons in the cusp region were accelerated to form a power-law distribution, while the number density of the thermal electrons became undetectable. 
While the source of electron nonthermal energy gain in a flare is the magnetic field, the only driving force capable of accelerating charged particles is the electric field force. Data-constrained 3D modeling of a multi-loop C-class solar flare evolution (Fleishman et al., 2023; ApJ in press) revealed strong dependence of the acceleration efficiency on the Dreicer field value in the given flux tube. Specifically, the acceleration was highly efficient in a hot tenuous loop with a low value of the Dreicer field, while it was much weaker in a cooler and denser loop with a larger Dreicer field. This implies that the acceleration efficiency can be controlled by a balance between the effective electric field (which can be either turbulent, or regular, or a combination of both) responsible for plasma energization and the value of the Dreicer field that controls the fraction of the runaway particles vs the Maxwellian core particles. In this context high acceleration efficiency in the cold solar flares with the apparent lack of the direct heating can indicate that the effective accelerating electric field is comparable to or larger than the Dreicer field. The Dreicer field depends on the plasma density and temperature; thus, it is different for tenuous and dense flares and may be different for flares with low and high spectral peak frequencies. We expect that the study of X-ray emission which we plan to perform in a follow-up paper will allow us to better constrain the causes of high acceleration efficiency and low direct heating in the cold solar flares. The cause of the hard-soft-hard (HSH) evolution pattern of the high-frequency spectral index \(\alpha_{\mathrm{hf}}\), which was found to occur in \(\sim\)40 % of cold flares (see Section 4.5), is unclear. To draw reliable conclusions about the reasons responsible for the evolution of \(\alpha_{\mathrm{hf}}\) one would have to perform thorough case studies of well-observed flares. These studies seem to be promising as evolution patterns of \(\alpha_{\mathrm{hf}}\) might not be features solely distinctive of cold solar flares, but may be common to other solar flares accompanied by electron acceleration. In the forthcoming part of this study we will investigate energy partitioning between thermal and nonthermal flare components based on X-ray observations to verify if there was truly no (or very little) direct heating for the flares from the present list. Analysis of hard X-ray data will allow recovery of the spectrum of accelerated electrons more directly and, thus, help disentangling acceleration, propagation, and other effects on the electron spectra and the high-frequency spectral index of the radio emission. ## 6 Conclusions This study identifies and statistically analyzes about 100 "cold" solar flares, with weak thermal response in the soft X-ray range relative to the prominent nonthermal emission in the microwave range. The statistical analysis of these cold flares in the microwave range confirms the conclusions obtained in the previous study by Lysenko et al. (2018): cold solar flares are characterized by higher peak frequencies of gyrosynchrotron emission, harder spectral indices in the high frequency range, and shorter durations than the flares of the reference group taken from the statistical study by Nita et al. (2004). This study reveals that, for a majority of cold flares, the gyrosynchrotron spectrum is influenced by the Razin effect rather than gyrosynchrotron self-absorption. 
We propose that the majority of cold flares are confined flares and are associated with short loops with strong magnetic fields and dense ambient plasma. However, cold flares do not represent a homogeneous group, and there are cold flares with low or moderate values of peak frequencies and long duration. We suggest that in cold solar flares the direct plasma heating is negligible and almost all the heating is driven by the Coulomb energy loss of the accelerated electrons. A better understanding of why the thermal emission is weak in cold solar flares requires an analysis of the X-ray emission, which will be performed in a subsequent study. We thank Dr. Dmitry Svinkin for useful advice concerning flare selection and data analysis. The work of A.L.L. was carried out in the framework of the basic funding program of the Ioffe Institute No. 0040-2019-0025. G.D.F. was supported by NSF grants AGS-2121632, and AST-2206424, and NASA grants 80NSSC19K0068, and 80NSSC23K0090, to New Jersey Institute of Technology. D.A.Z., A.T.A. and N.S.M. acknowledges the support of the Ministry of Science and Higher Education of the Russian Federation. The BBMS data were obtained using the Unique Research Facility Siberian Solar Radio Telescope4. We are grateful to the teams of the RSTN, Nobeyama Radio Observatory, SDO and RHESSI, who have provided open access to their data. G.G.M. was supported by grant 21-16508J of the Grant Agency of the Czech Republic, the project RVO:67985815, and the project LM2018106 of the Ministry of Education, Youth and Sports of the Czech Republic. S.W. acknowledges support from AFOSR under grant 23RVCOR003. Footnote 4: [http://ckp-angara.iszf.irk.ru/index_en.html](http://ckp-angara.iszf.irk.ru/index_en.html) _Facilities:_ Nobeyama Radio Observatory (NoRP and NoRH), RSTN, SDO/AIA
2309.13558
Simulation of optoelectronic oscillator injection locking, pulling and spiking phenomena
Complex envelope and reduced phase simulation models describing the dynamical behavior of an optoelectronic oscillator (OEO) under injection by an external source are described. The models are built on the foundations of a previously reported delay integral differential equation (DDE) theory of injection locking of time delay oscillators (TDO) such as the OEO. The DDE formulation is particularly amenable to high precision simulation using the Simulink block diagram environment. The correspondence between the blocks and the oscillator components offers intuition and considerable freedom to explore different circuit architectures and design variations with minimal coding effort. The simulations facilitate the study of the profound effect the multimode nature of a TDO has on its dynamical behavior. The reduced phase models that make use of the Leeson approximation are generally successful in reproducing the results of complex envelope models for established oscillations except for spiking phenomena for which the Leeson approximation fails. Simulation results demonstrating phenomena not captured by classical injection theory are presented, including multimode oscillation, the appearance of sidemodes in the RF and phase noise spectrum, and persistent spike trains redolent of recent experimental observations of 2pi phase pulse trains in a broadband OEO under injection.
A. Banerjee, T. J. Hall
2023-09-24T06:13:24Z
http://arxiv.org/abs/2309.13558v1
# Simulation of optoelectronic oscillator injection locking, pulling & spiking phenomena ###### Abstract Complex envelope and reduced phase simulation models describing the dynamical behaviour of an optoelectronic oscillator (OEO) under injection by an external source are described. The models build on the foundations of a previously reported delay integral / differential equation (DDE) theory of injection locking of time delay oscillators (TDO) such as the OEO. The DDE formulation is particularly amenable to high precision simulation using the Simulink(tm) block diagram environment. The correspondence between the blocks and the oscillator components offers intuition and considerable freedom to explore different circuit architectures and design variations with minimal coding effort. The simulations facilitate the study of the profound effect the multimode nature of a TDO has on its dynamical behavior. The reduced phase models that make use of the Leeson approximation are generally successful in reproducing the results of complex envelope models for established oscillations except for spiking phenomena for which the Leeson approximation fails. Simulation results demonstrating phenomena not captured by classical injection theory are presented, including multimode oscillation, the appearance of sidemodes in the RF and phase noise spectrum, and persistent spike trains redolent of recent experimental observations of \(2\pi\) phase pulse trains in a broadband OEO under injection. ## I Introduction Clock jitter or, equivalently, oscillator phase / frequency fluctuation limits the precision of time and frequency measurements, which is of major consequence in applications such as optical & wireless communications, high speed digital electronics, radar & lidar, astronomy and terahertz technology. RF photonic technology offers the potential to realise oscillators with phase noise levels orders of magnitude lower than conventional sources at frequencies from the microwave to the terahertz region of the spectrum. Among a variety of means using photonic technology to generate pristine RF carriers, the optoelectronic oscillator (OEO) is the most suited to practical deployment. Reference [1] provides a recent review of the extensive literature that has arisen following the introduction of the OEO [2]. A laser and OEO are examples of a time delay oscillator (TDO). The laser generates optical carriers using a resonant cavity containing the sustaining amplifier. The OEO generates microwave carriers using an RF photonic link consisting of laser; optical intensity modulator; optical fibre; photo-receiver; RF amplifier and filter, which drives the modulator, closing the loop and sustaining oscillation (see Fig. 1). The low loss of optical fibre (~0.2 dB/km) permits delay line lengths of ~ 5 km offering, for example, OEO phase noise performance of -145 dBc/Hz at 10 kHz offset from a 10 GHz carrier [3]. However, the frequency interval between adjacent oscillation modes decreases in inverse proportion to the delay (40 kHz for 5 km), and a practical RF resonator cannot provide sufficient selectivity to suppress the multitude of sidemode resonances. Multimode operation is an artefact of an OEO with profound consequences on its behaviour [3, 4]. There is a conceptual distinction between a single-mode or multimode oscillation and a single-mode or multimode oscillator. 
In applications as a low noise oscillator, evolution to a single-mode oscillation state is a desirable property but it does not change the nature of the oscillator from a multimode to a single-mode oscillator. Modes correspond to the attractive states (limit cycles) of the equations of motion that govern the oscillator. Whether the evolution of the oscillation reaches an attractive state, and, if so, which of the multitude of attractive states becomes the final state of oscillation depends upon both the equations of motion and the initial condition. Even when the initial condition corresponds to a single-mode oscillation state the presence of multiple attractors is manifest by any perturbation of a multimode oscillator including: fluctuations & noise, which excite sidemodes and give rise to spurious resonances in the phase noise spectrum; intra-loop phase modulation, especially at frequencies resonant with the intermodal frequency that gives rise to giant phase modulation, frequency combs & mode-locking; tuning scans and transients that, if rapid, excite sidemodes or, if slow, may cause mode-hops. In addition, giant phase modulation is a source of a modulational instability in a phase lock loop (PLL) controlled OEO [3]. A new delay integral/differential equation (DDE) formulation of an OEO under external injection is presented in a prior work [4] that removes the implicit assumption of a single-mode oscillator under weak injection made in previous treatments that apply classical injection locking theory to OEOs. The emphasis of the prior work is on the theory and the correct prediction of the experimentally observed phase noise spectrum; simulation results were precluded by space considerations. This work corrects that omission. The DDE formulation is particularly amenable to high precision simulation using the Simulink(tm) block diagram environment. The correspondence between blocks and the component of the oscillator offers intuition into the behaviour of the oscillator and considerable freedom to explore different circuit architectures and design variations with minimal coding effort. The simulations are valuable in illustrating the dependence on system parameters of behaviour that can be rich with phenomena that defy analytic solution. Levy et al [5] presents a ground-breaking numerical simulation model based on a multiple timescale approach to study mode competition during the OEO initial start-up regime as well as the amplitude and phase of the established oscillation. This model is further extended in [6] to accurately determine the phase noise in single loop and dual-loop OEOs and its dependence on parameters. Mikitchuk et al [7] reports a numerical nonlinear time varying model of a delay line OEO, which can simulate the stationary behavior and dynamical instabilities in single-loop OEO and multiloop OEO. More recently, Yuan et. al. [8] reports a time domain convolution simulation model to study the real time dynamic process in the initial and established oscillation regimes of an OEO under external injection. The convolution describes the time domain response in terms of the impulse response of the RF bandpass filter (BPF) and the method of steps for solving a DDE is used to calculate the current segment of data over the delay interval from the previous segment with appropriate account for the convolution extending across segments. in these works, the nonlinear transfer function of a Mach-Zehnder modulator (MZM) is assumed to be responsible for the gain control mechanism. 
This introduces a first-order Bessel function dependence of the saturated gain, the non-monotonicity of which is responsible for an MZM overdrive envelope instability [5, 9]. Complex dynamical instabilities are of theoretical interest and have been thoroughly investigated by Chembo et al [9, 10, 11, 12]. However, in low-noise oscillator applications, it is preferable that the MZM is driven between adjacent minimum and maximum transmission points and no further, to avoid this instability. That is, RF amplifier limiting should be the principal source of gain saturation and should be modelled appropriately (see Section II.B(i)). OEO simulation tools are generally implemented using custom code. Belkin et al [13] is a first attempt to use a commercial photonic circuit simulation tool (VPI Transmission Maker(tm)), which is advantageous in offering a comprehensive library of optical and some electronic component models. However, the simulation must adequately sample the RF modulation of the optical carrier, which severely restricts the fibre delay line length that can be modelled. The Simulink(tm) models described herein treat the RF-photonic link as an RF component (a transport delay), so adequate sampling is required only of the fluctuations of the complex envelope, reducing the number of samples held within the delay line by several orders of magnitude. The paper describes for the first time an implementation of a comprehensive simulation model of an OEO under injection by an external source using the Simulink(tm) block diagram environment. Simulation results reveal phenomena not captured by classical single-mode injection locking theory. Specifically, spiking phenomena are observed that are related to a serrodyne mode injection-pulling solution of an idealised reduced phase-only model. The formation of a periodic spike train necessarily requires the _coherent_ superposition of a multitude of modes and, as such, spiking is a manifestation of mode-locking. TDOs such as lasers and OEOs support a multitude of modes and, consequently, spiking is to be expected. The paper reports the first observation in simulation of mode-locking by injection-induced intra-loop phase modulation, redolent of the recent experimental observations of \(2\pi\) phase pulse trains generated by a broadband OEO under injection reported by Diakonov et al [14]. Spiking is of interest for the emulation of excitability in neuroscience [15] and its application to high-speed neuromorphic computing [16]. Mode-locked laser theory and practice is well advanced [17] and analogous methods of mode-locking an OEO are being explored for application to pulsed radar systems [18, 19, 20, 21, 22]. The structure of the paper is as follows. In Section II a brief review is presented of the analytical model of a free OEO and an injection-driven OEO in the complex envelope representation, including an explanation of how amplifier gain saturation by limiting is modelled. In addition, the reduction to a phase-only model of an OEO under injection and its phase-locked solutions are described. A special 'serrodyne' quasi-locked state unique to a broadband TDO is introduced for the first time. Section III describes the detailed implementation of the envelope and phase models in the Simulink(tm) block diagram environment. Section IV then presents representative results generated by the models that illustrate the multimode nature of the oscillator and its profound effect on its time domain behaviour and spectral features.
Section V concludes the paper with a summary of the principal findings.

## II Analytical model

The OEO under study, depicted in Fig. 1, consists of an RF photonic link and an RF amplifier chain configured into a loop.

Figure 1: Schematic of a single-loop OEO. LD: semiconductor laser diode; MZM: Mach-Zehnder modulator; SMF: single-mode fibre; OFDL: optical fibre delay line; PD: photodetector; EA: electronic amplifier (RF amplifier); EBPF: electronic bandpass filter (BPF); EC: electronic coupler; ESA: electronic spectrum analyser.

The laser followed by the MZM converts the RF signal into an intensity-modulated optical carrier. A single-mode optical fibre coil with delay \(\tau_{D}\) is used as the optical fibre delay line (OFDL). The RF modulation is recovered by the photodetector (PD). The combination of a laser, an MZM, an OFDL and a PD is functionally equivalent to an RF-photonic link, which is used to provide the time delay of the oscillator. The sustaining amplifier of the oscillator is provided by the RF chain. The RF chain consists of an RF amplifier followed by an RF BPF, also referred to as an RF resonator, which has an on-resonance group delay \(\tau_{R}\). The round-trip time is \(\tau_{G}=\tau_{D}+\tau_{R}\) and normally \(\tau_{D}\gg\tau_{R}\). The OEO achieves low phase noise through the large delay enabled by the exceptionally low transmission loss of optical fibre. However, the frequency interval between potential oscillation modes is inversely proportional to the delay. Typically, an RF BPF is concatenated with the delay line to select the desired oscillation mode. At microwave frequencies the passband of the RF filter is broad relative to the frequency interval between potential oscillation modes, and spurious resonances due to sidemodes falling within the RF filter passband are clearly seen in both the RF and phase noise spectral densities. The RF BPF plays an important role in promoting single-mode oscillation. Initially the oscillation builds up from an initial transient or noise and the sustaining amplifier operates in its linear regime. A potential oscillation mode grows at a rate proportional to the net gain it experiences on each round-trip. A suitably tuned RF BPF with a bell-shaped transmission function magnitude favours the growth of one mode slightly above all others. The gain control mechanism, usually saturation of the sustaining amplifier, reduces the round-trip gain as the magnitude of the oscillation grows until the favoured mode is sustained by net unit gain and neighbouring modes experience net loss and slowly decay. Once the oscillation is established, the gain control mechanism holds the magnitude of the oscillation substantially constant, but the phase of the oscillation continues to be modulated by the residual sidemode spectral components. The RF BPF then acts as a phase filter suppressing the phase modulation and thereby continuing the decay of the sidemode spectral components. In this regime, the step response of the oscillation phase has a staircase waveform. The RF filter smooths the top step of the phase staircase each round-trip and the staircase evolves ultimately into a linear ramp, i.e., a step change in intra-loop phase ultimately results in a change in frequency of a pristine oscillation. Perturbations due to a variety of fluctuations within the oscillator maintain a residual level of sidemode spectral components that otherwise would decay asymptotically to zero.
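As a quick numerical illustration of the mode spacing discussed above, the short Python sketch below evaluates the delay and free-spectral range for the 5 km delay line quoted in the introduction; the fibre group index is an assumed typical value, and the 36 MHz passband corresponds to one of the RF BPF bandwidths mentioned later in the paper.

```python
# Back-of-the-envelope mode spacing for a 5 km optical fibre delay line.
c = 2.998e8          # speed of light in vacuum [m/s]
n_g = 1.468          # assumed group index of standard single-mode fibre
L = 5e3              # delay-line length [m]

tau_D = n_g * L / c                  # fibre delay, ~24.5 us
fsr = 1.0 / tau_D                    # mode spacing, ~41 kHz
bpf_bw = 36e6                        # example RF BPF bandwidth [Hz]
print(f"tau_D = {tau_D*1e6:.1f} us, FSR = {fsr/1e3:.1f} kHz, "
      f"modes inside the BPF passband ~ {bpf_bw/fsr:.0f}")
```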
### _Free running single-loop OEO_

The dynamical equation of a free running single-loop OEO is given by:

\[v=e^{i\phi}K\left(h\otimes\left(D_{\tau_{D}}v\right)\right) \tag{1}\]

where \(v\) is the complex envelope of the oscillation at the output of the RF amplifier following the RF BPF and \(\phi\) represents the net phase contributed by an intra-loop tuning element and other components. The action of the RF BPF on the complex envelope is represented by a convolution with the impulse response \(h\) of the equivalent baseband low pass filter (LPF) [23]. The sustaining amplifier is represented by the operator \(K\), which is characterized by a real-valued large-signal gain \(\kappa\) that decreases from the linear gain in such a way that the magnitude of the complex envelope of the amplifier output is held substantially constant when the amplifier operates in saturation; a more precise characterisation is given in Section II.B(i). The operator \(D_{\tau_{D}}\) represents a delay:

\[\left(D_{\tau_{D}}v\right)(t)=v(t-\tau_{D}) \tag{2}\]

Eq. (1) admits a family of freely oscillating mode solutions:

\[v(t)=a_{p}\exp(s_{p}t)\quad;\quad s_{p}=\sigma_{p}+i\omega_{p} \tag{3}\]

subject to the complex Barkhausen condition:

\[\kappa H(s_{p})e^{i\phi}\exp(-s_{p}\tau_{D})=1\quad\Longrightarrow\quad\begin{cases}\kappa|H(s_{p})|\exp(-\sigma_{p}\tau_{D})=1\\ \omega_{p}\tau_{D}=2p\pi+\phi+\arg\left(H(s_{p})\right)\quad p\in\mathbb{Z}\end{cases} \tag{4}\]

The magnitude of a single freely oscillating mode close to the passband centre frequency of the RF BPF remains constant as time progresses if \(\kappa=1\), which will be maintained by the gain control mechanism. The integer \(p\) represents the additional number of cycles contained within the delay line of mode \(p\) relative to the reference mode \(p=0\). The frequency interval between allowed freely oscillating modes defines the free-spectral range (FSR), i.e., the change in frequency for a unit increment of the number of cycles within the loop. The oscillator is tuned over an FSR for \(\phi\in(-\pi,\pi]\).

### _Single-loop OEO under RF signal injection_

**(i) Complex envelope model**

The addition of a complex envelope \(w\) to the right-hand side of (1) introduces a forcing term representing the oscillator under injection. The resulting complex envelope model of injection locking of a time delay oscillator is conveniently written in a form:

\[\begin{split}u=v+w\\ v=e^{i\phi}K\left(h\otimes\left(D_{\tau_{D}}u\right)\right)\end{split} \tag{5}\]

that highlights the role of the linear superposition \(u\) of the complex envelopes representing the oscillation \(v\) and the injected carrier \(w\). To describe gain saturation by limiting of an RF amplifier driving an MZM, it is assumed that each nonlinear stage is followed by a BPF that dissipates all intermodulation products but has a sufficiently broad passband relative to the signal bandwidth that the phase of the signal is passed undisturbed. Consequently, with an appropriate choice of time co-ordinate, the signal \(x\) may be represented locally (on the scale of a period) as a pure carrier:

\[x(t)=a\cos(t) \tag{6}\]

where \(a\) is a locally constant magnitude. Since \(x\) is even and periodic one can restrict attention to the interval \(t\in[0,\pi/2]\).
Fast limiting at unit amplitude may be described by: \[\overline{x}(t)=\ x(t)+y(t) \tag{7}\] where \(y(t)\) accounts for all distortion: \[y(t)=\begin{cases}0&;\quad t\in[\xi,\pi/2]\\ 1-x(t)&;\quad t\in(0,\xi)\end{cases} \tag{8}\] and \(\xi\) defines the interval over which limiting occurs. \[\begin{array}{c}\xi=0\qquad;\quad a\leq 1\\ a\cos(\xi)=1\quad;\quad a>1\end{array} \tag{9}\] The amplitude \(\tilde{a}\) of the fundamental harmonic of \(\overline{x}\) is: \[\tilde{a}=a+\frac{4}{\pi}\int_{0}^{\xi}y(t)\cos(t)\,dt=a\left(1-\frac{2}{\pi} \int_{0}^{\xi}dt-\frac{2}{\pi}\int_{0}^{\xi}\cos(2t)\,dt\right)+\frac{4}{\pi} \int_{0}^{\xi}\cos(t)\,dt \tag{10}\] Evaluating the integrals leads to: \[\tilde{a}=\begin{cases}a&;\quad a\leq 1\\ a(1-(2\xi-\sin(2\xi))/\pi)&;\quad a>1\end{cases} \tag{11}\] \(\tilde{a}\) is a continuous monotonic function of \(a\) which in the hard-clipping limit tends to \(4/\pi\) corresponding to the amplitude of the fundamental harmonic of a unit amplitude square wave. The saturation limit may be normalised to unity while retaining a unit unsaturated gain by pre-multiplication by \(\pi/4\) and post-multiplication by \(4/\pi\). The saturated gain \(\kappa=\tilde{a}/a\) is the describing function [24] of the describing function method, a special case of the harmonic balance method [25]; in which a truncated Fourier series representation is used to approximate periodic solutions of a nonlinear system. The harmonic balance method is a special case of the mathematically rigorous Galerkin method [26]. **(ii) Reduced phase-only model** The magnitude of an established oscillation is substantially maintained constant by the saturation of the sustaining amplifier. Amplitude fluctuations are highly suppressed. It is consequently an excellent approximation in the established oscillation regime to reduce the complex envelope model to a reduced phase-only model given by: \[\theta_{v}=\phi+h\bigotimes\left(D_{\tau_{D}}\left(\tan^{-1}\left(\frac{\rho \sin(\theta_{w}-\theta_{v})}{1+\rho\cos(\theta_{w}-\theta_{v})}\right)+\theta _{v}\right)\right) \tag{12}\] where \(\theta_{v}\) is the oscillator phase; \(\theta_{w}\) is the phase of the injected carrier; and \(\rho\) is the ratio of the magnitudes of the injected carrier and oscillation. This model invokes the Leeson approximation [27, 28] of the action of a BPF on the phase of a complex envelope. The arctangent function is to be understood in the sense: \[\theta=\tan^{-1}(y/x)=\arg(z)\quad;\quad z=x+iy \tag{13}\] It is a multivalued function with an infinity of branches separated by \(2\pi\). Each branch corresponds to one of an infinity of oscillation modes. For \(\rho<1\) the arctangent remains within a single branch and may be evaluated using the two argument atan2 function. Eq. (12) fully supports multimode oscillation subject only to the phase being the sole state variable. If the effect on the dynamics of fluctuations of the magnitude is non-negligible recourse to the complex envelope model is necessary. If sustained multimode oscillation is of interest, then the winner-takes-all competition may be suppressed by substituting the bell-shaped passband RF resonator by a broad flat-topped passband RF resonator or even by removing the RF resonator entirely. 
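Before turning to that simplification, a small Python check of the limiter describing function of Eqs. (9)–(11) may help make the gain-control mechanism concrete. The function below is a sketch of the saturated gain \(\kappa=\tilde{a}/a\) only (the sample amplitudes are arbitrary), not of the full amplifier block used in the Simulink(tm) models.

```python
import numpy as np

def saturated_gain(a):
    """Describing-function gain kappa = a_tilde / a of the unit-amplitude
    limiter, Eqs. (9)-(11); `a` > 0 is the input envelope magnitude."""
    if a <= 1.0:
        return 1.0                          # below saturation: linear, unit gain
    xi = np.arccos(1.0 / a)                 # limiting interval, a*cos(xi) = 1, Eq. (9)
    return 1.0 - (2.0 * xi - np.sin(2.0 * xi)) / np.pi   # Eq. (11) divided by a

for a in (0.5, 1.0, 2.0, 10.0, 1e3):
    k = saturated_gain(a)
    print(f"a = {a:8.1f}   kappa = {k:.4f}   a_tilde = {a * k:.4f}")
```

As expected, \(\kappa=1\) below saturation and \(\tilde{a}=\kappa a\) approaches \(4/\pi\) in the hard-clipping limit.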
The impulse response \(h\) may be then approximated by a Dirac distribution and (12) reduced to: \[\theta_{v}(t)-\theta_{v}(t-\tau_{D})=\phi+\left(\tan^{-1}\left(\frac{\rho\sin( \theta_{w}-\theta_{v})}{1+\rho\cos(\theta_{w}-\theta_{v})}\right)\right)(t- \tau_{D}) \tag{14}\] The free oscillator (static tuning, no injection or Langevin forcing terms) is then described by: \[\theta_{v}(t+\tau_{D})-\theta_{v}(t)=\phi \tag{15}\] which admits solutions of the form: \[\theta_{v}(t)=s(t)+\xi(t)\quad,\quad s(t+\tau_{D})-s(t)=\phi\quad,\quad\xi(t+ \tau_{D})=\xi(t) \tag{16}\] where \(s\) is a staircase function with steps of height \(\phi\) occurring at intervals of \(\tau_{D}\) and \(\xi\) is a periodic function with period \(\tau_{D}\). Note that the staircase function may be expressed: \[s(t)=\omega t+\tilde{\xi}\quad\omega\tau_{D}=\phi \tag{17}\] where \(\omega\) is the frequency shift common to all modes introduced by the tuning phase \(\phi\) and \(\tilde{\xi}\) is a zero mean sawtooth function with the same period as \(\xi\). The periodic functions \(\xi\) and \(\tilde{\xi}\) are responsible for the appearance in the spectrum of a multitude of modes spaced in frequency by \(1/\tau_{D}\). If the initial contents of the delay line are set equal to \(\xi\) over the interval \(t\in[-\tau_{D},0)\), then a simulation of the free time delay oscillator with no RF resonator and \(\phi=0\) will reconstruct the periodic function \(\xi\) for all \(t\geq 0\). For \(\phi\neq 0\) the staircase function adds to the solution by linearity. This generalizes so that simulations of (12) may be initiated in a constant magnitude but otherwise arbitrary multimode oscillation state by an appropriate choice of the initial contents of the delay line. In the sequel it is assumed unless otherwise stated that the oscillation has evolved to a predominately single-mode state prior to the onset of injection. ### _Locked oscillation states_ Consider the injection of a pure carrier with frequency \(\omega_{l}\) which gives rise to the phase ramp. \[\theta_{w}(t)=\omega_{l}t \tag{18}\] A phase locked solution of (14) corresponds to a phase ramp of the same slope but offset by a constant: \[\theta_{v}(t)=-\theta_{\infty}+\omega_{l}t \tag{19}\] Substituting (19) into (14) using \(\omega_{p}\tau_{D}=\phi\) yields: \[\tan\left(\left(\omega_{i}-\omega_{p}\right)\tau_{D}\right)=\frac{\rho\sin( \theta_{\infty})}{1+\rho\cos(\theta_{\infty})} \tag{20}\] Consequently, there is a range of injection frequencies about _every_ natural frequency: \[\left(\omega_{i}-\omega_{p}\right)\tau_{D}\in[-\sin^{-1}(\rho),\sin^{-1}(\rho)] \tag{21}\] for which a locked oscillation state exists. ### _Serrodyne oscillation states_ Suppose the frequency of the injected carrier \(\omega_{l}\) falls within the locking range of a mode so that: \[\left(\omega_{i}-\omega_{p}\right)\tau\in 2m\pi+[-\sin^{-1}(\rho),\sin^{-1}(\rho)] \quad;\quad m\in\mathbb{Z} \tag{22}\] The case \(m=0\) with \(\rho<1\) corresponds to injection within the locking range of the principal mode already considered. \(m\neq 0\) corresponds to injection within the locking range of a mode other than the principal mode. The linear ramp solution corresponding to an injection-locked oscillation of the sidemode within locking range of the injected carrier frequency can be reached only if \(\rho>1\) and the arctangent is unwrapped. 
The latter is a reasonable expedient given that for \(\rho>1\) the trajectory of the vector sum of the oscillation and injection complex envelopes encircles the origin in the complex plane. For \(\rho<1\) the trajectory does not encircle the origin and no phase locked solution exists for \(m\neq 0\). Phase model simulations demonstrate that a solution of (14) given by: \[\theta_{\nu}(t)=-\theta_{\infty}+\varpi_{i}t+\mathcal{S}(t) \tag{23}\] is attractive, where \(\mathcal{S}\) is a _sawtooth function_ which has a peak-to-peak magnitude \(2\pi\), a fundamental period \(\tau\) and, where it exists, a derivative given by: \[\tau\frac{d\mathcal{S}}{dt}=\ 2m\pi\quad;\quad m\in\mathbb{Z} \tag{24}\] It follows that \(m\) cycles of the sawtooth function occur within each delay interval \(\tau_{D}\). Now: \[\theta_{\nu}(t)-\theta_{\nu}(t-\tau)=\varpi_{i}\tau \tag{25}\] and, where it exists, the derivative: \[\tau\frac{d}{dt}(\theta_{w}-\theta_{\nu})=(\omega_{i}-\varpi_{i})\tau-2m\pi=0 \tag{26}\] which identifies \(\varpi_{i}\tau\) as the remainder of \(\omega_{i}\tau\) modulo \(2\pi\). Where the derivative does not exist \(\theta_{w}-\theta_{\nu}\) steps by \(2\pi\)_instantaneously_. It follows that: \[\theta_{w}-\theta_{\nu}=\theta_{\infty}\mod 2\pi \tag{27}\] Consequently: \[(\varpi_{i}-\omega_{0})\tau=\tan^{-1}\left(\frac{\rho\sin(\theta_{\infty})}{1 +\rho\cos(\theta_{\infty})}\right)\quad;\quad\omega_{0}\tau=\phi \tag{28}\] The serrodyne frequency-shifted oscillation is thereby phase locked to the injected carrier. In practice the fly-back of the sawtooth phase modulation cannot be instantaneous. The RF BPF ensures a short but continuous flyback transient occurs. The duration of the transient is inversely proportional to the RF BPF bandwidth. In simulations, a 3.6 MHz or 36 MHz bandwidth filter results in a transient of 2 \(\upmu\)s or 200 ns respectively. The serrodyne waveform consists of a concatenation of regular domains of a substantially pure oscillation phase locked to the injected carrier with the flyback transients forming the domain walls. The spectrum of such a waveform has a dominant component at the frequency of injection. It is observed experimentally that injection can be applied to select any mode among the multitude of natural modes [29]. The question then arises whether the serrodyne state can evolve into a locked pure oscillation state. The phase models that invoke the Leeson model show no evidence that the serrodyne state is other than persistent. However, the validity of the Leeson approximation is suspect for short transients. Envelope model simulations show that for a single-mode initial condition and for detuning within a subset of the locking range, the serrodyne waveform decays into a pure oscillation state. ## III Description of the Simulation Model Time domain simulations are performed using the Simulink(tm) block diagram environment. The phase noise spectrum is assessed by appropriate spectral analysis of the time domain data. It is the complex envelope of a pristine carrier corresponding to some nominal frequency of oscillation that is simulated. Since the nominal carrier is known completely, it conveys no information and is omitted. This is equivalent to translating the frequency origin to the nominal oscillation frequency. The complex envelope representation is motivated by the sample rate required in a digital simulation to avoid aliasing. 
For example, if the spectrum of the complex envelope of interest extends from -1 MHz to 1 MHz a sample rate of at least 2 MHz is required whereas to adequately represent the _same_ signal with an explicit 10 GHz carrier requires a sampling rate of greater than 20 GHz. The reduced phase-only models are essentially equivalent to the envelope models but with the complex envelope magnitude held constant, thereby restricting the simulation to the evolution of the phase of an established oscillation. A phase model provides the option to invoke the Leeson approximation to the action on the phase of an RF BPF [27, 28]. The Leeson approximation has the merit of linearity, simplicity, and general utility, but its validity is suspect when spiking phenomena may occur. The device of a complex envelope representation is used with advantage in optical circuit simulations [23] to avoid having to sample an optical carrier which has a frequency of the order of 200 THz. It is tempting to use an optical circuit simulator to simulate an OEO to take advantage of the sophisticated models for lasers, modulators, optical fibres, photodetectors, and a more limited range of electronic components. However, the number of samples required to adequately represent the RF modulation in transit through a fibre of length of the order of 10 km is prodigious (~1,000,000) and simulations of oscillators with coil lengths of up to ~ 100 m only is possible before exhausting available computing resources. This problem is avoided by treating the RF-photonic link as an RF delay line; indeed, the purpose of the photonics is to transport the RF carrier over a long path taking advantage of the low loss of the optical fibre. Ideally, the photonics could otherwise be ignored. However, the photonics contributes by a variety of mechanisms to RF phase & amplitude fluctuations which are included in the simulations. These observations reaffirm the rationale for using Simulink(tm) rather than an optical circuit simulator for the time domain simulations. Nevertheless, in contrast to the abstract description of the mathematical models, the Simulink(tm) models are built from blocks that correspond closely to the physical components that compose an OEO thereby providing intuition. ### _Complex envelope simulation model_ A top level (level 0) Simulink(tm) test harness for an envelope model of a single-loop OEO under injection by an external source is shown in Figure 2 (a). The simulation time is considered to have units of \(1\,\mu s\). Figure 2(b) & (c) expand the _OEO_ and the _RF source_ to level 1 to reveal their respective internal structure. The constituent blocks are further expanded to reveal their contents in Figure 3 & Figure 4 at level 2 and Figure 3(f) & Figure 4(d) at level 3. The OEO subsystem consists of an _Injection_, _Phase fluctuations_, _Initial condition_, _RF delay line_, _RF resonator_, _RF amplifier_, and a _Phase bias_ subsystem arranged into a loop (Figure 3). These subsystems implement Eq. (5) with the addition of noise and an explicit initial condition. The linear superposition \(u=v+w\) of the injected carrier \(w\) and the oscillation \(v\) is implemented by the _Injection_ subsystem (Figure 4 (a)) with provision to initiate injection with a specified injection ratio (_Gain_ block) at a specified simulation time (_Switch_ and _Step_ blocks). The delay operator \(D_{\tau_{D}}\) is implemented by the _RF delay line_ subsystem which makes use of the supplied _Transport Delay_ block (Figure 3 (a)). 
The latter does not support complex scalar input and output variables, but it does support multidimensional real vectors. Consequently, the _Transport Delay_ block is placed between a subsystem _Convert 1_, which maps the incoming complex envelope to a 2-dimensional real vector, and a subsystem _Convert 2_, which performs the corresponding inverse map. The convolution operator \(h\otimes(\cdot)\) is implemented by the _RF resonator_ subsystem which makes use of the supplied _State Space_ block (Figure 3 (b)). Since the latter does not support complex scalars but does support multidimensional real vectors, it is placed between _Convert 1_ & _Convert 2_ subsystems. A single pole LPF baseband equivalent to a BPF corresponds to the settings: \[\begin{array}{l}\mathbf{A}=-\mathbf{I}/\tau_{R}\\ \mathbf{B}=\mathbf{I}/\tau_{R}\\ \mathbf{C}=\mathbf{I}\\ \mathbf{D}=\mathbf{0}\end{array}\] where \(\mathbf{I}\) is the \(2\times 2\) real identity, \(\mathbf{0}\) is the \(2\times 2\) real zero matrix and \(\tau_{R}\) is the on-resonance group delay of the RF resonator. The nonlinear operator \(K\) is implemented by the _RF amplifier_ subsystem (Figure 3 (a)). The subsystem may be used to simulate an RF power amplifier with a memoryless limiting gain saturation mechanism. The complex envelope at the output is equal to the product of the input complex envelope and a real scalar saturable gain \(\kappa\). The phase of the output is identical to the input, but the magnitude of the output is a monotonic increasing function of the magnitude of the input with an asymptote for small signals equal to the linear gain parameter and a least upper bound for large signals equal to the saturation parameter. The user-defined MATLAB Function code (Figure 3(f)) implements Eq. (11) to provide a saturable gain with unit linear gain and a unit magnitude maximum output. The _Product_ and _Constant_ blocks enable the linear gain parameter and saturation parameter of the RF amplifier to be set independently to arbitrary values. The tuning phase \(\phi\) is implemented by _Phase bias_ subsystem (Figure 4(b)). The _Product_ block provides a complex envelope at the output equal to the product of the input complex envelope and the complex phase factor \(\exp(i\phi)\) supplied by the _Trigonometrical function_ and _Constant_ blocks. The solution of the DDE describing a TDO requires knowledge of the entire initial contents of the loop. In this case, the set of all initial conditions forms an infinite dimensional vector space of functions over the interval \(t\in[-\tau_{G},0)\quad;\quad\tau_{G}=\tau_{D}+\tau_{R}\) characteristic of a multimode oscillator with an infinity of modes. The purpose of the _Initial condition_ subsystem is to load the loop with the requisite function. Figure 2: Simulink® envelope model of an optoelectronic oscillator under external injection by an RF source. (a) Test harness: Scope 1 measures the time evolution of the real and imaginary parts of the oscillation; Scope 2 measures the time evolution of the unwrapped phase difference between the output of the RF injection source and the oscillator. The To File block records the complex data for subsequent spectral analysis. (b) & (c) Level 1 expansion of the Optoelectronic oscillator and RF source subsystems. 
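The subsystems just described map directly onto a few lines of sequential code. The sketch below is a deliberately crude discrete-time analogue of the loop of Eq. (5): a circular delay buffer for the Transport Delay, a forward-Euler single-pole filter for the State Space resonator, and the Eq. (11) limiter for the RF amplifier. The step size, run length and small-signal gain are illustrative choices, not values taken from the Simulink models.

```python
# Sketch: bare-bones discrete-time analogue of the complex-envelope loop (5).
import numpy as np
from collections import deque

tau_D, tau_R, phi = 24.9, 0.1, 0.0        # us, us, rad (Table 1 values)
dt = 0.01                                  # integration step (us), << tau_R
n_delay = int(round(tau_D / dt))
rng = np.random.default_rng(0)

def limiter(v):
    """Memoryless saturable gain of Eq. (11) applied to a complex envelope sample."""
    a = abs(v)
    if a <= 1.0:
        return v
    xi = np.arccos(1.0 / a)
    return v * (1.0 - (2 * xi - np.sin(2 * xi)) / np.pi)

# "white noise" initial condition: delay line preloaded with weak complex noise
line = deque(1e-3 * (rng.standard_normal(n_delay) + 1j * rng.standard_normal(n_delay)))
res_state = 0.0 + 0.0j                     # single-pole resonator state
small_signal_gain = 1.2                    # > 1 so the oscillation builds up from noise

for step in range(int(3000 / dt)):         # simulate 3000 us
    delayed = line.popleft()                               # D_{tau_D} u
    res_state += (dt / tau_R) * (delayed - res_state)      # h convolved with (.)
    v = np.exp(1j * phi) * limiter(small_signal_gain * res_state)  # K and phase bias
    u = v                                                  # add an injected envelope w here
    line.append(u)
    if step % int(500 / dt) == 0:
        print(f"t = {step*dt:5.0f} us   |v| = {abs(v):.4f}")
```

Starting from the noise-loaded line, the envelope grows and then saturates near unit magnitude; adding an injected envelope \(w\) to \(u\) mimics the injection experiments of Section IV. The Simulink models do the same thing with the noise sources, initial-condition machinery and instrumentation described next.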
Within the _Initial condition (Bessel mode)_ subsystem (Figure 3(c)), the _Sine Wave_ block together with the _Trigonometric Function_ block generates the complex envelope of a carrier phase-modulated by a sinusoid defined by the _frequency_, _amplitude_, and _bias_ parameters set within the _Sine Wave_ block. At the start of simulation, the _Switch_ block connects the complex envelope signal generated to the output port until the _Step_ block changes state at the time set by its _Step Time_ parameter when the _Switch_ block connects the input port directly to the output port. The _Step Time parameter_ should be set equal to the delay time. An alternative _Initial Condition (White noise)_ subsystem (Figure 3(d)) loads the loop with a random complex envelope with Gaussian distributed real and imaginary parts. The _Initial Condition_ subsystems offer initial conditions corresponding to established single mode or highly multimode oscillation states. The _Initial Condition (Bessel mode)_ subsystem is particularly useful when a frequency comb initial oscillation is desired and the _Initial Condition (White noise)_ subsystem is Figure 3: Content of RF Delay, RF Resonator, Initial condition & RF Amplifier subsystems in Figure 2(b). particularly useful when either a realistic start-up from low noise is of interest or, at the other extreme, a highly random multimode initial condition is desired. The _Phase fluctuations_ subsystem (Figure 4(c)) perturbs the oscillation phase within the loop thereby generating oscillator phase noise. The _Product_ block and _Trigonometric function_ block form a phase modulator that is driven via the _Gain_ block by a power law stochastic process generated by the _Power law fluctuations_ subsystem. The _Gain_ block permits adjustment of the overall level of the fluctuations while preserving the profile of the fluctuation spectral density. Figure 4(d) illustrates the internal structure of the _Power law fluctuations_ subsystem. There are three arms (0, 1, 2) that respectively generate white (\(f^{\,0}\)), flicker (\(f^{-1}\)), and Brownian fluctuations (\(\mathrm{f}^{-2}\)) by filtering the independent identically distributed zero mean unit power Gaussian noise processes generated by the _Bandlimited White Noise_ blocks. A weighted sum of spectrally shaped noise from each arm forms the output the _Power law fluctuations_ subsystem. The weights provided by the _Gain_ blocks are chosen to reproduce representative experimentally measured OEO phase noise spectral densities. The white noise arm (0) applies no filtering. The Brownian arm (2) applies a simple integrator to generate a Brownian motion stochastic process. The flicker noise arm (1) uses the _Discrete Filter_ block as an infinite impulse response filter according to a method introduced by Kasdin [30]. To obtain a spectral density that follows \(1/f\) power law over a frequency range spanning 3-4 decades, it is found that 30,000 filter coefficients are required. These are calculated by MATLAB code (Figure 5) stored in the model Figure 4: Content of Injection, Phase bias, Phase fluctuations subsystems depicted in Figure 2(b) workspace. The code also generates and stores in the model workspace an array of random seeds for the _Band-Limited White Noise_ blocks ensuring independence. The injected carrier \(w\) is supplied by the _RF source_ subsystem (Figure 2(a)) which is modelled as a simple phase integrator. 
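For reference, the arithmetic behind the three arms of the _Power law fluctuations_ subsystem is compact. The sketch below shows one common formulation of the Kasdin fractional-integration filter for the flicker arm together with the white and Brownian arms; the arm weights and lengths are placeholders rather than the values used to match measured OEO spectra.

```python
# Sketch: white / flicker / Brownian phase-fluctuation arms. The flicker arm uses
# the fractional-integration FIR taps popularised by Kasdin [30]; arm weights
# (w0, w1, w2) and lengths are placeholders, not values from the paper's models.
import numpy as np
from scipy.signal import fftconvolve

def flicker_taps(n_taps, alpha=1.0):
    """Taps of (1 - z^-1)^(-alpha/2); alpha = 1 gives a ~1/f spectral density."""
    h = np.empty(n_taps)
    h[0] = 1.0
    for k in range(1, n_taps):
        h[k] = h[k - 1] * (0.5 * alpha + k - 1) / k
    return h

rng = np.random.default_rng(1)
n = 2**17
w0, w1, w2 = 1.0, 1.0, 1.0                                        # placeholder weights
white = rng.standard_normal(n)                                    # f^0 arm
flicker = fftconvolve(rng.standard_normal(n), flicker_taps(30_000))[:n]  # f^-1 arm
brownian = np.cumsum(rng.standard_normal(n))                      # f^-2 arm (integrator)
phase_fluctuations = w0 * white + w1 * flicker + w2 * brownian

# crude check of the flicker arm's spectral slope (expect roughly -1)
f = np.fft.rfftfreq(n)
psd = np.abs(np.fft.rfft(flicker)) ** 2
slope = np.polyfit(np.log10(f[10:5000]), np.log10(psd[10:5000]), 1)[0]
print(f"fitted flicker slope ~ {slope:.2f}")
```

In the Simulink model the same filtering is performed by a 30,000-coefficient Discrete Filter block, with the coefficients and independent seeds generated by the MATLAB code of Figure 5. Returning to the _RF source_ subsystem introduced above: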
The complex envelope is formed by complex exponentiation by a _Trigonometric Function_ block of the output of an _Integrator_ block driven by a _Bandlimited White Noise Block_ weighted by a _Gain_ block with an additive constant provided by a _Bias_ block (Figure 2(c)). Provision is made using the _Switch_ block to defer the activation of the RF source to the _step time_ parameter of the _Step_ block. The weight provided by the _Gain_ block is chosen to reproduce representative measured phase noise spectral densities of dielectric resonator oscillator (DRO) based stable local oscillators (STALO). For zero oscillator tuning phase (\(\phi=0\)), the _Bias_ block parameter \(b\) corresponds to the detuning \(\omega\) of the injection source from the natural frequency of the oscillator. For \(\phi=0\), \(\rho=0.1\), \(\tau_{G}=25\;\mu s\), setting \(b=0.9\sin^{-1}(0.1)/25\) corresponds to injection detuned by 90% of the high frequency limit of the locking range defined by Eq. (21) of the principal (\(m=0\)) mode and \(b=2\pi\) corresponds to injection with zero detuning from the adjacent sidemode (\(m=1\)). ### _Reduced phase-only simulation model_ A top level (level 0) Simulink(tm) test harness for a reduced phase-only model of a single-loop OEO under injection by an external source is shown in Figure 6(a). The _RF source_ and the _OEO subsystems_ are expanded to reveal their respective internal (level 1) structure in Figure 6 (b) & (c). The constituent blocks are further expanded to reveal their (level 2) contents in Figure 6 (d), (e) & (f). The _Power law fluctuations_ subsystem in Figure 6 (e) is identical to that used in the complex envelope models (see Figure 4(c) & (d)). The _OEO_ subsystem consists of an _Injection_ subsystem & _Bias (Tune)_ block, _Phase fluctuations_ & _Initial condition_ subsystems, and _Transport Delay_ & _Transfer Fnc (RF resonator)_ blocks arranged into a loop (Figure 6(c)). These subsystems and blocks implement Eq. (12) with the addition of noise and an explicit initial condition. The injection induced phase shift of the oscillation (i.e., the arctangent term in Eq. (12)) is provided by the _Injection_ subsystem (Figure 6(d)) with provision to initiate injection with a specified injection ratio at a specified simulation time (_Switch_ and _Step_ blocks). In this example, the step occurs at \(t=1000\;\mu s\) and is of height \(\rho=0.1\). The delay operator \(D_{\tau_{D}}\) is implemented by the _Transport Delay_ block; the convolution operator \(h\otimes(\cdot)\) is implemented by the _Transfer Fcn (RF resonator)_ block; and the tuning phase \(\phi\) is implemented by _Bias (Tune)_ block. The _Injection_ subsystem is the only source of nonlinearity when the Leeson approximation is invoked. For small phase fluctuations it is found that, in a large neighbourhood of \(\theta_{w}-\theta_{v}=0\), the injection subsystem is well approximated by the linear superposition: \[\theta_{u}=\eta\theta_{w}+(1-\eta)\theta_{v}\quad;\;\;\eta=\frac{\rho}{1+\rho}\] Figure 6: _Simulink(tm) phase model of an GEO under external injection by an RF source. (a) Test harness: Scope 1 measures the time evolution of the real and imaginary parts of the oscillation; the To File blocks record the complex envelope associated with the oscillator phase for subsequent spectral analysis. (b)&(c) level 1 expansion of the RF source and Optoelectronic oscillator subsystems. (d), (e) & (f) Level 2 expansions of the injection, Phase fluctuations & Initial condition (Bessel mode) subsystems. 
The Power law fluctuations subsystem in (e) is expanded in Figure 4(d)._ i.e., the _Injection_ subsystem reduces to a simple linear coupler. The linearized _OEO_ subsystem is amenable to tractable analysis. In particular, the phase noise spectrum may be derived analytically. If the validity of the Leeson approximation is suspect, the _Transfer Frac (RF resonator)_ subsystem in Figure 6(c) may be replaced by the custom _RF resonator_ subsystem shown at level 1 in Figure 7 (a) and expanded to level 2 & 3 in Figure 7(a) & (b). A _State-Space_ implementation of a complex envelope model of the RF resonator is placed between a _Trigonometric Function & Vector Concatenate_ block that generate a 2-dimensional real vector representation of a unit magnitude complex envelope and a _Demux & Real-Imag to Complex_ block that reconstruct the complex envelope z as filtered by the resonator. The _arg(z) (unwrapped)_ subsystem provides a continuous phase output provided the magnitude of the complex envelope never passes through zero. The unwrapping is achieved by using a short time delay to provide a previous value of the unwrapped phase which is used to avoid evaluating the arctangent near its branch cut (Figure 7(c)). The custom _RF resonator_ subsystem is useful in distinguishing between phenomena due to failure of the Leeson model in reduced phase models from those due to loss of amplifier saturation in complex envelope models. ## IV Simulation Results Representative results generated by the complex envelope and reduced phase-only Simulink(tm) simulation models are presented. The results are selected to illustrate that the DDE formalism of Figure 7: Alternative Simulink®_phase model not invoking the Leeson approximation. (a) Level 1 contents of the Optoelectronic oscillator subsystem with a custom RF resonator subsystem replacing the Transfer Frac block in Figure 6(c). (b) Level 2 contents of the custom RF resonator subsystem. (c) Level 3 contents of the _arg(z) (unwrapped) subsystem._ injection locking captures OEO multimode oscillation phenomena. Table 1 lists the principal simulation parameters and their values used unless otherwise stated in the text. \[\begin{array}{ll}\tau_{D}=24.9\ \mu s&\text{\it Delay line delay time}\\ \tau_{R}=0.1\ \mu s&\text{\it Resonator on-resonance group delay}\\ \tau_{G}=\tau_{D}+\tau_{R}=25\ \mu s&\text{\it Round trip group delay time}\\ \Delta f=1/\pi\tau_{R}=3.18\ MHz&\text{\it Resonator bandwidth}\\ FSR=1/\tau_{G}=40\ kHz&\text{\it Frequency interval between modes}\\ \rho=0.1&\text{\it Injection ratio}\\ m=0&\text{\it Mode index}\\ \omega\tau_{G}=2m\pi+c\sin^{-1}(\rho)&\text{\it Detuning within locking range of mode $m$}\\ c\in[-1,1]\end{array}\] An OEO with a wide flat-topped passband RF BPF or no RF BPF supports concurrent oscillation of a multitude of modes. Figure 8(a) shows results of a complex envelope simulation of a free running OEO using the model shown in Figure 2 with the _RF resonator_ and _phase fluctuations_ subsystems commented through and the _RF source_ subsystem commented out. An established oscillation consisting of a multitude of modes of comparable magnitudes is observed in the spectrogram that persists indefinitely following start-up from noise provided by the _Initial condition (white noise)_ subsystem (Figure 3(d)). The bell-shaped transfer function of the RF resonator when in place results in weak damping of the sidemodes ultimately leading to a single-mode oscillating state via a winner-takes-all gain competition process (Figure 8). 
In the case of the white noise initial condition, the winning mode is close to but not necessarily at the passband centre due to the random nature of the initial condition. In Figure 8 (b), the first adjacent mode to the mode at zero offset frequency is the winner. In practice, the internal fluctuations of the oscillator will maintain a residual level of sidemodes. Injection locking theory typically assumes an established single-mode initial oscillation. For comparison between theory and simulation, it is consequently convenient either to initiate injection after the free-running oscillator has reached a substantially single-mode oscillating state or to set a single-mode initial condition. In the case of an OEO with large delay it is customary to adapt the theory of Adler [31], Paciorek [32], Stover [33] & Armand [34] to the problem. The classical theory describes a _single mode_ oscillator with a resonator parameterised by its quality factor \(Q\) and natural frequency \(\omega_{0}\). it is asserted (e.g., Fan et al [35]) that a delay line with delay time \(\tau_{D}\) operating at a frequency \(\omega_{0}\) has a quality factor \(Q=\omega_{0}\tau_{D}/2\). The assignment of a \(Q\) to a delay line is fraught [36] in terms of accepted notions [37]. In essence, the delay line and RF-resonator are discarded and replaced by an ultra-high \(Q\) resonator with the same on-resonance delay. Figure 9 shows a comparison of the RF and phase noise spectra of the oscillation generated by an injection locked OEO and, by this customary reasoning its associated classical analogue. The classical analogue seriously underestimates the phase noise beyond midway between the oscillating mode and its adjacent sidemode and completely fails to describe the sidemode structure. In this example, a complex envelope model is used but the phase model provides the same results. When linearized the phase model provides an analytic prediction of the phase noise spectrum that accurately fits the phase noise spectrum of the simulation data. \begin{table} \begin{tabular}{l l} \(\tau_{D}=24.9\ \mu s\) & _Delay line delay time_ \\ \(\tau_{R}=0.1\ \mu s\) & _Resonator on-resonance group delay_ \\ \(\tau_{G}=\tau_{D}+\tau_{R}=25\ \mu s\) & _Round trip group delay time_ \\ \(\Delta f=1/\pi\tau_{R}=3.18\ MHz\) & _Resonator bandwidth_ \\ \(FSR=1/\tau_{G}=40\ kHz\) & _Frequency interval between modes_ \\ \(\rho=0.1\) & _Injection ratio_ \\ \(m=0\) & _Mode index_ \\ \(\omega\tau_{G}=2m\pi+c\sin^{-1}(\rho)\) & _Detuning within locking range of mode \(m\)_ \\ \(c\in[-1,1]\) & \\ \end{tabular} \end{table} Table 1: Principal simulation model parameters Figure 10 shows the results of a simulation of the dynamical behaviour of an OEO under injection locking over a \(5000\,\mu s\) time interval. The OEO is prepared in a single-mode initial oscillation state. At \(t=1000\)\(\mu s\), injection by the RF source is initiated. The injected carrier is detuned by 90% of the locking range (\(\omega\tau_{G}=0.9\sin^{-1}(0.1)/25\)). On this timescale, the response of the OEO oscillation to the onset of injection is substantially immediate (Figure 10(a)). The phase difference between the OEO and source evolves smoothly and monotonically to a steady phase-locked state (Figure 10(b)). The locking behavior is similar to a phase locked loop with proportional control [30]. In this example, an envelope model is used but a phase model provides identical results. 
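The pull-in behaviour of Figure 10 can be reproduced with a few lines operating directly on Eq. (14). The sketch below is a minimal phase-only iteration using the Dirac approximation of the resonator, with the injected carrier applied from \(t=0\) and detuned to 90% of the Eq. (21) locking range; the step size and run length are illustrative choices.

```python
# Sketch: reduced phase-only model of Eq. (14), resonator approximated by a
# Dirac impulse, injection at 90% of the locking range of the principal mode.
import numpy as np

tau_D, phi, rho = 24.9, 0.0, 0.1          # us, rad, injection ratio
dt = 0.05
n_delay = int(round(tau_D / dt))
omega_i = 0.9 * np.arcsin(rho) / tau_D    # rad/us, inside the Eq. (21) range

def g(dpsi):
    """Injection-induced phase shift (the arctangent term of Eq. (12))."""
    return np.arctan2(rho * np.sin(dpsi), 1.0 + rho * np.cos(dpsi))

theta_v = np.zeros(n_delay)               # established single-mode initial state
for step in range(int(4000 / dt)):
    t = step * dt
    delayed = theta_v[step % n_delay]                     # theta_v(t - tau_D)
    theta_w_delayed = omega_i * (t - tau_D)               # injected-carrier phase
    theta_now = delayed + phi + g(theta_w_delayed - delayed)
    theta_v[step % n_delay] = theta_now                   # circular buffer update
    if step % int(500 / dt) == 0:
        print(f"t = {t:5.0f} us   theta_w - theta_v = {omega_i*t - theta_now:+.4f} rad")
```

The printed relative phase settles at the constant offset \(\theta_{\infty}\) satisfying Eq. (20), mirroring the proportional-control behaviour noted above.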
Figure 11 provides simulation results provided by the phase model (Leeson approximation) illustrating the serrodyne oscillation state that occurs for injection within the locking range of a sidemode to the oscillating mode. In the example, the injected carrier is tuned to the first adjacent sidemode (\(\omega\tau_{G}=2\pi\)). The oscillation phase, shown in Figure 11(a), executes a sawtooth waveform with almost vertical \(2\pi\) flyback transitions (cycle-slips) and a period \(\tau_{G}=25\)\(\mu s\). If the injected carrier is tuned to sidemode \(m\) the period of the sawtooth decreases to \(\tau_{G}/m\), i.e., there are \(m\) regularly spaced cycle-slips within the fundamental period \(\tau_{G}\).The relative phase between the OEO and RF source, shown in Figure 11(a), Figure 8: A spectrogram of the oscillation of a free-running OEO with 25 \(\mu s\) delay generated by an envelope (upper panels) or phase model (lower panels) with (right panels) or without (left panels) the RF BPF. The resonant peaks are spaced by 40 kHz. The spectrogram colour map encodes the squared magnitude in dB. The oscillator is free of perturbations. executes a staircase waveform with steps of height \(2\pi\). The almost horizontal steps demonstrate a phase-locked state between cycle-slips. The associated complex envelope, shown in Figure 11(a), features domains delimited by cycle-slip transitions containing a carrier at the same frequency as the RF source that extrapolates perfectly across the domain walls. The detuning from the natural frequency of the sidemode adds a ramp of slope \(\omega_{i}\) to the sawtooth waveform so that the leading-edge slope remains \(\omega_{i}\), and the appearance of the staircase waveform is unchanged. There are \(m\) cycle-slips (domain walls) in each base period of \(\tau_{G}\). Since \(\omega\tau_{G}/2\pi\) is no longer an integer, the carrier translates slowly with respect to the domain walls. Phase model simulations show no evidence that the serrodyne state is other than persistent. Figure 9: Comparison of the RF spectrum (left panels) and phase noise spectrum (right panels) provided by the complex envelope simulation model of an injection locked optoelectronic oscillator (upper panels) and an injection locked classical oscillator with the same on-resonance group delay (lower panels). The injection ratio and the phase fluctuation parameters within the source and oscillator are identical in the two cases. The customary adoption of classical injection locking theory to a time delay oscillator seriously underestimates the phase noise beyond midway between the locked mode and its adjacent sidemode and completely fails to describe the sidemode structure. Figure 12 shows envelope model simulation results featuring spiking behaviour. It is found in that the serrodyne oscillation may decay into a pure carrier with the same frequency as the injection, as shown in Figure 12(a), (b), (c). It persists only for detuning within a small interval within the locking range extending from the nearest edge to the principal mode. For example, when tuned to the first adjacent sidemode, it persists indefinitely only for \(\omega\tau_{G}\in[2\pi-\sin^{-1}(0.1),2\pi-0.62\sin^{-1}(0.1)]\). The transition from the serrodyne state to the pure oscillation state is heralded by the appearance of downward spikes in the magnitude of the RF resonator output, which can cause loss of amplifier saturation and hasten the decay. 
However, substitution of the exact phase model in place of the Leeson model of the RF resonator confirms that the fundamental cause of the decay is the failure of the Leeson approximation and not loss of amplifier saturation, as shown in Figure 12(d), (e), (f). The Leeson model Figure 11: _Serrodyne oscillation state waveforms provided by the reduced phase-only simulation model. The injected carrier is tuned to the first adjacent sidemode (\(\omega\tau_{G}=2\pi/25\)) (a) The oscillation phase executes a sawtooth waveform with an almost vertical \(2\pi\) flyback (cycle-slip). (b) The oscillation phase relative to the source executes a staircase waveform with steps of height \(2\pi\). The almost horizontal steps demonstrate a phase-locked state between the cycle-slips. (c) The associated complex envelope has domains containing a carrier at the frequency of the adjacent side mode and delimited by cycle-slip transitions. The waveform extrapolates perfectly across the domain walls._ Figure 10: _(a) Evolution of the complex envelope of the oscillation of an OEO under injection. (b) The associated evolution of the phase difference between the OEO and the RF source. The OEO is prepared in a single mode initial oscillation state. The injection is initiated at \(t=1000\)\(\mu\)s. The detuning is 90% of the locking range of the principal mode. The OEO dynamical behaviour is similar to a phase-locked loop with proportional control [30]. In this example an envelope simulation model is used but a phase model provides an identical result._ of the RF filter acts on the phase (Figure 11 (a)), smoothing the \(2\pi\) phase transients and limiting their minimum duration. The exact model of the RF filter acts on the complex envelope generated by the phase. For parameter regimes leading to decay, the flyback phase transient becomes steeper as the simulation proceeds; the relative proportion of time occupied by the transient to the locked carrier decreases, while the spectral content of the transient increasingly falls outside the passband of the RF filter. Consequently, the RF filter succeeds in erasing the narrow domain walls (Figure 12 (a), (b), (c), & Figure 12 (e), (f)) and the Figure 12: _Serrodyne oscillation state waveforms provided by the complex envelope simulation model (a)-((g), (h) & (i) and by the reduced phase simulation model with exact RF resonator subsystem (d)-(f). In (a)-(g) the injected carrier is tuned to the first adjacent sidemode (\(\omega_{G}=2\pi/25\)). The serrodyne oscillation state can persists indefinitely for detuning within a small interval within the locking range extending from the nearest edge to the principal mode. Otherwise, it is observed to decay into a pure locked oscillation state as shown in (a)-((c). The decay is heralded by notches in the magnitude of the complex envelope which can cause loss of saturation but the exact phase model results (d)-(f) confirm that the fundamental cause is failure of the Leeson approximation. (g) illustrates that appearance of a plot of the real & imaginary parts of a complex envelope can be deceptive. Plot (a) and (g) illustrate the same oscillation state and are entirely equivalent. The difference is the nominal carrier frequency in (a) is set to the natural frequency of the main oscillation mode but in (g) it is set to the frequency of the injected carrier. (g) illustrates that the locked carrier is phase modulated by a regular train of short \(2\pi\) phase pulses. 
In (h) & (i) the injected carrier is tuned to the edge of the locking range of the principal mode \(\omega_{G}=0.9\sin^{-1}(0.1)\). The multimode initial condition as used to generate Figure 8(c) & (d) is found to generate a \(2\pi\) phase pulse train (h) that persists indefinitely as shown by the spectrogram (i)._ domains coalesce leaving only the locked carrier. This is consistent with the experimental observation that injection can be applied to select any mode among the multitude of natural modes [29]. In simulations, the multimode initial oscillation state used to generate Figure 8(c) & (d) also triggers the generation of a \(2\pi\) phase pulse train by an OEO under injection tuned to the edge of the locking range of the nominal (\(m=0\)) principal mode, as shown in Figure 12 (h) & (i). The spike train (Figure 12 (h)) generated by the complex envelope simulation model persists indefinitely, while the phase model (Leeson approximation) generates a double spike train that decays. The replacement of the Leeson model by the exact model of the RF filter resolves the discrepancy. The appearance of sinusoidal waveforms in between transients in all the plots of the real & imaginary parts of the complex envelope in Figure 11, Figure 12 and especially Figure 12 (h) may be misleading. The complex envelopes are defined relative to a nominal carrier which in the Simulink(tm) models presented is set to zero offset frequency. However, for locked serrodyne modes the carrier frequency is set by the RF source. Hence the pertinent complex envelope is \(\exp(i\theta)\) where \(\theta=\theta_{v}-\theta_{w}\) is the relative phase between oscillator and source which has a staircase waveform with \(2\pi\) steps. The staircase waveform is transformed by the complex exponentiation into pure spike trains as shown in Figure 12(a). This figure was generated for the special case of zero detuning that results in an asymptotic phase shift \(\theta_{\infty}=0\). However, if \(\theta_{\infty}\) is absorbed by a phase shift of the carrier then Figure 11 (c) and Figure 12(a), (b), (c), (f), & (h) would all have the same appearance as Figure 12(g), i.e., a train of pure \(2\pi\) pulses phase modulates a carrier locked to the injection source. The simulation results are consequently redolent of the recent experimental observations of \(2\pi\) phase pulse trains reported by Diakonov et al [14]. A periodic train of narrow pulses necessarily requires the _coherent_ superposition of a multitude of modes. In this sense the spiking phenomena may be considered a form of mode-locking. TDOs such as lasers and OEOs support a multitude of modes and, consequently, spiking phenomena are to be expected. There is some literature on baseband spiking phenomena in broadband OEOs triggered by external pulses [15, 16]. Mode-locked laser theory and practice are well advanced [17] and OEOs support analogous phenomena [18-22]. However, beyond the cycle-slips observed in injection pulling of classical oscillators [31,34,35], the theory and simulation of injected carrier induced spiking phenomena in a TDO appear not to have been explored. ## V Summary & Conclusions A comprehensive Simulink(tm) simulation model of the dynamical behavior of an OEO under injection by an external source has been described. The model builds on the foundations of a previously reported DDE formulation of injection locking of a TDO [4].
The model has two varieties: a complex envelope model, which relates more closely to the physics, and a reduced phase-only model, which relates more closely to analytic solutions and classical theories. The correspondence between the blocks and the oscillator components offers intuition and considerable freedom to explore different circuit architectures and design variations with minimal coding effort. Selected simulation results demonstrate that the simulation models fully support multimode oscillation and correctly predict the phase noise spectral density. Moreover, new theoretical and simulation results have been presented on injection-induced persistent spiking phenomena redolent of recent experimental observations of \(2\pi\) phase pulse trains in a broadband OEO under injection [14].
2309.06633
MCQUIC: Multicast and unicast in a single transport protocol
Multicast enables efficient one-to-many communications. Several applications benefit from its scalability properties, e.g., live-streaming and large-scale software updates. Historically, multicast applications have used specialized transport protocols. The flexibility of the recently standardized QUIC protocol opens the possibility of providing both unicast and multicast services to applications with a single transport protocol. We present MCQUIC, an extended version of the QUIC protocol that supports multicast communications. We show how QUIC features and built-in security can be leveraged for multicast transport. We present the design of MCQUIC and implement it in Cloudflare quiche. We assess its performance through benchmarks and in emulated networks under realistic scenarios. We also demonstrate MCQUIC in a campus network. By coupling QUIC with our multicast extension, applications can rely on multicast for efficiency with the possibility to fall back on unicast in case of incompatible network conditions.
Louis Navarre, Olivier Pereira, Olivier Bonaventure
2023-09-12T22:49:22Z
http://arxiv.org/abs/2309.06633v1
# MCQUIC: Multicast and unicast in a single transport protocol ###### Abstract. Multicast enables efficient one-to-many communications. Several applications benefit from its scalability properties, e.g., live-streaming and large-scale software updates. Historically, multicast applications have used specialized transport protocols. The flexibility of the recently standardized QUIC protocol opens the possibility of providing both unicast and multicast services to applications with a single transport protocol. We present MCQUIC, an extended version of the QUIC protocol that supports multicast communications. We show how QUIC features and built-in security can be leveraged for multicast transport. We present the design of MCQUIC and implement it in Cloudflare _quiche_. We assess its performance through benchmarks and in emulated networks under realistic scenarios. We also demonstrate MCQUIC in a campus network. By coupling QUIC with our multicast extension, applications can rely on multicast for efficiency with the possibility to fall back on unicast in case of incompatible network conditions. QUIC, Multicast
this modularity shifts into a _multicast as a service_ paradigm. Multicast could be used when possible for better use of resources, but the communication could easily fall back on a unicast session seamlessly for the client. This is important in today's Internet where different access networks support different types of services with different levels of multicast deployment. This paper presents Multicast QUIC (MCQUIC), a multicast extension of the QUIC transport protocol offering group key security, source authentication and partial reliability. MCQUIC relies on Multipath QUIC (Marcus et al., 2017) to implement a multicast forwarding path transparently for the clients. We provide an implementation of MCQUIC, evaluate it with benchmarks and deploy it in our campus network. We also use emulated networks to validate MCQUIC in lossy scenarios. We extended Cloudflare's _quiche_(Chou et al., 2017) with full backward compatibility with unicast QUIC as defined in RFC9000 (Krishnan et al., 2017). The implementation of MCQUIC required \(\sim\)7000 source lines of code (SLoC) for the core behavior and \(\sim\)5000 SLoC for tests. We will publicly release our source code upon upcoming publications. This work does not raise any ethical issue. This paper is organized as follows. Section 2 summarizes the main features of QUIC (Krishnan et al., 2017) and Multipath QUIC (Marcus et al., 2017) that we use in MCQUIC. We detail the design of MCQUIC in Section 3 and evaluate it through benchmarks in Section 4. Section 5 shows how we deployed our implementation in a real multicast network used in our campus. It further explores the reliability of MCQUIC with losses in emulated networks. We conclude this paper in Section 6.
Multicast extensions for QUIC have already been discussed within the IETF (Krishnan et al., 2017; Krishnan et al., 2017). Even if this paper partially builds upon these drafts for the design of MCQUIC, it suggests different mechanisms for reliability, source authentication and modularity with the unicast session between the server and each client. Additionally, the design presented in this paper is evaluated in a real implementation. This is further discussed in Section 3.5. ## 2. QUIC Background QUIC (Krishnan et al., 2017) is a connection-oriented protocol built on top of UDP. The majority of a QUIC packet is encrypted to prevent middleboxes from acting on it. TLS 1.3 is embedded in QUIC to provide a fast and secure session handshake. A QUIC connection starts with a handshake where TLS session keys are computed for the communication. In contrast to the 4-tuple used in TCP, a QUIC connection is identified with a source and destination Connection ID (CID), generated at random by the end-hosts. Each QUIC packet is identified with a monotonically increasing packet number (PN). Retransmitted data is sent in new QUIC packets with increased packet numbers. These packets can be considered as _frame_ containers, which carry control and data information. Thanks to this architecture, it becomes easy to extend the protocol by defining new such frames. QUIC supports reliable, ordered data stream multiplexing through the STREAM frame, and unreliable communication with the DATAGRAM frame. Each stream is assigned a Stream ID (SID) carried in the STREAM frame. The SID is encoded on 62 bits, meaning that more than \(4.6\times 10^{18}\) streams can be used in a single QUIC connection. Multipath QUIC (Krishnan et al., 2017) extends the QUIC protocol by allowing to simultaneously use multiple paths within a single QUIC connection. Like Multipath TCP (Krishnan et al., 2017), Multipath QUIC enables to either improve the transfer bandwidth or the connection reliability (Krishnan et al., 2017). Multipath QUIC associates each path with a distinct CID, exchanged between the end-hosts using the NEW_CONNECTION_ID frame. A new path can only be used once it has been probed by a host and acknowledged by its peer with the PATH_CHALLENCE and PATH_RESPONSE frames. The IETF is finalizing the standardization of this extension (Marcus et al., 2017). Several open source implementations1 of QUIC already support multipath (Krishnan et al., 2017) or discuss its integration (Chou et al., 2017). Multipath QUIC support is negotiated during the handshake with the enable_multipath transport parameter. Footnote 1: See [https://github.com/quicwg/multipath/wiki/QUIC-Implementations-with-multipath-support](https://github.com/quicwg/multipath/wiki/QUIC-Implementations-with-multipath-support). ## 3. MCQUIC Design This section describes the design of MCQUIC. We first present how a server advertises a multicast channel to its client in Section 3.1. Section 3.2 shows how we leverage Multipath QUIC to transparently transmit multicast data to the clients, enabling unicast fall-back seamlessly. Section 3.3 introduces the multicast reliability mechanism used in MCQUIC to enable for packet recovery by relaxing the full-reliability constraint and introducing a data expiration timer defined by the application. Finally, we discuss extensions to support source authentication in Section 3.4. Such mechanism can be used by an application to ensure that clients receive data from the expected source. 
Due to space limitations, the MCQUIC frames are detailed in Appendix D (Tables 2 to 8).

### Multicast channel announcement

Any MCQUIC communication starts with a standard QUIC handshake (Krishnan et al., 2017). During this handshake, both endpoints advertise their local support of multicast with two new transport parameters, respectively for server and client support (mc_server_params and mc_client_params). Multicast communication is only possible if both the client and the server support multicast. Multipath QUIC must also be enabled. After the QUIC handshake, the unicast server may announce the multicast channel information (Figure 1) with an MC_ANNOUNCE frame (Table 2). This frame contains the Channel ID (the multicast equivalent of the Connection ID), the multicast source and group IP addresses, and the UDP destination port. Group management actions are performed with the MC_STATE frame (Table 3). To join the multicast channel, a client sends this frame with the JOIN action. The server responds with an MC_KEY frame (Table 6) containing the multicast session key used to decrypt packets sent on the multicast channel. The client is now considered a member of the MCQUIC group. The unicast session between the client and the server remains open during all communication, even if the client is in the multicast group. During this multicast handshake, the client may already receive application data through this unicast session.

### Multicast leveraging Multipath

Multipath QUIC (MPQUIC) is being standardized within the IETF [(34)]. Thanks to this extension, it becomes possible to leverage multiple paths in a single QUIC connection.

**Overview.** Introducing the _multicast as a service_ paradigm, a client can transparently switch from the multicast channel to the unicast session if minimum conditions are not met to stay in the multicast group (e.g., too many packet losses or a change in the support of IP multicast in the network). MCQUIC relies on MPQUIC to create the multicast data forwarding channel as a second path. Figure 3 shows two clients listening to the multicast channel. The first path is the unicast session between the client and the server. The unicast session keys (\(K_{1}\) and \(K_{2}\)) are specific to this path. The second path is encrypted using a multicast session key (\(K_{G}\)) that is derived by the multicast source, and forwarded to the clients on the unicast paths using the MC_KEY frame. From the client's perspective, the multicast channel is hence a second path using a different cryptographic context.

**Multicast path.** For \(n\) clients, the MCQUIC server (_Server_ in Figure 3) manages a set of \(n+1\) QUIC connections. For each new client, the _Server_ starts a new unicast QUIC connection (_UC server_). The additional connection is the multicast source (_MC source_). When a client joins the Multicast QUIC channel, it creates a new path towards its _UC server_. The four-tuple associated to this path uses the multicast group address and port advertised in the MC_ANNOUNCE frame. The multicast path is unidirectional and only the _MC source_ sends data on this path.

Figure 1. Multicast channel announcement.

Figure 2. Multicast reliability mechanism with negative acknowledgments and Forward Erasure Correction.

Figure 3. MCQUIC leverages Multipath QUIC to create the multicast data channel. Both clients have a unicast session with the server (_UC server_) encrypted and authenticated with their unicast session keys (\(K_{1}\) and \(K_{2}\)). Clients listen to the multicast group address announced by the server as a second path. They decrypt and authenticate the data using the multicast group session key \(K_{G}\). The server-side unicast connections exchange control information with the multicast source (_MC Source_).

**Group session key.** A secure multicast communication requires that each member of the group has access to the common group session key, \(K_{G}\) [(21)]. This key is different from the unicast session keys (\(K_{1}\) and \(K_{2}\) in Figure 3) to ensure the integrity of the unicast sessions. \(K_{G}\) is derived by the _MC source_. Once a client joins the group, it receives the secrets used to derive \(K_{G}\) through the MC_KEY frame sent by the unicast server. When the client receives a packet on the multicast path, it uses \(K_{G}\) to decrypt it. Group key management is important in multicast to ensure the backward- and forward-secrecy of communications. Although this remains an open research problem [(55; 21)], we consider it orthogonal to the design of MCQUIC and do not address it in this paper. Indeed, a group key management algorithm could be integrated on top of this design by leveraging MC_KEY frames and the multicast channel.

**Multicast as a service.** Since the multicast channel is considered as a second path by the client, it may seamlessly switch from multicast to unicast and still receive application data through the unicast path without interruption. This is possible thanks to the features of Multipath QUIC. A client leaves the channel by sending an MC_STATE frame with the LEAVE action to its _UC server_. We detail in Section 3.3 how the clients use the unicast channels to return acknowledgments to the source to provide partial reliability.

### Reliability

Ensuring (partial) reliability is essential in multicast communications. As noted by the authors of PGM [(20)], for a group of 1,000,000 clients with an independent loss probability of 0.01 %, the probability that all clients receive a given packet is \(10^{-43}\). Most strategies use a system of negative acknowledgment (NACK) from the clients to the source to notify lost packets [(20)]. Other designs suggest the use of a hierarchical distribution of the clients to provide reliability [(45)]. Another technique to recover from packet losses in multicast is Forward Erasure Correction (FEC) [(7; 20; 49; 54)], as a single repair symbol can recover different lost packets on distinct receivers.

**Overview.** Multicast QUIC relies on a NACK strategy, which consumes less bandwidth and fewer resources on the server than positive acknowledgments. To inform the source of lost packets, clients send NACKs on the unicast session. Because MCQUIC supports Single-Source Multicast (SSM), there is no interaction between receivers to perform NACK aggregation [(15)]. For simplicity, the design does not support a hierarchical distribution of the receivers either [(45)]. MCQUIC provides partial reliability by sending Forward Erasure Correction (FEC) repair frames on the multicast path. The unicast path between a client and its _UC server_ uses the standard QUIC retransmission mechanisms [(25)]. MCQUIC uses an _expiration timer_ (ET) set by the application. This value indicates the time during which the source can send repair frames to recover lost data. Upon expiration, all data sent before this timer is removed from the sending state of the source, and no more repair symbols will be generated to recover it.
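For completeness, the client-side membership behaviour described in Sections 3.1 and 3.2 can be condensed into the following sketch. The types and method names are ours and only illustrate the control flow (join, key installation, fall-back to unicast); they do not correspond to the quiche API.

```rust
/// Illustrative client-side control flow for MCQUIC membership.
#[allow(dead_code)]
struct McClient {
    in_group: bool,
    group_key: Option<Vec<u8>>, // secrets used to derive K_G (from MC_KEY)
}

#[allow(dead_code)]
enum McAction { Join, Leave }

#[allow(dead_code)]
impl McClient {
    /// Reaction to an MC_ANNOUNCE frame: join only if the local network can
    /// deliver IP multicast; otherwise keep receiving data on the unicast path.
    fn on_mc_announce(&self, supports_ip_multicast: bool) -> Option<McAction> {
        if supports_ip_multicast { Some(McAction::Join) } else { None }
    }

    /// Reaction to an MC_KEY frame: install the group key material and start
    /// listening on the multicast channel as a second path.
    fn on_mc_key(&mut self, key_secrets: Vec<u8>) {
        self.group_key = Some(key_secrets);
        self.in_group = true;
    }

    /// "Multicast as a service": if conditions degrade, leave the group; data
    /// keeps flowing on the unicast session without interruption.
    fn maybe_fall_back(&mut self, loss_rate: f64, threshold: f64) -> Option<McAction> {
        if self.in_group && loss_rate > threshold {
            self.in_group = false;
            self.group_key = None;
            Some(McAction::Leave)
        } else {
            None
        }
    }
}
```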
**Negative acknowledgments.** A unicast QUIC end-host sends positive acknowledgments (ACK frames) to its peer with the ranges of packet numbers (PN) correctly received. The peer detects lost packets by inspecting the gaps in these ranges, and retransmits the lost reliable frames (e.g., STREAM frames) in subsequent packets. Instead, MCQUIC clients send MC_NACK frames (Table 5) to their _UC server_, as presented in Figure 2. This frame contains the ranges of missing packet numbers and the highest PN (p_max) the client had received at the time it generated the frame. As shown in the figure, clients detect a gap in the packet number sequence once they receive packets with a higher PN. Clients do not send ACK frames for packets received on the multicast path, but use the standard reliability mechanism [(25)] on the unicast path. As in unicast QUIC, the MCQUIC source can regularly send PING frames when it has no application data to send, to trigger MC_NACK frames from the clients.

**Retransmissions using FEC.** Lost frames sent on the multicast path are not directly retransmitted by the source. Instead, the emitter sends Forward Erasure Correction (FEC) repair packets on the multicast path to all clients. Using these repair symbols together with the correctly received packets, a client can recover the frames sent in the lost packets. Replacing the QUIC retransmission mechanism with this strategy has already shown positive results for unicast communication [(39)]. In multicast, a single repair packet can be sufficient to recover different losses on distinct receivers. We follow the FEC for QUIC loss recovery draft [(38)]. Frames to be protected by FEC are mapped into source symbols using the SOURCE_SYMBOL header. The mapping consists of adding a monotonically increasing source symbol ID (SSID) to each protected frame. Repair symbols are carried inside REPAIR frames. In MCQUIC, every STREAM frame sent on the multicast path is mapped into a source symbol. Listing 1 presents our FEC scheduler to send repair packets. The source maintains two state variables: nb_to_send, indicating the number of repair symbols to generate, and sent_repairs, containing the PNs of already generated repair symbols. When the source receives an MC_NACK from a client, it computes the number of packets (containing reliable frames) the client did not receive (Line 2). Lines 3 and 4 limit the number of repair symbols to send using the feedback from this client. For example, in case the client has a higher delay than others, it may ask for repair packets that were already generated by the source. Line 3 uses p_max to compute the number of repair symbols the source already generated that the client was not aware of when sending the MC_NACK. The number of repair symbols to generate for this client is reduced by this value. The source generates as many repair symbols as indicated by nb_to_send. This value is decremented for each generated and sent repair symbol.
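A rough sketch of the scheduler described above is given below. The names nb_to_send, sent_repairs and p_max match the description, and the comments mirror the line numbers it references; the merge policy (taking a maximum) and the assumption that every NACKed packet carried reliable frames are simplifications of ours.

```rust
/// Rough reconstruction of the FEC scheduler of Listing 1.
#[allow(dead_code)]
struct FecScheduler {
    nb_to_send: u64,        // repair symbols still to generate
    sent_repairs: Vec<u64>, // packet numbers of repair symbols already sent
}

#[allow(dead_code)]
impl FecScheduler {
    fn on_mc_nack(&mut self, missing_ranges: &[(u64, u64)], p_max: u64) {
        // (Line 2) packets this client reports as lost (assumed to carry reliable frames)
        let lost: u64 = missing_ranges.iter().map(|(lo, hi)| hi - lo + 1).sum();
        // (Line 3) repair symbols already generated that the client could not have
        // seen when it sent the MC_NACK: their packet number exceeds p_max
        let already: u64 = self.sent_repairs.iter().filter(|&&pn| pn > p_max).count() as u64;
        // (Line 4) limit the demand triggered by this client accordingly
        let needed = lost.saturating_sub(already);
        self.nb_to_send = self.nb_to_send.max(needed);
    }

    fn on_repair_sent(&mut self, pn: u64) {
        // One repair symbol generated and sent: remember its PN, decrement the demand.
        self.sent_repairs.push(pn);
        self.nb_to_send = self.nb_to_send.saturating_sub(1);
    }
}
```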
**Partial reliability and expiration timer.** Ensuring theoretically full reliability in multicast requires that the forwarding follows the bottleneck client. Instead, MCQUIC relaxes this constraint by offering partial reliability controlled by an _expiration timer_ (ET) set by the application. MCQUIC adds a new timer triggered every ET. Packets that were sent before the previous ET are considered expired by the source and removed from the sending state; streams that were open before the last ET are reset. The FEC state is also reset, i.e., expired source symbols are removed from the window and expired repair symbols are removed from sent_repairs. The source notifies group members of expired packets, streams and FEC symbols by sending an MC_EXPIRE frame (Table 7) at each ET trigger with (i) the highest expired packet number, (ii) the highest expired stream ID (SID) and (iii) metadata to reset the FEC state. Since packet numbers and SSIDs monotonically increase, a receiver can safely remove expired packets and source symbols from its state. However, MCQUIC requires that the SIDs also monotonically increase, as clients reset all streams below the value given in the MC_EXPIRE frame. An improvement would be to replace this value by ranges of expired streams, but it would consume more bytes.

**Congestion and flow control.** Multicast congestion control has been addressed by different researchers [36, 50, 56] with attempts to build general-purpose TCP-friendly [50] congestion controllers or multicast application-oriented algorithms [56]. We leave the design of a multicast-specific congestion controller for Multicast QUIC as future work. Multicast applications are usually self-tailored by their nature. Live events are limited in their bandwidth consumption. Large software updates (i.e., file transfers) are generally performed in the background. Leveraging the _multicast as a service_ paradigm, a client experiencing congestion could fall back on its unicast session for data delivery. Flow control is also disabled for MCQUIC communication. In practice, limits like the maximum number of streams are set to maximum values.

### Source authentication

Applications can decide to authenticate the source of their packets [9, 28]. This mechanism offers additional protection for use cases where malicious hosts could send spoofed packets, at the cost of more processing on the source and receivers. For multicast, a natural method consists of adding an asymmetric signature to each packet sent. This signature can be verified by all clients on the multicast channel as only the intended source has access to the private key. However, asymmetric signatures are computationally costly to generate and to verify on end-hosts.

**Overview.** MCQUIC supports two different source authentication methods whose performance depends on the group size. The main objective is to keep the additional authentication processing on the source as independent of the group size as possible. The first method uses asymmetric signatures concatenated at the end of the packets sent. In the second method, the multicast source sends an authentication packet on a second multicast channel containing symmetric authentication tags of the data packets, computed using the unicast session keys of each client individually. The MC_ANNOUNCE frame sent by the _UC Server_ (Section 3.1) additionally contains (i) the path authentication type (_auth type_ in short) and (ii) a path authentication type-specific payload (_auth type payload_). The Multicast QUIC source can decide to disable source authentication.

**Signature.** By their nature, signatures are well suited for multicast source authentication. Even though signature generation and verification are costly, they scale independently of the multicast group size. When signatures are used, the _auth type payload_ contains the public key used to verify the signature. The current implementation supports Ed25519 [4], which produces signatures of 64 bytes.
Other signature algorithms can easily be added, as long as the _auth type payload_ contains an additional field indicating the used algorithm and the signature length. The signature is computed on the UDP payload containing (potentially multiple coalesced) QUIC packets. Clients verify the signature before decryption and processing of the QUIC packets. The signature is concatenated at the end of the UDP datagram payload. The maximum size for QUIC packets is then reduced by the signature length, which is known by the client. **Authentication tags.** Computing symmetric authentication tags is less demanding than signatures, especially on short messages. This method relies on secret keys derived between a client and its _UC Server_ (\(K_{1}\) and \(K_{2}\) in Figure 3). Figure 4 presents the design of this method. For each packet sent on the multicast path, the _MC source_ computes a symmetric authentication tag for each client, using the unicast session key shared with this client. The authentication payload is the encrypted packet sent on the multicast path. In our first implementation, we do not use a message authentication code (MAC), due to the lack of supported implementation in the cryptographic libraries we use. Instead, we use an authenticated encryption with associated data mechanism (AEAD) that explicitly outputs an authentication tag, namely AES in GCM mode. We ignore the ciphertext part of the output, and keep the tag, leaving the IV implicit as it was computed during the unicast QUIC handshake with the client. In effect, we use this AEAD to re-encrypt, using the unicast session keys, the encryption of each packet that is sent on the multicast path. The resulting authentication tags are added in an MC_AUTH frame (Table 8), along with the source Connection ID of the client whose unicast session key was used to generate the tag (CID\({}_{1}\) and CID\({}_{2}\) in the Figure). Finally, the frame contains the packet number of the packet it authenticates. There may be multiple MC_AUTH frames if there are too many clients in the group to fit in a single QUIC packet. To make the distinction between data and authentication packets, the clients maintain a second multicast path. We make the further distinction between the _multicast data path_ and the _multicast auth path_, respectively for transporting multicast data and the MC_AUTH frames. Both paths use the same multicast address, but with different _Channel IDs_ and UDP destination ports. The packets sent on the _multicast auth path_ are also encrypted with \(K_{G}\). Clients map the received packets on the _multicast data path_ with the corresponding MC_AUTH tag by decrypting the packet header to retrieve its packet number. Cleints may buffer packets received on the _data path_ until they receive the corresponding authentication information on the _auth path_, or decide to directly process its payload without verifying the source. The client decrypts any incoming packet from the _auth path_ with \(K_{G}\). It then looks for its Connection ID in the MC_AUTH frame and retrieves the AEAD tag and the target packet number. If the retrieved packet number corresponds to a buffered (or already processed) packet received on the _multicast data path_, the client encrypts it using its unicast session key. If the obtained tag is similar to the tag from the MC_AUTH frame, the packet comes from the correct source and can safely be processed. 
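Schematically, the per-packet work on both sides of the symmetric method looks as follows. The helper aead_tag stands for "AEAD-encrypt the (already \(K_{G}\)-encrypted) multicast packet under a unicast session key and keep only the 16-byte tag"; it is replaced here by a toy keyed digest so that the sketch is self-contained, and all names are ours rather than part of the implementation.

```rust
/// Stand-in for "AES-GCM-encrypt under `key` (IV implicit from the unicast
/// handshake) and keep only the 16-byte tag". This toy keyed digest is NOT
/// cryptographic; it only keeps the sketch self-contained.
fn aead_tag(key: &[u8], packet: &[u8]) -> [u8; 16] {
    let mut tag = [0u8; 16];
    for (i, b) in key.iter().chain(packet.iter()).enumerate() {
        tag[i % 16] = tag[i % 16].wrapping_add(*b).rotate_left(3);
    }
    tag
}

/// Source side: one tag per client for a packet already encrypted with K_G.
fn build_mc_auth(pn: u64, mc_packet: &[u8], clients: &[(u64, Vec<u8>)]) -> (u64, Vec<(u64, [u8; 16])>) {
    let tags = clients
        .iter()
        .map(|(client_id, unicast_key)| (*client_id, aead_tag(unicast_key, mc_packet)))
        .collect();
    (pn, tags)
}

/// Client side: find our Client ID in the MC_AUTH frame, recompute the tag
/// over the buffered multicast packet and compare it with the received one.
fn verify_source(my_id: u64, my_unicast_key: &[u8], buffered_mc_packet: &[u8],
                 mc_auth_tags: &[(u64, [u8; 16])]) -> bool {
    mc_auth_tags
        .iter()
        .find(|(id, _)| *id == my_id)
        .map(|(_, tag)| aead_tag(my_unicast_key, buffered_mc_packet) == *tag)
        .unwrap_or(false)
}
```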
As other clients of the group do not have access to the unicast session key, it is not possible for an adversarial member to send authentication packets with a valid AEAD tag to the group. Unauthenticated packets may be buffered by the client application until their expiration (triggered by an MC_EXPIRE frame). **Optimizations.** We detail optimizations speeding up the authentication process and decreasing the byte overhead. First, adding the Source Connection ID of each client alongside the generated AEAD tag may be costly. Instead, our implementation generates for each client a _Client ID_, a monotonically increasing unique client identifier. This value is generated by the _MC source_ and sent in the MC_KEY frame when the client joins the multicast channel. The Client ID is encoded in 64 bits, less than the source CID. Second, the method performs too many encryptions. The QUIC encryption procedure first encrypts the packet payload and generates the associated tag with the same key (in this context, the multicast group key, \(K_{G}\)). On the other hand, the authentication tags that are sent inside the MC_AUTH frames are generated after the re-encryption of the data packets already encrypted with \(K_{G}\). This new encryption is costly but of no utility as only the tag will be forwarded to the client in the MC_AUTH frame. An improvement to this method would be to decouple the packet payload encryption from the tag authentication. The encryption would still be performed with \(K_{G}\). The authentication would be repeated for each client with its corresponding unicast session key. As such, for \(n\) clients, only a single encryption and \(n\) tags would be executed. The current method computes \(n+1\) encryption and \(n+1\) tags. **Authenticate groups of packets.** The two aforementioned methods authenticate the source at the packet level. We extended the asymmetric authentication method to enable the source to authenticate chunks of application data, potentially spanning several packets. Inside the QUIC packet carrying the end of an authenticated chunk of data, the source adds an MC_ASYM frame (Table 4). This frame embeds the asymmetric signature computed on the whole authenticated data using the asymmetric private key of the source. A receiver can authenticate each chunk of data it receives by verifying the signature on the received stream of data. This method requires the recipient to wait for the entire chunk of data to be received to authenticate it as a whole. The application is responsible to cut its data in appropriate pieces. For example, a video conference application could authenticate each video frame separately. In MCQUIC, chunks of data are mapped into streams. Each stream is hence signed individually, and the MC_ASYM frame is sent after the last STREAM frame of the stream it authenticates. We further refer to this method as _per-stream_ asymmetric source authentication. Figure 4. Symmetric source authentication design. ### Related work Two multicast QUIC approaches have been discussed within the IETF. HTTP over multicast QUIC [44] is an IETF draft designing HTTP communication over QUIC leveraging an IP multicast network. The multicast behavior (such as channel discovery) is implemented using HTTP and not QUIC. This idea has been implemented in nghq [3]. The draft also suggests using Forward Erasure Correction to recover lost packets and digital signatures to authenticate the source. 
However, this project is not maintained as it only partially supports a 3-years old version of the draft [44] and does not implement QUIC as defined in RFC9000 [26]. Multicast extensions for QUIC [22] is another draft related to this paper. It specifies extensions inside QUIC to support multicast. Our design follows some guidelines from the draft, e.g., the MC_ANNOUNCE, MC_KEY and MC_STATE of MCQUIC have similar behavior than this IETF document. However, our design suggests a more scalable approach when faced to a large audience. First, the draft [22] suggests to only re-transmit lost frames using the unicast session between the server and the client. MCQUIC recovers lost packets with Forward Erasure Correction frames sent on the multicast channel, thus enabling distinct clients to recover from different losses using a single additional multicast packet. Second, Multicast extensions for QUIC [22] recommends a different approach where data packets sent on the multicast channel are authenticated by hashing them and sending them either individually on each client or on another multicast channel whose packets carrying these hashes are also authenticated using the same authentication mechanism. Such approach, at some point, requires packet hashes to be regularly sent on the unicast session of each client of the multicast group, which is not scalable for large multicast groups. It introduces a linear dependency between the number of recipients and the amount of bytes being sent, which is against the multicast philosophy. MCQUIC suggests different approaches to solve the source authentication problem, discussed in Section 3.4. Some of these methods scale independently of the number of clients listening to the multicast channel. Finally, MCQUIC leverages Multipath QUIC [34], which renders the communication with the clients more straightforward as the multicast channel is considered as a second path with a different cryptographic context. It thus offers seamless migration between unicast and multicast in a single protocol. We believe that our approach is more scalable both for the recovery and authentication mechanism. We could not compare the performance of MCQUIC with the draft [22] as we could not find any up-to-date implementation. The literature contains various other approaches that aim at amortizing the cost of computing a full signature per packet - see [6, Sec. 14] for an introduction. We do not rely on these techniques, as they introduce an important additional complexity and are not supported by standard libraries. The TESLA protocol [47] introduces a clever way to replace each signature with a single MAC, independently of the number of clients, but at the cost of introducing a strong constraint on the timing of packets, which may not be desirable in many applications. ## 4. McQUIC Performance Benchmarks Real implementations of protocols are important to drive research as they prove the feasibility of an architecture. Evaluating these implementations allows measuring their limits and behaviors in particular situations. An important contribution of this paper is a real implementation of MCQUIC. We extended the Cloudflare's _quiche_ open source project [11] with the design presented in Section 3. We leveraged pull request #1310 [12] providing support for Multipath QUIC as defined by the IETF draft [34]. Our extension consists in \(\sim\)7000 Source Lines of Code (SLoC) for the design and \(\sim\)5000 SloC for tests. 
We reuse the implementation of the Forward Erasure Correction recovery mechanism from previous work [1]. The FEC code leverages Vandermonde matrices [29] to generate coefficients used to generate the repair symbols, allowing multiple repair symbols to be computed with the same set of source symbols. This section benchmarks the key features of MCQUIC in _quiche_. Jaeger _et al._ already evaluated the performance of several QUIC implementations [27], including _quiche_. Their measurement framework starts a client and a server on two machines connected with a 10 Gbps link. In these experiments, the entire network stack is part of the evaluation. Even if the results present performance differences between implementations and show their speed on the wild [27], they do not isolate the QUIC processing. The true performance of a QUIC implementation also heavily depends on the network stack used to send UDP packets. For example, Tyunyayev _et al._[53] used kernel bypass techniques to improve QUIC throughput in a single connection by a factor of 3 without modifying the QUIC packet processing. MCQUIC implements several algorithms that are CPU intensive, including Forward Erasure Correction and source authentication. Instead of evaluating the entire network stack where QUIC is a single component, we evaluate QUIC separately. This is possible as QUIC is a user-space protocol which can be tested as a library. Applications ask the implementation to send specific data. QUIC then generates packets and sends them to the wire, either using a socket or with kernel bypass techniques [53]. The benchmarks in this Section isolate QUIC packet generation (Server) and processing (Client). Figure 5 summarizes this setup. A benchmark consists in asking _quiche_ to send a stream data (stream_send). We measure the time required by the Server to generate the corresponding packets (calls to send). For the Client benchmark, we store (in their order of generation) the packets generated by the Server, and measure the time required for a _quiche_ Client to process them (calls to recv). Our baseline is a unicast QUIC Server with a single Client without losses. We limit the maximum QUIC packet size to 1350 bytes. The Server generates QUIC packets corresponding to a 100 MB stream at a goodput rate of 6.77 Gbps, and the Client processes them at goodput rate of 5.23 Gbps. As we isolate QUIC processing from the network stack, raw performances are not especially representative. Instead, we show relative goodput, i.e., the amount of application data that can be sent in a unit of time, with respect to the baseline. Benchmarks are executed on a server equipped with two 10-core Intel Xeon CPU E5-2687W v3 @3.10 GHz. We repeat each benchmark 100 times and report the median and standard deviation for each point. ### Forward Erasure Correction recovery We first evaluate the impact of the FEC recovery mechanism. Upon reception of an MC_NACK frame, the Server may generate repair symbols and send them in new packets, slowing down the goodput. We evaluate this cost when increasing the number of MC_NACK received by the Source. **Benchmark setup.** We configure the multicast source to send a stream of 100 MB. Packet losses are simulated by notifying the multicast source with an MC_NACK frame every \(x\) packets (\(1/x\) % loss). The multicast source generates as many repair symbols as required to cope with these NACKs. We disable source authentication and set an expiration timer long enough to avoid expiring packets. 
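The measurement loop behind Figure 5 essentially times calls into the QUIC library, as sketched below. The Endpoint trait is a stand-in for the library connection object (it is not the quiche API); the 1350-byte packet limit and the notion of relative goodput follow the text above.

```rust
use std::time::Instant;

/// Stand-in for the QUIC connection object used in the benchmarks.
trait Endpoint {
    /// Fill `out` with the next QUIC packet to emit; None when nothing is left.
    fn next_packet(&mut self, out: &mut [u8]) -> Option<usize>;
}

/// Server-side benchmark: time only the packet generation done by the library.
/// Packets are stored in generation order so that the Client benchmark can
/// later replay them through the receive path.
fn server_goodput(conn: &mut dyn Endpoint, payload_bytes: usize) -> (f64, Vec<Vec<u8>>) {
    let mut out = [0u8; 1350]; // maximum QUIC packet size used in the benchmarks
    let mut generated = Vec::new();
    let mut in_library = std::time::Duration::ZERO;
    loop {
        let start = Instant::now();
        let produced = conn.next_packet(&mut out);
        in_library += start.elapsed();
        match produced {
            Some(len) => generated.push(out[..len].to_vec()),
            None => break,
        }
    }
    let goodput_bps = payload_bytes as f64 * 8.0 / in_library.as_secs_f64();
    (goodput_bps, generated)
}

/// Relative goodput, as reported in the figures: ratio to the unicast baseline.
fn relative_goodput(goodput_bps: f64, baseline_bps: f64) -> f64 {
    goodput_bps / baseline_bps
}
```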
Figure 6 presents the results. We repeat the experiment with three different FEC window sizes, i.e., the maximum number of source symbols that are protected by a generated repair symbol. In practice, the FEC window size depends on the expiration timer, but we fix it to different values to see its impact on performance. **Server and Client benchmarks.** As expected, increasing the FEC window size decreases the application goodput, as more protected symbols are used to generate a new repair symbol. This trade-off between higher performance (i.e., a smaller window) and higher protection (i.e., a larger window) is function of the expiration timer (ET) set by the application. A longer ET means that source symbols remain longer in the FEC window. Increasing the number of losses decreases performances. Below 1 % losses, the performance reaches a plateau at \(\sim\)60 % of the baseline. This overhead comes from the implementation of the mapping from STREAM frames to source symbols. To avoid modifying the code of _quiche_ too much, we manually copy each protected-frame inside the FEC source symbol window. A zero-copy implementation would be possible at the cost of deeper changes in the core of the _quiche_ packet generation. Moreover, a SOURCE_SYMBOL frame is added for each FEC-protected frame, decreasing by up to 40 B the space available for STREAM data in a QUIC packet. ### Scalability to large groups We compare the maximum application goodput that MC-QUIC reaches compared to unicast QUIC with an increasing number of receivers. We also analyze the impact of adding source authentication to the communication, as discussed in Section 3.4. **Benchmark setup.** We increase the number of receivers, \(n\), from 1 to 40, and measure the goodput to send a stream of 100 MB. In the unicast (\(UC\)) case, the Server must send a copy of each packet to each of the \(n\) clients. As the impact of the Forward Erasure Correction recovery mechanism has already been evaluated in Section 4.1, we disable FEC for the multicast evaluations, i.e., sent STREAM frames are not mapped into source symbols (Appendix A details how the FEC recovery mechanism impacts this benchmark). We do not add losses in this benchmark. To avoid being limited Figure 5. Benchmark setup. Figure 6. Benchmark of the recovery mechanism using FEC with an increasing number of losses and FEC windows of 100, 1000 and 10000 symbols. by the congestion window in the unicast case and be fair compared to MCQUIC, we also disable the congestion control algorithm for unicast benchmarks. **Server benchmark.** Figure (a)a presents the goodput ratio for the Server. We first analyze MCQUIC without authentication (\(MC_{N}\)) with respect to unicast (\(UC\)). We see that for a single receiver, multicast and unicast achieve approximately same goodput. The multicast extension does not add significant overhead to the packet processing. However, by increasing \(n\), we see that the unicast goodput (linearly) decreases with the number of receivers, but remains constant with \(n\) for \(MC_{N}\). As expected, the MCQUIC Server scales independently of the number of receivers. Adding source authentication to the multicast delivery adds security at the cost of lower goodput. MCQUIC with (asymmetric) signatures (\(MC_{A}\)) offers \(\sim\)20 times lower goodput than unauthenticated multicast. However, this method also scales with the multicast group size. For \(n>20\), the cost of generating signatures is even less computationally intensive than unicast QUIC. 
For medium to large multicast communications, \(MC_{A}\) can be used to offer source authentication and be more scalable for the source, without even considering the reduced consumed bandwidth. MCQUIC with symmetric authentication (\(MC_{Y}\)) has an interesting behavior. For \(n=1\), it offers approximately a third of the goodput of unicast. For each QUIC packet sent on the _data path_, the method must generate an additional QUIC authentication packet. This packet contains the authentication tag computed using the symmetric key of the unicast session between the source and the unique receiver. The tag is computed by encrypting the QUIC data packet. The method thus computes three symmetric encryptions. Moreover, additional packet copies are required to perform these additional encryptions, adding overhead in the process. However, \(MC_{Y}\) scales better with \(n\), even outperforming unicast for \(n>5\). Indeed, even if the number of packet encryptions is at least \(n+2\) for this authentication method (instead of \(n\) for unicast), fewer packets must be generated with \(MC_{Y}\), thus reducing the processing cost. The goodput still decreases by increasing \(n\) as more symmetric authentication tags must be computed and added inside authentication packets. Finally, we observe that \(MC_{A}\) offers higher goodput than \(MC_{Y}\) for \(n>30\). The cost of computing and adding symmetric authentication tags becomes higher than using an asymmetric signature on each data packet. An ideal approach for communications requiring source authentication could use the symmetric approach for \(n<30\) and would switch to the asymmetric method for \(n\geq 30\). **Client benchmark.** Similarly, we analyze the impact of MCQUIC and its authentication methods on the achievable reception goodput on clients on Figure (b)b. First, we see that only the reception goodput of \(MC_{Y}\) is impacted by \(n\). The MC_AUTH frames contain a Client ID/authentication tag pair for each multicast client. With higher values of \(n\), a client must first decrypt longer packets as the MC_AUTH frames carry more information. Then, it must look for its corresponding authentication tag in a longer list, increasing the total processing time. Fortunately, this processing overhead is less noticeable than the performance drop from Figure (a)a. For \(n=1\), the symmetric authentication offers \(60\,\%\) of the receiving goodput compared to unicast. Once more, this is mainly due to (i) the decryption of the additional packet in the _multicast auth_ path for each _multicast data_ packet received and (ii) the encryption of the _multicast data_ packet to locally create the symmetric authentication tag to match the tag received in the MC_AUTH frame. The implementation without source authentication (\(MC_{N}\)) does not add significant overhead compared to unicast as both methods offer the same reception goodput. However, \(MC_{A}\) has a strong negative impact on performance as the client must verify an Ed25519 signature for each packet. Compared to Figure (a)a, the bottleneck becomes the multicast client. A first option to improve the performances would be to choose a different asymmetric algorithm that decreases the overhead of signature verification, or use batch signature verification methods. A second option would be to offload signature verification to a dedicated thread. In that case, unauthenticated QUIC packets may be processed during signature verification, and dropped in case the signature is invalid. 
This horizontal scaling could be extended to several threads in a controller/worker fashion. Finally, dedicated hardware could speed up the signature verification. **Stream-level authentication.** We now evaluate the performance cost of the _per-stream_ authentication mechanism. Figure 8 presents the goodput of _packet-level_ signatures (\(MC_{A}\)) and _per-stream_ signatures (\(MC_{S}\)). We vary the application stream size as it impacts the frequency of signature computation for \(MC_{S}\). Again, we configure the multicast source to send \(100\) MB of data. With a stream size of \(c\) bytes, we expect \(\frac{10^{7}}{c}\) streams to be sent. We see the benefits of \(MC_{S}\) over \(MC_{A}\). There is a \(+400\,\%\) to \(+500\,\%\) improvement with the \(MC_{S}\) method for streams of size \(\geq 1000\) bytes. Concerning the receivers, we see an improvement of a factor between \(8\) and \(10\) for \(\geq 1000\) bytes streams. The performance of the unauthenticated method (\(MC_{N}\)) decreases for small stream sizes (\(100\,\mathrm{B}\) and \(1000\,\mathrm{B}\)). In practice, _quiche_ limits the number of different streams in a single QUIC packet to one. The Server hence sends a separate QUIC packet for each stream, increasing the number of packets compared to the baseline as these packets are shorter for the same amount of data to send. This adds overhead as more packets must be encrypted on the Server and decrypted on the Client. For such stream sizes, the results are identical for \(MC_{S}\) and \(MC_{A}\) for the same reason. As each stream fits in a single packet, the \(MC_{S}\) method must create and send an MC_ASYM frame in all multicast packets, needing as many signatures as \(MC_{A}\). We see a plateau for \(MC_{S}\) for stream sizes \(\geq 10^{5}\) B. We noticed that for such large sizes, the cost of hashing the entire stream for authentication becomes the limiting factor, balancing with the decreased number of required signatures. However, the performance of \(MC_{S}\) is much closer to the baseline compared to \(MC_{A}\). For \(MC_{A}\), the plateau is reached earlier, as this method must authenticate each packet individually. ## 5. McQUIC Evaluation This section performs end-to-end evaluations of MCQUIC. We first show that MCQUIC can be deployed in a multicast-capable campus network. We then explore lossy scenarios with a larger set of receivers through emulations. Throughout this Section, we consider two use cases for multicast communication, i.e., video conferencing2 and software updates through file transfers. Footnote 2: Such applications can be considered as one-to-many since they usually rely on a duplication server (DS). This DS receives video frames from each client and forward them to all the others. **Video conferencing**. We collected a trace of a five-minutes video call with Tixeo, a secure video conference application (Tixe et al., 2018). The trace recorded the time at which Tixeo sent each video frame, as well as their sizes in byte. We consider that the first frame is sent at time 0. Frame sizes vary from \(\sim\)1000 B to more than 12 kB. An experiment replays the trace. Each video frame is sent in a different QUIC stream. For simplicity, we consider that only the multicast source is sending data to the receivers, and that no client generates video frames that must be shared to the others. Such application can tolerate few lost video frames but requires low latency. We measure the _frame lateness_ on the clients for each sent video frame. 
It estimates the additional time required for a video frame to arrive at the clients, independently of the initial delay from the network. It is computed as the difference between the time at which the whole video frame was received by the client and the time at which the frame should have been sent by the source (i.e., when Tixeo sent the frame to the transport protocol, as indicated in the trace file). On each client, we further subtract the one-way delay between the client and the server. This value is approximated by the minimum lateness computed on the client. The _frame lateness_ thus does not take into account the one-way delays between the source and the clients.

**File transfer.** Software updates can benefit from multicast. Multicast file transfers are usually performed in the background at a relatively low bitrate to avoid causing congestion. We emulate this use case by sending 1100 B payload packets at a fixed bitrate. Such a use case requires that the same content is delivered reliably to all recipients, but is rather flexible on the reception latency. We measure the ratio of clients that receive all the content without error with MCQUIC. We show that even if MCQUIC is partially reliable, its recovery mechanism can cope with heavy losses without impact on the receivers, even with a relatively low expiration timer.

Figure 7. Source authentication benchmark.

Figure 8. Impact of stream size on \(MC_{A}\) and \(MC_{S}\).

### Experiments in a campus network

We deploy MCQUIC in our network composed of two campuses separated by \(\sim\)23 km. Our IT infrastructure already leverages multicast for some software updates. It uses PIM Sparse Mode (PIM-SM) (Grover et al., 2017) and the RendezVous Point (RP) is located in the main campus. This network only supports IPv4 multicast. Although PIM-SM supports Any-Source Multicast, we limit experiments to a single source. We connect three Intel PCs (Rendez et al., 2018) with 1.6 GHz Intel Pentium processors and 8 GB RAM to this internal network. PCs 1 and 2 are located in the main campus and PC 3 is on the secondary site. The PCs are connected to the network with 100 Mbps links. Table 1 in Appendix B provides information regarding the locations of these PCs, the latencies between them and the RP, and the number of hops separating them. PC 1 is our multicast source and the two others act as listeners. Figure 9 presents the results in this setup.

**Video trace.** We compare the frame lateness of unicast and multicast using the video communication trace. Figure 9a shows the Tixeo frame lateness on the receivers. As expected given the low number of receivers, the impact of multicast on the frame lateness is not significant. The _per-stream_ authentication method (\(MC_{S}\)) adds \(\sim\)0.5 ms of lateness at the receivers due to the CPU cost of verifying the asymmetric signatures. This small experiment demonstrates that our implementation of MCQUIC works in a real multicast network. We did not measure any loss with IP multicast in our campus network. Past research in the MBone (Krishna et al., 2017), the first large multicast network used for research, measured losses below 3 % (Krishna et al., 2017; Krishna et al., 2017), although we can expect lower rates in today's multicast networks. Recent studies show losses on the Starlink medium (Sarlink et al., 2017) on the order of 0.40 %, while QUIC designers (Krishna et al., 2017) reported TCP retransmission rates of 1 %, 2 % and 8 %.
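For reference, the frame-lateness metric defined at the beginning of this section reduces to the following per-client computation (an illustrative sketch; send timestamps come from the Tixeo trace and receive timestamps from the client).

```rust
/// Frame lateness per client: receive time minus the time the frame should
/// have been sent, with the client's minimum lateness subtracted as an
/// estimate of the one-way delay. Times are in seconds, relative to t = 0.
fn frame_lateness(sent_at: &[f64], received_at: &[f64]) -> Vec<f64> {
    let raw: Vec<f64> = received_at
        .iter()
        .zip(sent_at.iter())
        .map(|(rx, tx)| rx - tx)
        .collect();
    // Approximate the one-way delay by the minimum observed lateness.
    let one_way = raw.iter().cloned().fold(f64::INFINITY, f64::min);
    raw.iter().map(|l| l - one_way).collect()
}
```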
We also measured the maximum throughput on PCs 2 and 3 with MCQUIC compared to unicast QUIC, using packets with a 1100 B payload (Figure 9b). We do not use source authentication on multicast (\(MC_{N}\)). For a single client, \(MC_{N,2}\) provides similar throughput to unicast QUIC (\(UC_{2}\)). Increasing the audience to two receivers gives approximately the same performance for MCQUIC, while this value is almost halved for unicast. This small result shows that MCQUIC can saturate a 100 Mbps link and offers scalability advantages compared to its unicast equivalent in a real network. Finally, Figure 9c shows that MCQUIC receivers can seamlessly switch from unicast to multicast while still receiving data. The source sends the Tixeo video trace. PC 3 immediately joins the multicast group. PC 2 starts as a unicast receiver and then joins the multicast group after 70 seconds by sending an MC_STATE frame with the JOIN action to the source. PC 2 then receives application data through the multicast path (\(MC_{N}\)) instead of the unicast one (\(UC\)). 70 seconds later, PC 2 leaves the multicast channel and stops listening to the IP multicast address. The source sends the next data to PC 2 on the unicast path.

### Experiments in emulated networks

We now explore different scenarios with more receivers under losses, using Mininet (Mininet, 2017). The emulated network is a binary tree whose root is connected to the multicast source, \(S\). Each node in the tree is a client of \(S\). The binary tree contains 50 clients, filling each layer of the tree before creating a new one. Multicast routes are statically set from the root to each leaf with smcrouted. Each link has a bandwidth of 100 Mbps and a delay of 3 ms. We run this network on the same server used for the benchmarks in Section 4.

**Video trace under losses.** We add losses in the network using tc. The losses are added on the link between each leaf-node client and its parent in the multicast tree. As a result, only half the receivers are impacted by erased packets, and each lossy client experiences different losses from the others. We explore this scenario to estimate the performance of our Forward Erasure Correction recovery mechanism, where a single repair symbol frame can recover distinct losses on different clients. An opposite scenario is to add the loss on a single link at the top of the tree; clients then experience the same losses, and this can lead to the well-known NACK-implosion problem. This scenario is explored in Appendix C. As MCQUIC does not currently include a congestion controller, we also disable it for the unicast experiments. We set the expiration timer to 350 ms. Above this value, the utility of a video frame becomes low due to the interactivity of the application. For example, Zoom recommends a latency below 150 ms for good quality of experience (Krishna et al., 2017). Figure 10a shows the frame lateness aggregated over all received frames on all 50 clients for MCQUIC without source authentication (\(MC_{N}\)), with _per-stream_ authentication (\(MC_{S}\)) and using unicast (\(UC\)). The frame lateness with multicast does not significantly increase with the added loss, as these losses only impact a small percentage of the packets. We therefore expect the majority of the frame latenesses to remain identical. However, the figure shows that the frame lateness on the clients is higher with unicast compared to multicast.
Even without losses, the unicast source must individually send the same data to all clients, inducing a frame lateness increase on the clients. Figure 10b presents only the 90th percentile of frame latenesses. As expected, the last percentiles are more affected by packet losses since they require one or several RTTs to carry out the retransmission. The figure highlights the scalability of our FEC repair mechanism, as a single FEC repair packet is sufficient to recover distinct losses on different clients. The frame lateness slightly increases with the loss percentage in the multicast scenario, as an increasing number of packets are lost and must wait for a negative acknowledgment from clients to trigger repair packets. Figure 10c presents the data using a cumulative distribution function (CDF) for the 5 % loss experiment. Even in this severe scenario, MCQUIC outperforms unicast, both with _per-stream_ and without source authentication. We note that with MCQUIC \(\sim\)96 % of the frames are not impacted by the losses, whereas the added lateness at this percentile reaches almost 50 ms for unicast communication. Such added frame lateness can have a negative impact on video conferences as the playback buffer allowed for the application decreases.

The main benefit of multicast comes from the reduction of the number of bytes sent in the network. Figure 11 reports the number of bytes sent by the source (left) and the receivers (right). Without losses, unicast QUIC (\(UC\)) sends \(\sim\)50 times more bytes than multicast (\(MC_{N}\) and \(MC_{S}\)) because it must replicate each QUIC packet to each of the 50 receivers. When increasing the losses, the bandwidth consumption of MCQUIC grows at a rate similar to that of unicast QUIC. As each client at the bottom of the tree is affected by different losses, a repair symbol sent by the source is not always sufficient to recover losses from the other clients, thus requiring more repair symbols to be sent. Moreover, the MCQUIC source must acknowledge the MC_NACK frames received from its clients. These MC_NACK frames are only sent when a receiver sees a gap in the packet number sequence. When the network is free of losses, an MCQUIC client only sends bytes at the initialization and termination of the connection. This is confirmed by Figure 11 (right). When we increase the losses, clients send more MC_NACK frames to the server.

Figure 9: Results of MCQUIC deployed in our campus network, using PIM-SM. The integer suffix in Figures 9a and 9c indicates the PC identifier.

Figure 10: Impact of packet losses on the video frame lateness with 50 receivers. Each boxplot contains the frame lateness aggregated over all received frames from all receivers. Figures 10a and 10b do not show outliers for readability. The boxes represent the \(25^{\text{th}}\) and \(75^{\text{th}}\) percentiles among the used data and the whiskers the 1.5 IQR. For Figure 10b, this means that the boxes contain the \(92.5^{\text{th}}\) to \(97.5^{\text{th}}\) percentiles of the whole experiment results. Figure 10c highlights the cumulative distribution function for a loss of \(5\,\%\) for the last percentiles.

Figure 11: Impact of packet losses on the number of UDP payload bytes sent by the source (left) and all the receivers (right), aggregating data sent on the unicast and multicast path.

Figure 12: Ratio of MCQUIC clients receiving all the content of the file transfer when varying the ET.

**File transfer under losses.** All chunks of a software update must be received by all clients in order for the update to be successful.
We start a 5 MB file transfer on the source using MCQUIC with _per-stream_ source authentication. We use the same emulated topology of 50 clients and also add losses on the leaves of the tree. The source chunks the transfered file in 1100 B streams at a rate of 2 Mbps. We vary the expiration timer (ET) from 50 ms to 500 ms. Figure 12 shows that even a small expiration timer of 200 ms is sufficient to ensure correct reception of data packets under 5 % losses. A lost packet is detected by the client when it sees a gap in the packet number sequence. Thus, a lost packet cannot be detected before (at least) the reception of the subsequent packet. A higher bitrate means that this delay is shorter to detect losses. This delay is of 5 ms for a 2 Mbps bitrate. Considering that the delay between the source and the clients at the bottom of the tree is 36 ms, an expiration timer of 50 ms is not always sufficient to allow the client to detect the loss, ask for recovery through an MC_NACK frame and receive the REPAIR frame. Even under more losses than what was measured more than twenty-years ago on the MBone [(8; 57)], a reasonable ET of 200 ms is sufficient at the measured bitrate to ensure the reliable delivery of a file to all clients. ## 6. Conclusion With QUIC, a single transport protocol can support a variety of application requirements as shows by the existing support for reliable streams [(26)] and datagrams [(46)]. This paper goes one step further with MCQUIC, an extension to QUIC that enables this protocol to support both unicast and multicast. MCQUIC provides scalable recovery mechanism with Forward Erasure Correction and source authentication mechanisms independent of the number of members in the multicast group. Furthermore, by leveraging the path management features of Multipath QUIC [(34)], MCQUIC can support multicast and unicast receivers in the same session and allow them to seamlessly switch from both methods. MCQUIC is implemented in the open-source _quiche_[(11)] project, has been demonstrated in our campus network and evaluated using benchmarks and emulated networks. To encourage other researchers and application developers to test MCQUIC in wide area networks, we will release our implementation of MCQUIC upon upcoming publications. Our next steps include discussing the design of MCQUIC within the IETF, adding multicast congestion controls and use it inside real applications to better understand how and when receivers should switch from multicast to unicast.
2309.16420
Analysis of $Ξ(1620)$ resonance with chiral unitary approach
Recently, the $\Xi(1620)$ resonance has attracted much attention, thanks to detailed experimental data by the Belle and ALICE collaborations. This experimental progress has prepared us to conduct theoretical analyses based on experimental data. In this study, we analyze the $\Xi(1620)$ resonance using the chiral unitary model and discuss the properties of the $\Xi(1620)$ resonance and the $K^-\Lambda$ scattering length.
Takuma Nishibuchi, Tetsuo Hyodo
2023-09-28T13:12:05Z
http://arxiv.org/abs/2309.16420v1
# Analysis of \(\Xi(1620)\) resonance with chiral unitary approach ###### Abstract Recently, the \(\Xi(1620)\) resonance has attracted much attention, thanks to detailed experimental data by the Belle and ALICE collaborations. This experimental progress has prepared us to conduct theoretical analyses based on experimental data. In this study, we analyze the \(\Xi(1620)\) resonance using the chiral unitary model and discuss the properties of the \(\Xi(1620)\) resonance and the \(K^{-}\Lambda\) scattering length. ## 1 Introduction The excited \(\Xi\) baryons with strangeness \(S=-2\) are difficult to generate experimentally, and their physical properties have not been well understood for a long time [1]. Recently, new detailed data are being collected, starting from the measurement of the \(\pi^{+}\Xi^{-}\) invariant mass distribution in the \(\Xi_{c}\to\pi\pi\Xi\) decays by the Belle collaboration in 2019 [2], followed by the measurement of the correlation functions in the Pb-Pb heavy ion collisions by the ALICE collaboration in 2021, which determines the \(K^{-}\Lambda\) scattering length [3]. On the other hand, in theoretical studies, the chiral unitary model is commonly used, which generates baryon resonances dynamically from the scattering of mesons and baryons [4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. In 2002, the study in Ref. [4] has predicted the mass and width of \(\Xi(1620)\), and recently, in Ref. [10], the analysis with the higher order terms in the chiral Lagrangian has also been performed. However, the width of \(\Xi(1620)\) obtained in these studies is broader than the result of Belle, and the \(K^{-}\Lambda\) scattering length determined by ALICE is not used to constrain the theoretical models. In this study, we construct the model with a narrow decay width of \(\Xi(1620)\) which is implied by the result of Belle. We also construct a model which reproduces the scattering length determined by ALICE to investigate the nature of \(\Xi(1620)\). Details of this work can be found in Ref. [11]. ## 2 Formulation The coupled-channel meson-baryon scattering amplitude \(T_{ij}(W)\) with the total energy \(W\) is given by the interaction kernel \(V_{ij}(W)\) and the loop function \(G_{i}(W,a)\), which satisfy the following scattering equation \[T_{ij}(W)=V_{ij}(W)+V_{ik}(W)G_{k}(W,a)T_{kj}(W). \tag{1}\] Indicies \(i,j\) denote the meson-baryon channel. We adopt the Weinberg-Tomozawa term for \(V_{ij}(W)\), which is an S-wave interaction satisfying the chiral low-energy theorem, and we use \(G_{i}(W,a_{i})\) with the dimensional regularization to remove the divergence of the loop function. The \(\Xi(1620)\) resonance is coupled to four channels, \(\pi\Xi\), \(\bar{K}\Lambda\), \(\bar{K}\Sigma\) and \(\eta\Xi\) in the isospin basis. Because the Weinberg-Tomozawa term \(V_{ij}(W)\) with no free parameter is determined only by chiral symmetry, the free parameters in this model are the subtraction constants \(a_{i}\), which correspond to the cutoff parameter of the loop momentum. In the calculation of this study, there are four coupled channels, and we construct models by choosing four subtraction constants. ## 3 Numerical result ### Model 1 In the previous study of \(\Xi(1620)\)[4], \(a_{i}\) in all channels are set to be \(-2\) to match the standard cutoff size, resulting in the pole at \(W=1607-140i\) MeV, identified as \(\Xi(1620)\). On the other hand, Belle reported the mass and width of \(\Xi(1620)\) as \(M_{R}=1610\) MeV and \(\Gamma_{R}=60\) MeV [2]. 
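For a narrow resonance, the Breit-Wigner parameters translate into a pole position through the standard relation \[z\approx M_{R}-\frac{i\Gamma_{R}}{2},\] so the Belle values correspond to a pole near \(1610-30i\) MeV, which motivates the assumption adopted in Eq. (2) below.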
In this study, we assume that the pole of \(\Xi(1620)\) is located at \[z=1610-30i\;\mathrm{MeV}\;, \tag{2}\] based on the Belle result. This pole appears below the \(\bar{K}\Lambda\) threshold and is identified as a quasibound state [11]. We then search for a model which reproduces the assumed pole position. Following Ref. [4], we set \(a_{\bar{K}\Sigma}=a_{\eta\Xi}=-2\). We bring the pole closer to the assumed one (2) by adjusting the subtraction constants in the \(\pi\Xi\) and \(\bar{K}\Lambda\) channels. As a result, at \(a_{\pi\Xi}=-4.19\) and \(a_{\bar{K}\Lambda}=-0.14\), the assumed pole position is reproduced with an accuracy of 1 MeV, and a model with the quasibound state suggested by the Belle result is constructed. Hereafter, this model is referred to as Model 1. In the left panel of Fig. 1, we plot the elastic \(\pi^{+}\Xi^{-}\) scattering amplitude of Model 1 together with the Breit-Wigner amplitude which has the pole at the same position. We find that a distinct peak of the imaginary part of Model 1 appears, as in the invariant mass distribution of the Belle result. On the other hand, comparing with the Breit-Wigner amplitude, we find that the amplitude is distorted and the peak position is shifted toward the \(\bar{K}\Lambda\) threshold. Thus, the threshold effect should be taken into account for quasibound states near the threshold.

### Model 2

In the ALICE experiment, the central value of the \(K^{-}\Lambda\) scattering length \(a_{0}\) has been determined as [3] \[a_{0}=-0.27-0.40i\;\mathrm{fm}. \tag{3}\] Because the \(\Xi(1620)\) resonance is located near the \(K^{-}\Lambda\) threshold at 1609.4 MeV, the \(K^{-}\Lambda\) scattering length can strongly constrain the scattering amplitude near \(\Xi(1620)\). As in Model 1, we set \(a_{\bar{K}\Sigma}=a_{\eta\Xi}=-2\), and \(a_{\pi\Xi}\) and \(a_{\bar{K}\Lambda}\) are adjusted to reproduce the \(K^{-}\Lambda\) scattering length obtained in the ALICE experiment. The optimization results in \(a_{\pi\Xi}=-2.90\) and \(a_{\bar{K}\Lambda}=0.36\), which reproduce the \(K^{-}\Lambda\) scattering length (3) to an accuracy of 0.01 fm. We call this Model 2. We show the \(K^{-}\Lambda\) scattering amplitude of Model 2 in the right panel of Fig. 1. The imaginary part of the scattering amplitude has a cusp at the threshold energy of \(K^{-}\Lambda\), without showing a clear peak. This result is qualitatively different from the amplitude of Model 1 (left). We also search for a pole in the complex energy plane, but find no pole on the physically relevant Riemann sheet. To find poles on the other Riemann sheets, we estimate the pole position using the scattering length \(a_{0}\). Based on the effective range expansion, the pole position \(z\) is estimated to be \[z\sim\frac{-1}{2\mu_{K^{-}\Lambda}a_{0}^{2}}+M_{\Lambda}+m_{K^{-}}, \tag{4}\] if \(|a_{0}|\) is sufficiently large. Substituting the scattering length determined by the ALICE experiment into Eq. (4), the pole position is estimated to be \(1701+228i\) MeV on the [ttbttt] sheet, where t and b denote the first and second Riemann sheets, respectively. In fact, we find that Model 2 has a pole on the [ttbttt] sheet. A pole on this Riemann sheet is called a quasivirtual state [11].
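As a quick cross-check of Eq. (4), taking the central ALICE value with \(m_{K^{-}}\simeq 493.7\) MeV and \(M_{\Lambda}\simeq 1115.7\) MeV (so \(\mu_{K^{-}\Lambda}\simeq 342\) MeV), and restoring the factor \(\hbar c=197.3\) MeV fm, gives \[a_{0}^{2}\simeq(-0.087+0.216i)\;\mathrm{fm}^{2},\qquad-\frac{(\hbar c)^{2}}{2\mu_{K^{-}\Lambda}a_{0}^{2}}\simeq(91+227i)\;\mathrm{MeV},\] so that \(z\simeq 1701+227i\) MeV, consistent with the pole position quoted above.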
In this section, we search for models with the subthreshold pole suggested by the Belle result, while also reproducing the scattering length of the ALICE experiment, taking the experimental errors into account. We consider the sum of squares of statistical and systematic errors for both experiments. In Fig. 2, we show the regions of subtraction constants \(a_{\pi\Xi}\) and \(a_{\bar{K}\Lambda}\) for Model 1 and Model 2, taking into account the errors. From this figure, we find that there is no set of subtraction constants that satisfies both constraints, because the two regions have no overlap. However, the pole position in Model 1 is taken from the Breit-Wigner fit to the \(\pi^{+}\Xi^{-}\) invariant mass distribution. Because of this assumption, we cannot necessarily conclude that the Belle result is incompatible with the ALICE result. In analyzing the Belle data, it is more appropriate to compare the model directly with the \(\pi^{+}\Xi^{-}\) invariant mass distribution, without assuming the pole position. Figure 1: Real part (solid line) and imaginary part (dashed line) of the meson-baryon elastic scattering amplitudes as functions of the energy \(W\). Left: comparison of the \(\pi^{+}\Xi^{-}\) scattering amplitude of Model 1 (thick lines) with the Breit-Wigner amplitude (thin lines). Right: the \(K^{-}\Lambda\) scattering amplitude of Model 2. ## 4 Summary In this study, we have performed a theoretical analysis of \(\Xi(1620)\), using the chiral unitary model based on the results from the Belle and ALICE experiments. In section 3.1, by adjusting the subtraction constants, we have constructed Model 1, which reproduces the assumed pole position from the Belle result. We find a threshold effect on the \(\Xi(1620)\) peak below the \(\bar{K}\Lambda\) threshold. In section 3.2, we construct Model 2, which reproduces the \(K^{-}\Lambda\) scattering length determined by the ALICE experiment. We show that there is no pole of \(\Xi(1620)\) on the physically relevant Riemann sheet. Instead, a quasivirtual pole is found on the [ttbttt] sheet. In section 3.3, we show that Model 1 and Model 2 are incompatible, even when accounting for the experimental errors. This result suggests that \(\Xi(1620)\) as a shallow quasibound state below the threshold is incompatible with the \(K^{-}\Lambda\) scattering length from the ALICE experiment.
2306.17805
Topologically Attributed Graphs for Shape Discrimination
In this paper we introduce a novel family of attributed graphs for the purpose of shape discrimination. Our graphs typically arise from variations on the Mapper graph construction, which is an approximation of the Reeb graph for point cloud data. Our attributions enrich these constructions with (persistent) homology in ways that are provably stable, thereby recording extra topological information that is typically lost in these graph constructions. We provide experiments which illustrate the use of these invariants for shape representation and classification. In particular, we obtain competitive shape classification results when using our topologically attributed graphs as inputs to a simple graph neural network classifier.
Justin Curry, Washington Mio, Tom Needham, Osman Berat Okutan, Florian Russold
2023-06-30T17:06:23Z
http://arxiv.org/abs/2306.17805v1
# Topologically Attributed Graphs for Shape Discrimination ###### Abstract In this paper we introduce a novel family of attributed graphs for the purpose of shape discrimination. Our graphs typically arise from variations on the Mapper graph construction, which is an approximation of the Reeb graph for point cloud data. Our attributions enrich these constructions with (persistent) homology in ways that are provably stable, thereby recording extra topological information that is typically lost in these graph constructions. We provide experiments which illustrate the use of these invariants for shape representation and classification. In particular, we obtain competitive shape classification results when using our topologically attributed graphs as inputs to a simple graph neural network classifier. ## 1 Introduction Topological Data Analysis studies finite spaces by associating topological invariants to them that serve as intuitive structural summaries for unsupervised analysis or as nonlinear featurizations for downstream supervised learning applications. The most common such invariant is a persistence diagram, which, roughly, gives a concise representation of homological features that are apparent in the data at multiple scales. Graphical topological summaries form another important collection of tools for representing data; these include merge trees, Mapper graphs, and, their continuous counterparts, Reeb graphs. In this paper, we combine Mapper graphs with persistence diagrams in order to define new, highly discriminative shape representations. The main ideas of these constructions are illustrated in Figures 1 and 2. Roughly, the Mapper graph gives a large-scale structural summary of connected components, while the persistence diagram attributions encode finer-scale topological structure. The structure of the paper is as follows. In Sections 2 and 3, we introduce precise mathematical formalism for attributed graphs and their continuous analogues--decorated Reeb graphs--in the language of category theory. We then introduce novel constructions of topologically attributed graphs and prove their stability in Section 4. Sections 5 and 6 are devoted to computational considerations; in particular, how our topologically attributed graphs are constructed and compared in practice. We also provide a classification experiment, where we show that our constructions achieve competitive shape classification performance when they are fed as inputs into a simple graph neural network. ## 2 Categorically Attributed Graphs We view a simple undirected graph \(G=(V,E)\) as a category1 \(\mathbf{G}\), with objects corresponding to elements of \(V\cup E\) and with a unique morphism \(e\to v\) whenever a node \(v\) is incident to an edge \(e\). This makes \(\mathbf{G}\) equivalent to a poset \((G,\leq)\) where \(e\leq v\). Figure 1: The Mapper graph construction applied to the torus admits a natural decoration with homology (with coefficients in a field \(k\)). **Definition 2.1** (Attributed Graph).: An _attributed graph_ is a functor \(F\colon\mathbf{G}\to\mathbf{C}\) which assigns to each vertex \(v\) and each edge \(e\) of \(G\) objects \(F(v)\) and \(F(e)\) in \(\mathbf{C}\), along with a morphism \(F(e\leq v):F(e)\to F(v)\) in \(\mathbf{C}\), provided \(e\leq v\). _Example 2.2_.: Suppose we are considering a social media platform, such as Facebook. Users correspond to nodes of a graph \(G\) and edges correspond to friendships. 
We can define an attribution valued in the category \(\mathbf{Set}\) as follows: Let \(F(v)\) to be the set of interests or pages that \(v\) follows and let \(F(e)\) be the intersection of these interests or pages. The inclusion \(F(e)\hookrightarrow F(v)\) makes this an attribution. Attributed Graphs for Representing Shapes.In this paper, we are primarily interested in attributed graphs which capture aspects of the geometry and/or topology of a given space (or finite approximation thereof). As such, we will mostly work with attributions that come from homology2, which is an attribution valued in \(\mathbf{Vec}\)--the category of vector spaces and linear maps over a field \(k\). A first example of the type of attributed graph we are interested in is as follows. Footnote 2: See (Hatcher, 2002) for a textbook treatment. _Example 2.3_ (Decorated Mapper Graphs).: Let \(X\) be a compact space and assume \(f\colon X\to\mathbb{R}\) is a continuous map. Let \(\mathcal{U}=(U_{i})_{i\in I}\) be a cover of \(\mathbb{R}\) with no (non-empty) triple intersections. We can pullback this cover along \(f\) to obtain \(f^{-1}(\mathcal{U})\) as a cover of \(X\), where each cover element \(f^{-1}(U_{i})\) is further refined into its connected components. The nerve of this cover defines the _Mapper graph_\(\mathcal{M}_{\mathcal{U},f}\) of \(X\xrightarrow{f}\mathbb{R}\) with respect to \(\mathcal{U}\)(Singh et al., 2007). It has vertices \(V\) corresponding to components \(C\) of \(f^{-1}(U_{i})\) and edges \(E=\{C\cap C^{\prime}\,|\,C,C^{\prime}\in f^{-1}(\mathcal{U})\text{ and }C\cap C^{\prime}\neq\emptyset\}\) corresponding to non-empty intersections of these components. The _decorated Mapper graph (DMG)_\(F\colon\mathcal{M}_{\mathcal{U},f}\to\mathbf{Vec}\) augments the Mapper graph by assigning to each component \(C\in V\) and \(C\cap C^{\prime}\in E\) the homology (with coefficients in a field \(k\)) of the corresponding components, i.e. \(F(C)\coloneqq H_{n}(C)\) and \(F(C\cap C^{\prime})\coloneqq H_{n}(C\cap C^{\prime})\). The inclusion \(C\cap C^{\prime}\subseteq C\) of components induces a map in homology \(F(C\cap C^{\prime}\leq C)\coloneqq H_{n}(C\cap C^{\prime}\subseteq C)\). An example of a DMG is shown in Figure 1. Discrete Shape Representations and TDA.In our next example, we consider an extension of the DMG concept which applies to finite spaces. As a reminder, Topological Data Analysis (TDA) provides a tool for homology inference that replaces homology with _persistent homology_; we assume that the reader is familiar with the basic concepts of TDA, but review one key construction below. **Definition 2.4** (Rips Persistence).: Given a finite metric space \((X,d_{X})\) the _Vietoris-Rips complex at scale \(r\)_ is the simplicial complex \(VR(X,r)\) whose simplices consist of subsets \(\sigma\subseteq X\) where \(d_{X}(x,x^{\prime})\leq 2r\) for all \(x,x^{\prime}\in\sigma\). Notice that if \(r\leq s\), then there is an inclusion \(VR(X,r)\subseteq VR(X,s)\). Passing to the geometric realization of these complexes and the induced continuous maps makes \(VR(X):=VR(X,\bullet)\) into a functor from \((\mathbb{R},\leq)\)--the poset category of the reals--to \(\mathbf{Top}\)--the category of topological spaces and maps, i.e. \(VR(X)\) is an object in the functor category \(\mathbf{Top}^{\mathbb{R}}\). Applying homology \(H_{n}\) then defines the Rips persistent homology \(PH_{n}(X)\), which is an object in \(\mathbf{Vec}^{\mathbb{R}}\). 
This latter object can then be faithfully encoded as a persistence diagram, which records births and deaths of homological features across scales. **Definition 2.5** (Persistent DMGs).: Given a finite metric space \((X,d_{X})\) and function \(f:X\to\mathbb{R}\), we can construct the (discrete version of the) Mapper graph \(\mathcal{M}_{\mathcal{U},f}\) in a manner similar to Example 2.3 by inferring components via a chosen clustering algorithm applied to \(f^{-1}(U_{i})\). The clusters then replace the components \(C\in f^{-1}(U_{i})\) in the construction above. This allows us to define a _persistent Decorated Mapper Graph_\(F\colon\mathcal{M}_{\mathcal{U},f}\to\mathbf{Vec}^{\mathbb{R}}\) that assigns to each vertex and each edge--corresponding to a cluster and an intersection of clusters, respectively--the persistent homology of each, i.e. \(F(C)\coloneqq PH_{n}(C)\) and \(F(C\cap C^{\prime}\leq C)\coloneqq PH_{n}(C\cap C^{\prime}\subseteq C)\). This structure is illustrated in Figure 2. In this figure, only the node attributions are included. Persistent DMGs give intuitive and informative summaries of discrete shapes, and are the main object that we use in applications below (see Sections 5 and 6). The next part of the paper is concerned with establishing basic theory for these objects, focusing on stability. Figure 2: Persistent Decorated Mapper Graph. (a) A synthetic point cloud data set, nodes colored by the value of the filtration function, height along the \(z\)-axis. (b) A Mapper graph of the dataset. (c) The nodes of the Mapper graph are attributed with persistence diagrams; each node corresponds to a connected component of a level set of the dataset, and the (degree-1) persistent homology of this subset gives the attribution. Nodes of the Mapper graph are colored by total persistence (i.e., \(\sum_{i}(d_{i}-b_{i})\), where the sum is over points \((b_{i},d_{i})\) in the diagram). ## 3 Decorated Reeb Graphs In this section, we introduce continuous versions of the Mapper graphs and attributions described above. Reeb Graphs. In practice, choosing the cover and clustering schema for Mapper can be an art, with sometimes hard-to-interpret and unstable behavior. These defects are then inherited by the decoration process. These issues have been mostly handled (Munch and Wang, 2016; Carriere and Oudot, 2018) by viewing Mapper graphs as discrete approximations of the Reeb graph (Reeb, 1946), which we review next. **Definition 3.1**.: A _Reeb graph_ is a pair \((R,f)\) consisting of a compact 1-dimensional geometric simplicial complex \(R\) and a piecewise linear map \(f:R\to\mathbb{R}\). A metric \(d_{f}\) on \(R\) is defined by \(d_{f}(x,x^{\prime})=\inf_{\gamma}\max f\circ\gamma-\min f\circ\gamma\), where the infimum is over all paths from \(x\) to \(x^{\prime}\). _Example 3.2_.: Let \(X\) be a compact geometric simplicial complex and let \(f:X\to\mathbb{R}\) be a continuous piecewise linear map. The Reeb graph associated to \(f:X\to\mathbb{R}\) starts by defining \(R\) to be the set of equivalence classes \(X/\sim\), where \(x\sim x^{\prime}\) if \(x\) and \(x^{\prime}\) lie in the same connected component of a level set \(f^{-1}(v)\). Since \(f\) is constant on equivalence classes, it factors to define a map \(\hat{f}:R\to\mathbb{R}\) where \(f=\hat{f}\circ q\) and \(q\) is the quotient map \(q:X\to X/\sim\). The pair \((R,\hat{f})\) then defines a Reeb graph in the sense of Definition 3.1. We now make geometric graphs the domain of attribution. 
**Definition 3.3**.: Let \(R\) be a compact 1D geometric complex and let \(\mathcal{O}(R)\) be its poset category of open sets. A _continuous attribution_ is a functor \(F:\mathcal{O}(R)\to\mathbf{C}\). _Example 3.4_ (Decorated Reeb Graph (DRG)).: When \((R,f)\) is a Reeb graph and \(\mathbf{C}=\mathbf{Vec}\), we refer to a continuous attribution \(F:\mathcal{O}(R)\to\mathbf{Vec}\) as a _decorated Reeb graph_ or _DRG_. Specifically, let \((R,\hat{f})\) be a Reeb graph arising from the construction of Example 3.2. The _homology decorated Reeb graph_ is the continuously attributed graph that assigns to each open set \(U\subset R\) in the Reeb graph the homology \(F(U)=H_{n}(q^{-1}(U))\). Categorical Reeb Graphs.In order to prove that the Reeb graph is stable, (de Silva et al., 2016) used the following definition of a Reeb graph. **Definition 3.5**.: A _categorical Reeb graph_ is a functor \(\mathcal{R}:\mathcal{O}(\mathbb{R})\to\mathbf{Set}\), where \(\mathcal{O}(\mathbb{R})\) is the category of open subsets of \(\mathbb{R}\) ordered by inclusion, that satisfies _constructibility_--there exists some finite collection of critical values \(\tau=\{t_{0},\dots,t_{n}\}\subset\mathbb{R}\) such that if \(I\subseteq J\) are two intervals with equal intersection with \(\tau\), then the map \(\mathcal{R}(I\subseteq J)\) is an isomorphism--and the _cosheaf axiom_--if \(\mathcal{U}=\{U_{i}\}_{i\in I}\) is a cover of an open set \(U\in\mathcal{O}(\mathbb{R})\) then the universal map from the colimit \(\varinjlim\mathcal{R}(U_{i})\to\mathcal{R}(U)\) is an isomorphism. _Example 3.6_.: Every Reeb graph \((R,f)\) gives rise to a categorical Reeb graph \(\mathcal{R}\) via \(\mathcal{R}(U):=\pi_{0}(f^{-1}(U))\), where \(\pi_{0}:\mathbf{Top}\to\mathbf{Set}\) is the path components functor. We now unify Definitions 3.3 and 3.5 to provide an alternative description of Example 3.4. This involves engineering a category that can track both components and homology vector spaces. **Definition 3.7**.: Let \(\mathbf{PVec}\) denote the category of _discretely parameterized vector spaces_. Objects of \(\mathbf{PVec}\) are functors \(\sigma:S\to\mathbf{Vec}\), where \(S\) is a set regarded as a discrete category and a morphism from \(\sigma:S\to\mathbf{Vec}\) to \(\tau:T\to\mathbf{Vec}\) consists of a set map \(\mu:S\to T\) and a natural transformation \(\sigma\Rightarrow\tau\circ\mu\). This category has a functor \(\mathbf{dom}:\mathbf{PVec}\to\mathbf{Set}\) that sends \(\sigma:S\to\mathbf{Vec}\) to \(S\). **Definition 3.8**.: A _categorical decorated Reeb graph_ is a functor \(\mathcal{F}:\mathcal{O}(\mathbb{R})\to\mathbf{PVec}\) such that \(\mathbf{dom}\circ\mathcal{F}\) satisfies the axioms of Definition 3.5. _Example 3.9_.: Let \(F:\mathcal{O}(R)\to\mathbf{Vec}\) be a DRG. This gives rise to a categorical DRG \(\mathcal{F}:\mathcal{O}(\mathbb{R})\to\mathbf{PVec}\) where \(\mathcal{F}(U)\) is the object of \(\mathbf{PVec}\) that maps \(\pi_{0}(\hat{f}^{-1}(U))\to\mathbf{Vec}\) by taking a connected component of \(\hat{f}^{-1}(U)\ni A\subset R\) to \(F(A)\). ## 4 Persistent Decorations and Stability One of the main contributions of TDA has been the observation that connected components and homology are stable only when considered as part of a family of topological spaces. We now review the concepts used to quantify this. Metrics.Two of the most prominent distance metrics used in TDA are Gromov-Hausdorff distance and interleaving distance, which we now define. 
**Definition 4.1** (Gromov-Hausdorff Distance).: Let \((X,d_{X})\) and \((Y,d_{Y})\) be metric spaces. The _distortion_ of a pair of (not necessarily continuous) maps \(\Phi:X\to Y\) and \(\Psi:Y\to X\) is the quantity \(\text{dist}(\Phi,\Psi)\) defined by \[\text{sup}\{|d_{X}(x,x^{\prime})-d_{Y}(y,y^{\prime})|\mid(x,y),(x^{\prime},y^{ \prime})\in C(\Phi,\Psi)\},\] where \[C(\Phi,\Psi)\coloneqq\{(x,y)\in X\times Y\mid y=\Phi(x)\,\text{or}\,x=\Psi(y )\}.\] The _Gromov-Hausdorff distance_ between \(X\) and \(Y\) is \[d_{\mathrm{GH}}(X,Y)\coloneqq\inf_{\Phi,\Psi}\frac{1}{2}\text{dist}(\Phi,\Psi).\] The stability results we are interested in are based on the interleaving construction of TDA. **Definition 4.2** (Interleaving Distance).: Let \(\mathcal{P}\) be a poset, \(\mathbf{C}\) a category and \(\mathbf{C}^{\mathcal{P}}\) the functor category equipped with a notion of shifting/smoothing for any \(\epsilon\geq 0\), i.e. \((\bullet)^{\epsilon}:\mathbf{C}^{\mathcal{P}}\to\mathbf{C}^{\mathcal{P}}\) is a functor that sends \(F\mapsto F^{\epsilon}\) and this functor is equipped with a natural transformation \(\eta^{\epsilon}:\text{id}_{\mathbf{C}^{p}}\Rightarrow(\bullet)^{\epsilon}\) that interacts in compatible ways3. We say that two objects \(F,G\in\mathbf{C}^{\mathcal{P}}\) are \(\epsilon\)-_interleaved_ if there exist morphisms \(\phi:F\to G^{\epsilon}\) and \(\psi:G\to F^{\epsilon}\) such that \(\eta_{F}^{2\epsilon}=\psi^{\epsilon}\circ\phi\) and \(\eta_{G}^{2\epsilon}=\phi^{\epsilon}\circ\psi\). The _interleaving distance_ between \(F\) and \(G\) is then defined as Footnote 3: This is called a flow structure on the category \(\mathbf{C}^{p}\), whose full details are explored in (Stefanou, 2018; De Silva et al., 2018). \[d_{\text{I}}(F,G)=\inf\{\epsilon\mid\text{$F$ and $G$ are $\epsilon$-interleaved}\}.\] _Example 4.3_ (Rips Persistence).: If we choose \(\mathcal{P}=(\mathbb{R},\leq)\) and \(\mathbf{C}=\mathbf{Vec}\) in Definition 4.2 we obtain the usual interleaving distance for 1-parameter persistence modules. Rips persistent homology for a finite metric space \(X\) (see Definition 2.4), written \(PH_{n}(X)\in\mathbf{Vec}^{\mathbb{R}}\), has a natural notion of shifting by defining \(PH_{n}(X)^{\epsilon}(r)=H_{n}(VR(X)(r+\epsilon))\). The fact that \(VR(X)\) is a functor provides a map from \(VR(X,r)\to VR(X,r+\epsilon)\), which gives the data of the natural transformation \(\eta^{\epsilon}\). Thus the interleaving distance between Rips persistent homology functors is well-defined. Interleavings between Rips persistent homology leads to a foundational stability result of TDA (Chazal et al., 2009; Chazal et al., 2009): for finite metric spaces \(X\) and \(Y\) \[d_{\text{I}}(PH_{n}(X),PH_{n}(Y))\leq d_{\text{GH}}(X,Y).\] The type of stability we're interested in is not only governed by the Gromov-Hausdorff distance between point clouds, but scalar functions on these. This is expressed in the following definition, which is equivalent to a metric used in (Chazal et al., 2009); see also (Bauer et al., 2014), where a similar metric is used in the context of Reeb graphs. **Definition 4.4**.: Let \(X\) and \(Y\) be metric spaces equipped with functions \(f:X\to\mathbb{R}\) and \(g:Y\to\mathbb{R}\), written \(X_{f}\) and \(Y_{g}\), respectively. 
If \(\Phi:X\to Y\) and \(\Psi:Y\to X\) are maps, then the functional distortion of \(\Phi\) and \(\Psi\) is \[\text{FunDist}\big{(}\Phi,\Psi\big{)}\coloneqq\text{ max }\begin{cases}\frac{1}{2}\text{dist}(\Phi,\Psi)\\ ||f-g\circ\Phi||_{\infty}\\ ||g-f\circ\Psi||_{\infty}.\end{cases}\] The functional distortion distance is then \[d_{\text{FD}}(X_{f},Y_{g})\coloneqq\inf_{\Phi,\Psi}\text{FunDist}\big{(}\Phi,\Psi\big{)}.\] It is straightforward to show that \(d_{\text{FD}}\) is a pseudometric on the space of pairs \((X,f)\). Stability of Persistent Discrete DRGs.We now define a persistent discrete Decorated Reeb Graph construction, which refines the notion of a persistent DMG (Definition 2.5), and will be stable under perturbations of the functional distortion distance. **Definition 4.5** (Persistent Discrete DRG).: Given a finite metric space endowed with a scalar-valued function \(X_{f}\), we define \(f_{r}^{-1}(U)\) to be the full subcomplex of \(VR(X,r)\) on all vertices \(x\in f^{-1}(U)\), for each open subset \(U\subset\mathbb{R}\). We then define the _persistent (discretized) decorated Reeb graph_ of \(X_{f}\) to be the following 2-parameter family of categorical DRGs (Definition 3.8): \[DF:(\mathbb{R}^{2},\leq)\rightarrow\text{Fun}(\mathcal{O}(\mathbb{R}),\mathbf{ PVec})\qquad(r,s)\mapsto\mathcal{F}(r,s)\] where \[\mathcal{F}(r,s)(U):=\{A\in\pi_{0}(f_{r}^{-1}(U))\to H_{\bullet}(VR(A,s))\}.\] _Remark 4.6_.: For a fixed \(r\geq 0\), \(\mathcal{F}(r,s)\) assigns to each connected component of \(f_{r}^{-1}(U)\) the Vietoris-Rips persistent homology of that point cloud at scale \(s\). Then \(DF(r,\bullet)\) can be considered as a DRG \(DF(r,\bullet):\mathcal{O}(\mathbb{R})\rightarrow\mathbf{PVec}^{\mathbb{R}}\), by setting \(DF(r,\bullet)(U)(s)=DF(r,s)(U)\). If we also fix a cover \(\mathcal{U}\), we recover the persistent DMG of Definition 2.5 by choosing clusters associated to \(U\in\mathcal{U}\) to be given by the connected components of \(f_{r}^{-1}(U)\). In the above sense, the persistent discrete DRG refines the notion of a persistent DMG. This relaxation to a more continuous and categorical setting is crucial to our proof of the stability result below. **Theorem 4.7** (Stability of Persistent Discrete DRGs).: _Let \(X_{f}\) and \(Y_{g}\) be finite metric spaces endowed with scalar-valued functions. Let \(DF\) and \(DG\) be their respective persistent discrete DRGs (Definition 4.5). Then we have_ \[d_{\text{I}}(DF,DG)\leq d_{\text{FD}}(X_{f},Y_{g}).\] The \(\epsilon\)-smoothing of \(DF\) is defined by \[DF^{\epsilon}(r,s)(U)=DF(r+\epsilon,s+\epsilon)(U^{\epsilon})\] where \(U^{\epsilon}:=\{t\in\mathbb{R}\mid\exists v\in U\text{ s.t. }|t-v|<\epsilon\}\) is the \(\epsilon\)-thickening of the open set \(U\in\mathcal{O}(\mathbb{R})\). This leads to the notion of the interleaving distance \(d_{\text{I}}\) used in the theorem. Proof Sketch.: Suppose the functional distortion distance of Definition 4.4 between \(X_{f}\) and \(Y_{g}\) is less than \(\delta\). This means that for every \(\epsilon>\delta\) there are maps \(\Phi:X\to Y\) and \(\Psi:Y\to X\) whose distortion is less than \(2\epsilon\). Also, \(||f-g\circ\Phi||_{\infty}\leq\epsilon\), which implies that \(\forall U\in\mathcal{O}(\mathbb{R})\) we have \[f^{-1}(U)\subseteq\Phi^{-1}(g^{-1}(U^{\epsilon})).\] This implies that if \(\sigma\subseteq f^{-1}(U)\) is a subset with \(d(x_{i},x_{j})\leq 2r\) for all pairs of points in \(\sigma\), i.e. 
\(\sigma\in VR(X,r)\cap f_{r}^{-1}(U)\), then \(\Phi_{r,s}(\sigma)\in g_{r+\epsilon}^{-1}(U^{\epsilon})\cap VR(Y,s+\epsilon)\). Moreover, this containment holds when restricted to a component \(A\in\pi_{0}(f_{r}^{-1}(U))\). Symmetric reasoning using the condition \(||g-f\circ\Psi||_{\infty}\leq\epsilon\) guarantees that \(\forall U\in\mathcal{O}(\mathbb{R})\) \[g^{-1}(U)\subseteq\Psi^{-1}(f^{-1}(U^{\epsilon}))\] and in particular \(\Phi_{r,s}(\sigma)\in g_{r+\epsilon}^{-1}(U^{\epsilon})\cap VR(Y,s+\epsilon)\) is carried to a simplex \(\Psi_{r+\epsilon,s+\epsilon}\circ\Phi(\sigma)\in f_{r+2\epsilon}^{-1}(U^{2 \epsilon})\cap VR(X,s+2\epsilon)\) that is contiguous to \(\sigma\) inside \(f_{r+2\epsilon}^{-1}(U^{2\epsilon})\cap VR(X,s+2\epsilon)\), thus guaranteeing that the induced map on homology \(VR(A,s)\to VR(A,s+\epsilon)\) for each component \(A\in\pi_{0}(f_{r}^{-1}(U))\) is the same as \(\Psi_{r+\epsilon,s+\epsilon}\circ\Phi_{r,s}\). This establishes half of the interleaving condition and the other half is argued _mutatis mutandi_. Stability of Barcode Transforms.We end this theoretical section with another stability result, which deals more directly with DRGs. While the constructions involved are somewhat more straightforward, we discuss their limitations in practice at the end of the section. **Definition 4.8** (Barcode Transform).: Let \(F\colon\mathcal{O}(R)\to\mathbf{Vec}\) be the decorated Reeb graph associated to \(f:X\to\mathbb{R}\), where each open set \(U\subseteq R\) is assigned a finite-dimensional vector space. We define the _barcode transform_ of \(F\) to be the map \[BF\colon R \to\mathbf{Vec}^{\mathbb{R}}\] \[r\in R \mapsto\Big{(}t\in\mathbb{R}_{\geq 0}\mapsto F\big{(}B_{d_{f}}(r,t) \big{)}\Big{)}\] Since every persistence module can be identified with a barcode, we can view the barcode transform as an assignment of a barcode to each point in the Reeb graph. Using \(\epsilon\)-smoothings of open sets, i.e. setting \(\mathcal{P}=\mathcal{O}(\mathbb{R})\) and \(F^{\epsilon}(U)\coloneqq F(U^{\epsilon})\) in Definition 4.2, we can define interleaving distances for categorical Reeb graphs and categorical decorated Reeb graphs. Moreover, the interleaving distance of categorical Reeb graphs gives rise to an interleaving distance of concrete Reeb graphs as defined in (de Silva et al., 2016). In the following we define the functional distortion distance for barcode transforms and show that it is controlled by the interleaving distance of the Reeb graphs and their corresponding categorical decorated Reeb graphs. **Definition 4.9**.: Let \(F\colon\mathcal{O}(R)\to\mathbf{Vec}\) and \(G\colon\mathcal{O}(S)\to\mathbf{Vec}\) be concrete decorated Reeb graphs over \((R,f)\) and \((S,g)\). We define the functional distortion distance of the corresponding barcode transforms by \[d_{\mathrm{FD}}\big{(}BF,BG\big{)}\coloneqq\inf_{\Phi,\Psi}\max \begin{cases}\mathrm{FunDist}(\Phi,\Psi)\\ \sup_{\begin{subarray}{c}r\in R\end{subarray}}d_{\mathrm{I}}\big{(}BF(r),BG \circ\Phi(r)\big{)}\\ \sup_{s\in S}d_{\mathrm{I}}\big{(}BF\circ\Psi(s),BG(s)\big{)}\end{cases}\] where FunDist is taken w.r.t. \(f\) and \(g\) and the infimum is over all functions \(\Phi\colon R\to S\) and \(\Psi\colon S\to R\). 
**Theorem 4.10**.: _Let \(F\colon\mathcal{O}(R)\to\mathbf{Vec}\) and \(G\colon\mathcal{O}(S)\to\mathbf{Vec}\) be concrete decorated Reeb graphs over \((R,f)\) and \((S,g)\) and \(\mathcal{F},\mathcal{G}\colon\mathcal{O}(\mathbb{R})\to\mathbf{PVec}\) the corresponding categorical decorated Reeb graphs, then_ \[d_{\mathrm{I}}\big{(}R,S\big{)}\leq d_{\mathrm{FD}}\big{(}BF,BG\big{)}\leq 6 d_{\mathrm{I}}\big{(}\mathcal{F},\mathcal{G}\big{)}\,.\] Proof Sketch.: We begin with the inequality on the left. As shown in (Bauer et al., 2015), \(d_{\mathrm{I}}\big{(}R,S\big{)}\leq d_{\mathrm{FD}}\big{(}R_{f},S_{g}\big{)}\) and, since the functional distortion distance on Reeb graphs corresponds to the first part of the functional distortion distance of barcode transforms (Definition 4.9), we immediately get \(d_{\mathrm{FD}}\big{(}R_{f},S_{g}\big{)}\leq d_{\mathrm{FD}}\big{(}BF,BG\big{)}\). To demonstrate the inequality on the right, let \(\mathbf{dom}\colon\mathbf{PVec}\to\mathbf{Set}\) be the functor that sends functors in \(\mathbf{PVec}\) to their domains. We observe that \(\mathbf{dom}\circ\mathcal{F}=\mathcal{R}\), the categorical Reeb graph of \((R,f)\). Hence, given an \(\epsilon\)-interleaving between \(\mathcal{F}\) and \(\mathcal{G}\), applying \(\mathbf{dom}\) yields an \(\epsilon\)-interleaving between \(\mathcal{R}\) and \(\mathcal{S}\) and, furthermore, an \(\epsilon\)-interleaving between \((R,f)\) and \((S,g)\). As shown in (Bauer et al., 2015), \(d_{\mathrm{FD}}(R_{f},S_{g})\leq 3d_{\mathrm{I}}(R,S)\). One can now check that the functions \(\Phi\colon R\to S\) and \(\Psi\colon S\to R\) constructed in the proof of this inequality satisfy \(d_{\mathrm{I}}\big{(}BF(r),BG\circ\Phi(r)\big{)}\leq 6\epsilon\) and \(d_{\mathrm{I}}\big{(}BF\circ\Psi(s),BG(s)\big{)}\leq 6\epsilon\) for all \(r\in R\) and for all \(s\in S\). The details of the last part of the proof are quite technical, and we provide full details in the Appendix. _Remark 4.11_.: We remark that this result is interesting from a theoretical perspective, but has some shortcomings in practice. In particular, the functional distortion distance used here is infinite if the ranks of \(BF(r)\) and \(BG(r)\) do not agree for all sufficiently large \(r\). ## 5 Computation We now describe constructions of attributed graphs from point cloud data. In the following, we generically refer to such attributed graphs as Decorated Reeb Graphs (DRGs). Creating Reeb Graphs. Reeb graphs are most naturally defined for continuous metric spaces, so one needs to approximate a Reeb graph structure for discrete data. We provide a construction similar to the Mapper algorithm (Singh et al., 2007) for estimating Reeb graphs. We first fix a scale \(r\) for the Vietoris-Rips complex \(VR(X,r)\). Choosing an appropriate value of \(r\) is treated as a hyperparameter tuning process; similar ideas for Reeb graph estimation go back at least to (Ge et al., 2011). For the shape datasets considered in this paper, we used a simple heuristic which took \(r=m\cdot r_{0}\), where \(r_{0}\) is the smallest scale at which the VR complex is connected and \(m\) is a small integer (we typically took \(m=2\) or \(3\)). Next, we choose a resolution parameter \(n\) and uniformly subdivide the image of \(f\) into \(n\) bins, \(U_{1},\dots,U_{n}\) (we treat this as a partition of the range, but one could instead thicken slightly and work with an open cover, similar to the usual Mapper construction). 
This is used to approximate the Reeb graph \(G\) of \((VR(X,r),f)\): each node \(v\) of \(G\) corresponds to a connected component \(A_{v}\subset X\) of one of the sets \(f^{-1}(U_{i})\), and there is an edge between nodes \(v\) and \(w\) if \(A_{v}\subset f^{-1}(U_{i})\), \(A_{w}\subset f^{-1}(U_{i+1})\) and \(A_{v}\) and \(A_{w}\) are connected by an edge in the 1-skeleton of \(VR(X,r)\). An alternative approach would be to compute the exact Reeb graph for \(VR(X,r)\) via algorithms of (Harvey et al., 2010) or (Parsa, 2012); these algorithms are very efficient, but we found that they did not scale well in the Vietoris-Rips setting due to blowup in the size of the simplicial complexes. Creating DRGs - Local Approach. Let \(G=(V,E)\) be an estimated Reeb graph for \(VR(X,r)\). A simple approach to adding persistent homology decorations to \(G\) is as follows. In this _local approach_, degree-\(n\) persistent homology is computed for the subset of \(X\) corresponding to each node in \(G\). The result is a data structure \((G,D)\) consisting of a finite graph \(G=(V,E)\) and an attribution function \(D:V\rightarrow\mathbb{R}\times\textbf{Barcodes}\), where **Barcodes** is the set of (persistent homology) barcodes. The attribution takes a node \(v\in V\) to \(D(v)=(\bar{f}(v),B(v))\), where \(\bar{f}(v)=\frac{1}{|A_{v}|}\sum_{x\in A_{v}}f(x)\) and \(B(v)\) is the persistent homology barcode of \(A_{v}\). This method for constructing a DRG can be seen as an approximation of a particular slice of the persistent discrete DRG structure \(DF:(\mathbb{R}^{2},\leq)\rightarrow\textbf{Fun}(\mathcal{O}(\mathbb{R}), \textbf{PVec})\) introduced in Definition 4.5, as we observed in Remark 4.6. Creating DRGs - Barcode Transform Approach. The following is an alternative approach to adding persistent homology decorations to \(G=(V,E)\), an estimated Reeb graph for \(VR(X,r)\). For each \(v\in V\), we define a filtration on \(VR(X,r)\) by distance to the set \(A_{v}\) and compute the degree-\(n\) persistent homology of the resulting filtered simplicial complex. This once again results in a data structure of the form \((G,D)\) with the attribution function \(D:V\rightarrow\mathbb{R}\times\textbf{Barcodes}\) now recording average function value and persistent homology of the distance-to-\(A_{v}\) function. This method for constructing the DRG is a simplification of the true barcode transform for the decorated Reeb graph of \(VR(X,r)\) (see Definition 4.8). _Remark 5.1_.: The local DRG algorithm easily scales to handle datasets with thousands of points and provides intuitive data summaries consisting of an approximation of the Reeb graph skeleton, attributed with persistence diagrams encoding local structural information (see Figures 2 and 3). However, we note that this representation is an aggressive simplification of the structure described in Definition 4.5, so that the theoretical stability result of Theorem 4.7 does not directly apply in this setting. On the other hand, the barcode transform approach to constructing DRGs gives an approximation of the true barcode transform of \(VR(X,r)\), and is therefore much more closely tied to theory. This construction requires several computations of persistent homology on the full complex \(VR(X,r)\) (endowed with different filtrations), so that it is not scalable to large datasets. As such, most of our computational experiments below will focus on the local DRG construction. 
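For concreteness, a minimal sketch of the local DRG construction just described is given below. It assumes `numpy`/`scipy` and the `ripser` package for persistent homology (an assumed dependency; any persistence backend would work), and the scale `r` and resolution `n_bins` are illustrative parameters rather than the values used in our experiments.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from ripser import ripser  # assumed persistence backend


def local_drg(X, f, r, n_bins):
    """Sketch of the local DRG: approximate Reeb graph of (VR(X, r), f),
    with each node decorated by the H1 barcode of its point subset.
    X : (N, d) point cloud, f : (N,) filter values, r : VR scale."""
    D = squareform(pdist(X))
    adj = D <= 2.0 * r                               # 1-skeleton of VR(X, r)
    cuts = np.linspace(f.min(), f.max(), n_bins + 1)
    bin_id = np.clip(np.digitize(f, cuts) - 1, 0, n_bins - 1)

    nodes, membership = [], -np.ones(len(X), dtype=int)
    for i in range(n_bins):
        idx = np.where(bin_id == i)[0]
        if idx.size == 0:
            continue
        n_comp, labels = connected_components(csr_matrix(adj[np.ix_(idx, idx)]),
                                              directed=False)
        for c in range(n_comp):
            comp = idx[labels == c]                  # one node of the Reeb graph
            membership[comp] = len(nodes)
            dgm = (ripser(X[comp], maxdim=1)['dgms'][1]
                   if comp.size > 1 else np.empty((0, 2)))
            nodes.append({'f_mean': f[comp].mean(), 'points': comp, 'dgm': dgm})

    edges = set()                                    # VR edges across adjacent bins
    for a, b in zip(*np.nonzero(np.triu(adj, 1))):
        if abs(bin_id[a] - bin_id[b]) == 1:
            edges.add(tuple(sorted((membership[a], membership[b]))))
    return nodes, edges
```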
Comparing DRGs.Since the interleaving distance considered in Theorem 4.7 is not applicable to our data representation and is computationally intractable, we use the _Fused Gromov-Wasserstein (FGW) framework_(Vayer et al., 2020) for metric-based analysis of DRGs. Intuitively, the FGW distance, defined below, is a more easily approximable proxy for the functional distortion distance \(d_{\mathrm{FD}}\). Let \((G_{1},D_{1})\), \((G_{2},D_{2})\) be DRGs constructed as described above (via either the local or barcode transform approaches). The \(\alpha\)_-FGW distance_ is defined by \[d_{\mathrm{FGW},\alpha}((G_{1},D_{1}),(G_{2},D_{2}))^{2} \tag{1}\] \[=\inf_{\pi\in\mathcal{C}(V_{1},V_{2})}\alpha\cdot L_{\mathrm{gr} }(\pi)+(1-\alpha)\cdot L_{\mathrm{bc}}(\pi)\] where the set \(\mathcal{C}(G_{1},G_{2})\) and the loss functions \(L_{\mathrm{gr}}\) and \(L_{\mathrm{bc}}\) are defined as follows. An element \(\pi\in\mathcal{C}(G_{1},G_{2})\) is a matrix of size \(|V_{1}|\times|V_{2}|\) satisfying \(\sum_{v_{1}\in V_{1}}\pi(v_{1},v_{2})=\frac{1}{|V_{2}|}\) for all \(v_{2}\in V_{2}\) and \(\sum_{v_{2}\in V_{2}}\pi(v_{1},v_{2})=\frac{1}{|V_{1}|}\) for all \(v_{1}\in V_{1}\)--intuitively, this is the space of probabilistic couplings of the uniform measures on \(V_{1}\) and \(V_{2}\), respectively. The _graph loss_\(L_{\mathrm{gr}}(\pi)\) is defined by \[\sum\left(d_{1}(v_{1},w_{1})-d_{2}(v_{2},w_{2})\right)^{2}\pi(v_{1},v_{2})\pi( w_{1},w_{2}), \tag{2}\] where the sum is over all \(v_{i},w_{i}\in V_{i}\) and \(d_{1}:V_{i}\times V_{i}\rightarrow\mathbb{R}\) is a choice of function representing graph structure--for example, we typically use the shortest path distance, where each edge \((v_{i},w_{i})\) in \(E_{i}\) is weighted by \(|\overline{f}_{i}(v_{i})-\overline{f}_{i}(w_{i})|\). Finally, the _barcode loss_\(L_{\mathrm{bc}}(\pi)\) is defined by \[\sum_{v_{i}\in V_{i}}d_{b}(D_{1}(v_{1}),D_{2}(v_{2}))^{2}\pi(v_{1},v_{2}), \tag{3}\] where \(d_{b}\) is the standard bottleneck distance between barcodes. The intuition for the distance is as follows: \(\pi\in\mathcal{C}(V_{1},V_{2})\) is interpreted as a probabilistic registration of the nodes of \(G_{1}\) and \(G_{2}\), the loss \(L_{\mathrm{gr}}\) measures how well the registration preserves the graph structure, the loss \(L_{\mathrm{bc}}\) measures how well the registration preserves attributions, and the hyperparameter \(\alpha\) balances contributions of graph structure and attributions; the optimization problem therefore searches for a probabilistic registration which incurs the least total distortion of these structures. Fixing a methodology for assigning a distance graph function \(d:V\times V\rightarrow\mathbb{R}\) to a DRG \((G,D)\), it follows from Theorem 1 of (Vayer et al., 2020) that \(d_{\mathrm{FGW},\alpha}\) defines a pseudometric on the space of DRGs, for any choice of \(\alpha\in[0,1]\). The idea for FGW distance originates from the Gromov-Wasserstein (GW) distances introduced by Memoli in (Memoli, 2007); roughly, the GW 2-distance is obtained by setting \(\alpha=1\) in the FGW formula. The GW distances can be seen as \(L^{p}\)-relaxations of the Gromov-Hausdorff (GH) distance and are used for the comparison of metric measure spaces (mm-spaces). The FGW distance was introduced in (Vayer et al., 2020) to adapt GW distance to the setting of mm-spaces whose points come with feature attributions. In particular, our use of \(d_{\mathrm{FGW},\alpha}\) can be seen as a relaxation of the functional distortion distance of Definition 4.9. 
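In practice the optimization in (1) can be handed to off-the-shelf tooling. A minimal sketch is given below, assuming the POT (Python Optimal Transport) library; the solver name and the fact that its `alpha` parameter also weights the structure term are assumptions about the installed version, and the node attributes are persistence-image vectors standing in for \(d_{b}\), a substitution discussed below.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (assumed dependency)


def fgw_between_drgs(C1, F1, C2, F2, alpha=0.5):
    """Approximate the optimal value of (1) for two DRGs.

    C1, C2 : shortest-path distance matrices on the nodes of the two Reeb graphs,
    F1, F2 : persistence-image vectors attached to the nodes (in place of d_b).
    Returns the optimal objective value of (1), i.e. the squared FGW distance."""
    M = ot.dist(F1, F2)                                  # squared Euclidean attribute costs
    p, q = ot.unif(C1.shape[0]), ot.unif(C2.shape[0])    # uniform node measures
    return ot.gromov.fused_gromov_wasserstein2(
        M, C1, C2, p, q, loss_fun='square_loss', alpha=alpha)
```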
Such a relaxation makes the discrete optimization problem arising in GH distances amenable to approximation by gradient descent, as the search space of the optimization becomes a compact, convex polytope \(\mathcal{C}(V_{1},V_{2})\subset\mathbb{R}^{|V_{1}|\times|V_{2}|}\). For finite mm-spaces of size \(O(n)\), a gradient descent iteration for approximation of GW distance has \(O(n^{3}\log(n))\) cost (Peyre et al., 2016). For the FGW distance in our setting, computation is complicated by the need to evaluate \(|V_{1}|\times|V_{2}|\) bottleneck distances, each of which carries a cubic cost in the number of points in the diagrams. To ease this computational burden, we frequently replace the bottleneck distance computations with Euclidean distance between persistence image vectorizations of the diagrams (Adams et al., 2017). We remark that (Fused) Gromov-Wasserstein distances have been used successfully in several recent works to compare other topological invariants--see, e.g., (Li et al., 2021; Curry et al., 2022; Li et al., 2023). Related Constructions.Our constructions have a similar flavor to other enriched topological invariants in the literature. Most similar are the Decorated Merge Trees (DMTs) introduced by the first four authors in (Curry et al., 2022). A DMT is a certain rooted tree attributed with homological information which can be extracted from a dataset endowed with a filter function--roughly, a DMT captures the topology of sublevel sets of the filtration, while the constructions in the present paper are concerned with level set topology. Our constructions also share features of Persistent Homology Transforms (PHTs) (Turner et al., 2014), which associate a collection of persistence diagrams to a shape in Euclidean space by using projections to various 1-dimensional subspaces as filter functions. Our constructions also yield families of persistence diagrams, but the families are parameterized by nodes of Mapper graphs, rather than by collections of lines. ## 6 Experiments We now illustrate our computational pipeline with several experiments. Our source code is available at our GitHub repository4. Footnote 4: [https://github.com/trneedham/Topologically-Attributed-Graphs](https://github.com/trneedham/Topologically-Attributed-Graphs) Example DRGs.In Figure 3, we provide a few examples of DRGs constructed via the methods described in Section 5. The first row of Figure 3 shows a shape from ModelNet10 (Wu et al., 2015), a curated collection of CAD models of household objects. The CAD model has been converted to point cloud data by sampling. We show the DRG computed via the local approach, with respect to height along the \(z\)-axis; nodes of the DRG are colored by total persistence of their diagram attributes. We show the diagrams associated to two of the nodes, as well as the associated subsets of the original dataset. The second row of the figure shows a humanoid figure from the SHREC14 dataset (Pickup et al., 2014); once again, the pointcloud data is obtained by sampling a triangulated surface. For this example, the filtration function is the \(p\)-eccentricity (with \(p=100\)) (see (Memoli, 2011), Definition 5.3) of the shortest path distance on the 1-skeleton of the underlying Vietoris-Rips complex. We show the associated DRG (via the local approach), some persistence diagram attributes, and the persistence image vectorizations of these diagrams. Finally, the third row of Figure 3 shows the synthetic dataset from Figure 2, once again endowed with the height function. 
In this case, the DRG is computed via the Barcode Transform approach. Observe that the associated persistence diagrams capture the global topology of the shape--note that the death time of each point is \(\infty\) and that one point in each diagram is of multiplicity 2. The difference between the diagrams is the birth times of the features; the DRG nodes are colored by average birth time in their diagrams. Synthetic Shape Comparison. To explore the behavior of the Fused Gromov-Wasserstein (FGW) distance \(d_{\mathrm{FGW},\alpha}\) defined in (1), we consider a synthetic point cloud dataset consisting of four classes: torus, solid torus, cylinder and solid cylinder. Each torus shape consists of 400 points sampled from a toroidal surface with minor radius 1 and major radius 6; solid tori are generated similarly, but we take 1600 samples to get a comparable density. Cylinders consist of 400 points sampled from the surface of a cylinder of radius 1 and length \(2\cdot\pi\cdot(6+1)\) so that the surface area is comparable to the torus; solid cylinders are generated similarly with 1600 sample points. Each shape in the dataset has its point coordinates perturbed independently at random and the resulting point cloud is then randomly rigidly rotated. The full dataset consists of 20 samples of each shape class. Each shape is converted to a DRG using the local approach, with filter functions given by 1st PCA coordinates. Node attributes are converted to persistence images, for computational efficiency. For each \(\alpha\in\{0.0,0.25,0.5,1.0\}\), we construct the shape-to-shape distance matrix with respect to \(d_{\mathrm{FGW},\alpha}\), where the \(d_{i}\)'s in (2) are shortest path distances in the Reeb graph and \(d_{b}\) in (3) is replaced by Euclidean distance between persistence images. Multidimensional Scaling plots for the distance matrices are shown in Figure 4. When \(\alpha=0\), the distance only sees the persistence image attributes, and the torus and cylinder classes are indistinguishable (the topology of each connected component of their level sets is very similar). For \(\alpha\in\{0.25,0.5\}\), both the global structure of the Reeb graph and the local persistence image structures are considered, and all classes can be distinguished. Finally, when \(\alpha=1\), only the Reeb graph structures are considered in the metric; here, the torus and solid torus classes cannot be distinguished, and the same is true of the cylinder and solid cylinder classes. This experiment suggests that there is a fairly robust range of \(\alpha\)-values where the metric meaningfully takes into account both Reeb graph and local persistence structures for distinguishing shapes. We remark that we ran the same experiment using DRGs constructed via the Barcode Transform method; here the results were much less interpretable. This is due to the fact that the distance between Barcode Transform DRGs is strongly controlled by the homotopy types of the shapes (cf. Remark 4.11); it was difficult to tune parameters so that the underlying Vietoris-Rips complexes used in the construction consistently had the correct homotopy type. Graph Neural Network-Based Classification. To more thoroughly test the capability of DRGs to distinguish shapes, we use DRGs as inputs to a Graph Neural Network (GNN) classifier. Our data comes from the ModelNet10 dataset (Wu et al., 2015) of CAD models of household objects. 
The data consists of 10 classes of objects, pre-partitioned into a training set (3991 objects) and a testing set (908 objects). We sample 1024 points from each object to form a dataset of point clouds. From each point cloud, we extract two DRGs: one using \(z\)-coordinates as the filtration function and the other using \(x\)-coordinates (see the first row of Figure 3). The nodes of the DRGs are attributed with vectorizations of the persistence diagrams--in this case, we used summary statistics of the persistence diagrams, following (Ali et al., 2022), Definition 2.1. We also attributed each node with the average Euclidean position of its associated pointcloud. These vector-attributed graphs were used as input to a simple GNN, consisting of four convolutional layers of width 256, implemented in PyTorch. All neural networks in this experiment were trained on a single CPU. Results of the classification experiment are reported in Table 1. Besides results for the DRGs with filter function given by \(z\)-coordinate (\(\mathrm{DRG}_{z}\)) and \(x\)-coordinate (\(\mathrm{DRG}_{x}\)), we also report the combined prediction from the two models (\(\mathrm{DRG}_{xz}\)). The combined prediction was made by averaging the predictions of the \(\mathrm{DRG}_{z}\) and \(\mathrm{DRG}_{x}\) models--we also trained a GNN using a disjoint pair of DRGs (with \(x\) and \(z\)-coordinate filtrations) to represent each shape and got similar classification scores. \begin{table} \begin{tabular}{c|c c c c c} \hline \hline PointNet & Reeb & Dgms & \(\mathrm{DRG}_{z}\) & \(\mathrm{DRG}_{x}\) & \(\mathrm{DRG}_{\mathrm{xz}}\) \\ \hline 89.43 & 77.64 & 63.55 & 84.69 & 85.24 & 87.11 \\ \hline \hline \end{tabular} \end{table} Table 1: ModelNet10 Classification Results. Figure 4: Synthetic shape dataset results. The top row shows a sample from each of the shape classes in the experiment; points are colored by their filter function values (first PCA coordinate). The remaining figures show Multidimensional Scaling embeddings of the dataset, coming from pairwise distance matrices with respect to \(d_{\mathrm{FGW},\alpha}\) for various \(\alpha\)-values. Figure 3: Examples of DRGs created with our methods. See Section 6 for a detailed description. To test the contributions of the diagram attributes, we also trained a network on graphs where nodes were only attributed with average Euclidean coordinates (\(\mathrm{Reeb}\)). Likewise, we tested the contribution of the graph structure by converting each DRG into a complete graph on its nodes, each of which is attributed with persistence statistics and Euclidean coordinates (\(\mathrm{Dgms}\)). We see that the combination of graph structure and topological attributes provides a large boost in classification accuracy, with the best accuracy obtained by the combination of \(x\) and \(z\)-filtrations. To test against a baseline, we use the popular PointNet architecture for point cloud classification (Qi et al., 2017). The PointNet classification accuracy is essentially state-of-the-art for ModelNet10 classification (when using only point clouds as input, without additional structure from the CAD models), and we see that it has a slight edge over the DRG classification score. However, we note that the PointNet model5 contains 3,463,763 parameters, compared to our 209,162 parameter GNN. 
Moreover, achieving this level of accuracy took \(\sim\)12 hours of training time for PointNet, while our GNN model trained in \(\sim\)10 minutes; we preprocessed the data to extract Reeb graphs, which took \(\sim\)1.5 hours, bringing the total time for processing and training to \(\sim\)3 hours (processing using both \(x\) and \(z\) filtrations). This suggests that the DRG representations have a rich structure with easily learnable features. Footnote 5: A PyTorch implementation, following [https://github.com/nikitakaraevv/pointnet](https://github.com/nikitakaraevv/pointnet). We tested the representational richness of DRGs further by retraining the DRG and PointNet models on only 10% of the ModelNet10 training data, then testing classification accuracy on the full training set. In this sparse training data setting, we see that all models still perform reasonably well, but point out that the gap between PointNet and \(\mathrm{DRG}_{xz}\) has essentially vanished, even though the latter is less complex by an order of magnitude. ## 7 Discussion In this paper, we introduced formalism for topologically attributed graphs and provided theoretical results on their stability. We also demonstrated the potential applicability of these ideas through proof-of-concept experiments. Future work will involve building a closer connection between the theory of topologically attributed Reeb graphs and their computational execution. Notably, our computational pipeline does not incorporate the more sheaf-theoretic or categorical features of decorated Reeb graphs, and integration of these aspects is an important goal. We also plan to continue to develop the computational pipeline toward more robust applications. One interesting direction will be to develop the pipeline to handle more general filtration functions, or to incorporate discovery of effective filter functions into a machine learning framework. We also intend to extend this framework to handle more general attributed simplicial complexes, to which newly developed tools of Topological Deep Learning (Hajji et al., 2023) will apply.
2309.16346
Symmetric matrices with banded heavy tail noise: local law and eigenvector delocalization
In this work we consider deterministic, symmetric matrices with heavy-tailed noise imposed on entries within a fixed distance $K$ to the diagonal. The most important example is the discrete 1d random Schr\"odinger operator defined on $0,1,\cdots,N$ where the potentials imposed on the diagonal have heavy-tailed distributions and in particular may not have a finite variance. We assume the noise is of the form $N^{-\frac{1}{\alpha}}\xi$ where $\xi$ are some i.i.d. random potentials. We investigate the local spectral statistics under various assumptions on $\xi$: when it has all moments but the moment explodes as $N$ gets large; when it has finite $\alpha+\delta$-moment for some $\delta>0$; and when it is the $\alpha$-stable law. We prove in the first two cases that a local law for each element of the Green function holds at the almost optimal scale with high probability. As a by-product we derive a Wegner estimate, eigenvalue rigidity and eigenvector de-localization in the infinity norm. For the case of $\alpha$-stable potentials imposed on the discrete 1d Laplacian, we prove that (i) Green function entries are bounded with probability tending to one, implying eigenvectors are de-localized in the infinity norm; (ii) with positive probability some entries of the Green function do not converge to that of the deterministic matrix; and (iii) the trace of the Green function converges to the Stieltjes transform of the arcsine law with probability tending to one. These findings are in contrast to properties of Levy matrices recently uncovered. We extend our results to other scalings in front of the noise and derive local laws on the corresponding intermediate scales, and further extend to Wigner matrices perturbed by finite band heavy-tail noise.
Yi Han
2023-09-28T11:13:49Z
http://arxiv.org/abs/2309.16346v1
# Symmetric matrices with banded heavy tail noise: local law and eigenvector delocalization ###### Abstract. In this work we consider deterministic, symmetric matrices with heavy-tailed noise imposed on entries within a fixed distance \(K\) to the diagonal. The most important example is the discrete 1d random Schrodinger operator defined on \(0,1,\cdots,N\) where the potentials imposed on the diagonal have heavy-tailed distributions and in particular may not have a finite variance. We assume the noise is of the form \(N^{-\frac{1}{\alpha}}\xi\) where \(\xi\) are some i.i.d. random potentials. We investigate the local spectral statistics under various assumptions on \(\xi\): when it has all moments but the moment explodes as \(N\) gets large; when it has finite \(\alpha+\delta\)-moment for some \(\delta>0\); and when it is the \(\alpha\)-stable law. We prove in the first two cases that a local law for each element of the Green function holds at the almost optimal scale with high probability. As a by-product we derive a Wegner estimate, eigenvalue rigidity and eigenvector de-localization in the infinity norm. For the case of \(\alpha\)-stable potentials imposed on the discrete 1d Laplacian, we prove that (i) Green function entries are bounded with probability tending to one, implying eigenvectors are de-localized in the infinity norm; (ii) with positive probability some entries of the Green function do not converge to that of the deterministic matrix; and (iii) the trace of the Green function converges to the Stieltjes transform of the arcsine law with probability tending to one. These findings are in contrast to properties of Levy matrices recently uncovered. We extend our results to other scalings in front of the noise and derive local laws on the corresponding intermediate scales, and further extend to Wigner matrices perturbed by finite band heavy-tail noise. Supported by EPSRC grant EP/W524141/1 ## 1. Introduction In this paper we investigate deterministic, symmetric matrices perturbed by noise acting on the diagonal, or only acting on elements sufficiently close to the diagonal. A primary example is the 1d random Schrodinger operator defined on \(\{0,1,\cdots,N\}\) with zero boundary condition \[H_{n}\varphi(k)=\varphi(k-1)+\varphi(k+1)+\mathfrak{a}(k)\varphi(k),\quad \varphi(0)=\varphi(N)=0. \tag{1.1}\] where \(\mathfrak{a}(k)\) are random potentials that have variance \(\frac{1}{N}\), so that the effect of noise induces fluctuations on microscopic scales but the global density of states is the same as that of the discrete Laplacian. Another motivating example is given by the tridiagonal matrix models for beta ensembles [17] and their generalizations, see Example 1.34 for details. A typical assumption in the literature for these matrices is that the random potentials only act on the diagonal and sub-diagonal, and the randomness has sub-Gaussian tails, or at least a finite second moment. Under both assumptions, the methods based on transfer matrix recursions and Prufer coordinates are applicable and lead to a detailed description of the bulk and edge behavior, see for example [33], [36], [28], [7], [15]. Motivated by recent progress in Levy matrices [3], [2] (symmetric matrices with i.i.d. entries having \(\alpha\)-stable distribution), in this paper we investigate the statistical properties of these matrices assuming that the random potentials no longer have a finite second moment. Moreover, we are not restricted to tridiagonal or Jacobi matrices, and the noise can act on all matrix elements having distance at most \(K\) to the diagonal. 
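For orientation, a minimal sketch of the kind of matrix studied here, the 1d discrete Laplacian plus noise of size \(N^{-1/\alpha}\) within a band of width \(K\) around the diagonal, is given below; the symmetrized Pareto variables are only a convenient stand-in satisfying a tail condition of the form (1.13), an assumption of this sketch rather than a choice made in the paper.

```python
import numpy as np


def heavy_tail_band(N, K=1, alpha=1.5, seed=0):
    """Banded heavy-tailed noise: A_ij = N^{-1/alpha} * xi_ij for |i-j| <= K,
    symmetric, zero elsewhere.  Here xi is a symmetrized Pareto(alpha) variable,
    so P(|xi| >= x) = x^{-alpha} for x >= 1 (no finite variance when alpha < 2)."""
    rng = np.random.default_rng(seed)
    A = np.zeros((N, N))
    for k in range(K + 1):
        m = N - k
        xi = (1.0 + rng.pareto(alpha, size=m)) * rng.choice([-1.0, 1.0], size=m)
        A[np.arange(m), np.arange(m) + k] = N ** (-1.0 / alpha) * xi
        A[np.arange(m) + k, np.arange(m)] = A[np.arange(m), np.arange(m) + k]
    return A


# H_N = H_N^infty + A_N, with the 1d discrete Laplacian as the deterministic part
N = 500
H_inf = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
H = H_inf + heavy_tail_band(N, K=2, alpha=1.5)
```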
Meanwhile, the method of moments, which is also applicable to tridiagonal matrices with all moments [37], [32], is far from effective when we are in the heavy-tailed setting. Our investigation of these matrices proceeds through a direct computation of their Green functions. We will assume knowledge of a local law for the matrix when all random potentials are set to zero. Then we carry out a resolvent expansion to investigate to what extent the local law continues to hold for the noisy matrix. We uncover a complex picture: when we impose \(\alpha\)-stable noise, the trace of the Green function should converge to the correct limit (the Stieltjes transform of the arcsine law, for example) with high probability, but with high probability some individual entries of the Green function will not converge to the deterministic limit. Nonetheless, with high probability the Green function entries are bounded, so eigenvectors are de-localized in the \(L^{\infty}\) norm. The results we propose depend strongly on the tails of the random variable \(\xi\), so we state the theorems separately under the corresponding tail assumptions. For several of the theorems we prove, we only show that a property holds with probability tending to \(1\), while counterexamples imply that it should not hold with overwhelming probability (i.e. with probability at least \(1-N^{-D}\) for any \(D>0\)). These properties are all specific to the finite band structure of the noise that we impose, and are known to be false for Levy matrices [35], [3] and most other random matrix models. ### Models and assumptions In this paper we will mostly investigate matrices of the form \[H_{N}=H_{N}^{\infty}+A_{N} \tag{1.2}\] where \(H_{N}^{\infty}\) is a (deterministic) symmetric matrix and \(A_{N}\) is the noisy matrix with a finite bandwidth. The notation \(H_{N}^{\infty}\) suggests that we are looking at a zero temperature model where the noise is frozen. The primary example of deterministic symmetric matrices falling under the assumption of this paper is the discrete Laplacian in 1d, that is, \[H_{N}^{\infty}=\begin{pmatrix}0&1&0&\cdots&0\\ 1&0&1&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&\vdots\\ 0&\cdots&1&0&1\\ 0&\cdots&0&1&0\end{pmatrix}. \tag{1.3}\] We first introduce some general notations. Define the resolvent (or Green's function) for a matrix \(H_{N}\) and its trace \(m_{N}\) as \[G(z):=\frac{1}{H_{N}-z},\quad m_{N}(z):=\frac{1}{N}\operatorname{Tr}G(z), \quad z\in\mathbb{C}^{+}, \tag{1.4}\] where \(\mathbb{C}_{+}\equiv\{E+i\eta\in\mathbb{C}:\eta>0\}\). We denote by \(G^{\infty}(z)\) the resolvent matrix of \(H_{N}^{\infty}\). The following estimate is the main assumption on \(H_{N}^{\infty}\) that we will need throughout this paper. **Proposition 1.1**.: _Given any small \(\epsilon>0\) and \(\kappa>0\), define_ \[\mathcal{S}\equiv\mathcal{S}(\epsilon,\kappa):=\{z=E+i\eta:|E|\leq 2-\kappa,N^{-1+ \epsilon}\leq\eta\leq 1\}. \tag{1.5}\] _Then the resolvent matrix \(G^{\infty}\) of \(H^{\infty}_{N}\) defined in (1.3) satisfies the following (deterministic) upper bound: there exists a constant \(C\) depending only on \(\epsilon,\kappa\) such that_ \[\sup_{z\in\mathcal{S}}\max_{1\leq i,j\leq N}\big{|}G^{\infty}_{ij}(z)\big{|}\leq C. \tag{1.6}\] The method to prove Proposition 1.1 is to compute explicitly the resolvent matrix \(G^{\infty}\) and check that its elements are bounded on \(\mathcal{S}\). We will be working more generally with other symmetric matrices \(H^{\infty}_{N}\) satisfying the same estimate. 
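As a sanity check, the bound (1.6) can also be probed numerically for the discrete Laplacian; a minimal sketch (brute-force inversion on a coarse grid over \(\mathcal{S}\), for illustration only) is given below.

```python
import numpy as np


def max_resolvent_entry(N=300, kappa=0.1, eps=0.1, n_E=15, n_eta=8):
    """Numerically probe (1.6): max_{i,j} |G^infty_ij(z)| for the 1d discrete
    Laplacian (1.3), over a coarse grid of z = E + i*eta in S(eps, kappa)."""
    H = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    worst = 0.0
    for E in np.linspace(-2 + kappa, 2 - kappa, n_E):
        for eta in np.geomspace(N ** (-1 + eps), 1.0, n_eta):
            G = np.linalg.inv(H - (E + 1j * eta) * np.eye(N))
            worst = max(worst, np.abs(G).max())
    return worst


print(max_resolvent_entry())  # stays O(1) as N grows, consistent with (1.6)
```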
**Definition 1.2**.: _(\(H^{\infty}_{N}\): deterministic, symmetric matrix with bounded resolvent) \(H^{\infty}_{N}\) is an \(N\times N\) symmetric square matrix with elements \(h_{ij}=h_{ji}\) such that for any \(\kappa>0\) and \(\epsilon>0\) there exists a constant \(C\), depending only on \(\epsilon,\kappa\), with_

\[\sup_{z\in\mathcal{S}(\epsilon,\kappa)}\max_{1\leq i,j\leq N}\big|G^{\infty}_{ij}(z)\big|\leq C. \tag{1.7}\]

_where \(\mathcal{S}(\epsilon,\kappa)\) is defined in (1.5)._

_Remark 1.3_.: This assumption and the forthcoming proofs can be modified, after multiplying by a scalar, to cover symmetric matrices that satisfy the Green function estimate (1.6) with \(E=\Re(z)\) satisfying \(|E|\leq D-\kappa\) rather than \(2-\kappa\), for any \(D>0\).

Now we introduce our assumptions on the noisy matrix \(A_{N}\). In the classical context of Anderson localization [5], [31], [26], the standard assumption is \(A_{N}=\operatorname{diag}(\mathfrak{a}_{1},\cdots,\mathfrak{a}_{N})\), where \((\mathfrak{a}_{1},\cdots,\mathfrak{a}_{N})\) is a family of i.i.d. random potentials with \(N\)-independent variance. In such cases, however, the eigenvectors are localized and decay exponentially fast [4], [22]. To restore de-localization, one needs to reduce the variance of the random potentials. Typical assumptions on \(A_{N}\) in the literature [27], [28] can be summarized as follows:

**Assumption 1.4**.: _(Standard assumption on noise, tridiagonal)_

\[A_{ij}=A_{ji},\quad A_{ij}=0\text{ for all }|i-j|>1,\quad A_{ij}=\frac{1}{\sqrt{N}}\xi_{ij}, \tag{1.8}\]

_where \(\xi_{ij},\,1\leq i\leq j\leq N,\ |i-j|\leq 1\), are a set of independent random variables (possibly depending on \(N\)) that satisfy, for some \(\delta>0\),_

\[\mathbb{E}[\xi_{ij}]=0,\quad\mathbb{E}[|\xi_{ij}|^{2+\delta}]\leq C<\infty. \tag{1.9}\]

In this paper we make the following new assumptions on the noisy matrix \(A_{N}\), which cover all \(\alpha\)-stable laws that have infinite variance. Also, the matrix \(A_{N}\) need not be diagonal or tridiagonal: it has a finite but arbitrary bandwidth \(K\). As illustrated in the proof, we can let \(K\) be slowly growing, comparable to \(\log N\). In the heavy-tailed case, we take the scaling \(\frac{1}{N^{\frac{1}{\alpha}}}\) in the definition of \(A_{N}\). This scaling is consistent with the Levy matrix literature, and (as we will show) it is the typical one that leads to random fluctuations on microscopic scales but guarantees some form of local law on all mesoscopic scales \([N^{-1+\epsilon},1]\).

**Definition 1.5**.: _(Finite band noisy matrix \(A_{N}\)) Given any \(\alpha\in(0,2)\), let \(A_{N}=(A_{ij})\) be an \(N\times N\) matrix that satisfies, for a fixed integer (bandwidth) \(K\geq 0\):_

\[A_{ij}=A_{ji},\quad A_{ij}=0\text{ for all }|i-j|>K,\quad A_{ij}=\frac{1}{N^{\frac{1}{\alpha}}}\xi_{ij},\text{ for all }|i-j|\leq K. \tag{1.10}\]

_where \(\xi_{ij},\ 1\leq i\leq j\leq N,\ |i-j|\leq K\), are independent random variables that satisfy one of the following three assumptions (for \(i>j\) we set \(\xi_{ij}=\xi_{ji}\)):_

**Assumption 1.6**.: _(Finite but exploding moments) Let \(A_{N}\) be the square matrix defined in Definition 1.5 for some \(\alpha\in(0,2)\). We assume that for some \(\omega>0\), setting \(q=N^{\frac{\omega}{100}}\), we have the following moment estimates: for some \(\delta>0\),_

\[|\mathbb{E}[A_{ij}]|\leq\frac{C}{N^{1+\delta}},\quad\mathbb{E}[|A_{ij}|^{p}]\leq\frac{C}{Nq^{p-\alpha}}\text{ for any }p\geq 2.
\tag{1.11}\]

A typical case covered is \(\xi_{ij}=\widetilde{\xi}_{ij}1_{|\widetilde{\xi}_{ij}|\leq N^{\frac{1}{\alpha}}q^{-1}}\), where \(\widetilde{\xi}\) has a finite \(\alpha\)-th moment and a symmetric law. To check this it suffices to note that \(|A_{ij}|^{p}\leq(q^{-1})^{p-\alpha}|A_{ij}|^{\alpha}\).

**Assumption 1.7**.: _(Finite \(\alpha+\delta\)-moment) Let \(A_{N}\) be the square matrix defined in Definition 1.5 for some \(\alpha\in(0,2)\), and assume that the random variables \(\xi_{ij}\) satisfy, for some \(\delta>0\),_

\[\xi_{ij}\overset{\mathrm{Law}}{=}-\xi_{ij},\quad\mathbb{E}[|\xi_{ij}|^{\alpha+\delta}]\leq C<\infty,\quad\text{ for all }j-K\leq i\leq j. \tag{1.12}\]

**Assumption 1.8**.: _(\(\alpha\)-stable noise) Let \(A_{N}\) be the square matrix defined in Definition 1.5 for some \(\alpha\in(0,2)\), and assume that the random variables \(\xi_{ij}\) are i.i.d. modulo the symmetry restriction, have a symmetric law \(\xi_{ij}\overset{\mathrm{Law}}{=}-\xi_{ij}\), and for a bounded function \(L(x)\)_

\[G(x):=\mathbb{P}(|\xi_{ij}|\geq x)=L(x)x^{-\alpha},\quad x\geq 1. \tag{1.13}\]

_Remark 1.9_.: After some modifications our results generalize to the case where \(L(x)\) is a slowly varying function. The boundedness assumption in (1.13) is only meant to simplify some computations. All the following results can be easily generalized to the case where \(\xi_{ij}\) is replaced by \(\xi_{ij}+c_{ij}\) where the constant \(c_{ij}\) satisfies \(\frac{1}{N^{\frac{1}{\alpha}}}|c_{ij}|\leq\frac{C}{N^{1+\delta}}\) for some \(\delta>0\).

_Remark 1.10_.: The assumption that \(\xi_{ij}\) has a symmetric law is used to guarantee that truncated versions of \(\xi_{ij}\) have mean zero. This assumption can possibly be removed with some additional effort, and is not used elsewhere in the proofs.

### Finite but exploding moments: local laws and corollaries

The main theorem on local laws under Assumption 1.6 is the following:

**Theorem 1.11**.: _(Local law assuming all moments) Assume that the deterministic matrix \(H_{N}^{\infty}\) satisfies Definition 1.2 and the noisy matrix satisfies Assumption 1.6. Let \(G(z)\) be the Green function of \(H_{N}:=H_{N}^{\infty}+A_{N}\), and \(G^{\infty}(z)\) be the Green function of \(H_{N}^{\infty}\). Then for any \(\epsilon>0\) and \(\kappa>0\), we can find constants \(C>0\) and \(\nu>0\) depending only on \(\epsilon\), \(\kappa\), \((\xi_{ij})_{i,j}\) and the upper bound in (1.7) such that_

\[\mathbb{P}\left(\sup_{z\in\mathcal{S}(\epsilon,\kappa)}\max_{1\leq i,j\leq N}\big|[G-G^{\infty}]_{ij}(z)\big|\geq C(\log N)^{\log\log N}\left(\frac{1}{q}+\frac{1}{N^{\delta}}+\sqrt{\frac{1}{N\eta}}\right)\right)\leq e^{-\nu(\log N)^{\log\log N}}. \tag{1.14}\]

**Example 1.12**.: _(No entry-wise local law with overwhelming probability) We show by a counterexample that Assumption 1.6 is almost optimal for such a local law, and the claimed local law may fail under Assumption 1.7 or 1.8. To see this, we choose \(H_{N}^{\infty}\) to be the matrix (1.3), and let \(A_{N}\) be a diagonal matrix with only its \((i,i)\) entry nonzero, with \(A_{ii}=\frac{1}{N^{\frac{1}{\alpha}}}\xi_{ii}\), where \(\xi_{ii}\) has the \(\alpha\)-stable law. From the resolvent expansion one must have \(G_{ii}(z)-G_{ii}^{\infty}(z)=-G_{ii}(z)A_{ii}G_{ii}^{\infty}(z)\). Properties of the arcsine law imply that \(G_{ii}^{\infty}(z)\) is bounded away from zero.
If (1.14) were true in this setting then \(G_{ii}\) would be close to \(G_{ii}^{\infty}\) with overwhelming probability (i.e., with probability at least \(1-N^{-D}\) for any \(D>0\)), but with probability of order \(N^{-1}\) we have \(|A_{ii}|>1\), leading to a contradiction. For more discussion, see Theorem 1.21 (3)._

The local law in Theorem 1.11 has the following corollaries:

**Corollary 1.13**.: _(Wegner's estimate) Assume that \(H_{N}^{\infty}+A_{N}\) satisfies all the assumptions in Theorem 1.11. For any interval \(I\subset\mathbb{R}\) let \(\mathcal{N}_{I}\) denote the number of eigenvalues of \(H_{N}^{\infty}+A_{N}\) in \(I\). Then for any \(\epsilon>0\) and \(\kappa>0\), one can find constants \(C>0\) and \(\nu>0\) depending on \(\epsilon,\kappa,\alpha\) such that_

\[\mathbb{P}\left(\sup_{I\in I_{\kappa,\epsilon}}\frac{\mathcal{N}_{I}}{N|I|}\geq C\right)\leq e^{-\nu(\log N)^{\log\log N}} \tag{1.15}\]

_where \(I_{\kappa,\epsilon}\) is the set of all intervals \(I\subset[-2+\kappa,2-\kappa]\) with length \(N^{-1+\epsilon}<|I|<1\)._

The local law also helps us to obtain convergence rates of the density of states to the arcsine law, and rigidity estimates of eigenvalues around their classical locations.

**Corollary 1.14**.: _(Local arcsine law and eigenvalue rigidity) Assume the matrix \(H_{N}^{\infty}\) is the 1d Laplacian (1.3) and \(A_{N}\) satisfies Assumption 1.6. Denote by \(\rho^{as}\) the arcsine law, i.e. the probability law on \(\mathbb{R}\) with density_

\[\rho^{as}(E)=\frac{1}{2\pi\sqrt{1-E^{2}/4}}1_{|E|<2}. \tag{1.16}\]

_Denote by \(\mu_{N}\) the empirical measure of eigenvalues of \(H_{N}^{\infty}+A_{N}\), i.e. the measure \(\frac{1}{N}\sum_{i}\delta_{\lambda_{i}}\) where \(\lambda_{i}\) are the eigenvalues of \(H_{N}^{\infty}+A_{N}\)._

_Then given any \(\kappa>0\) we can find \(\nu>0\) and (sufficiently small) \(c_{*}>0\) depending on \(\kappa\) and \(\alpha\), such that_

\[\mathbb{P}\left(\sup_{I\subset[-2+\kappa,2-\kappa]}|\mu_{N}(I)-\rho^{as}(I)|\geq N^{-c_{*}}\right)\leq e^{-\nu(\log N)^{\log\log N}} \tag{1.17}\]

_where the supremum ranges over all intervals \(I\subset[-2+\kappa,2-\kappa]\)._

_Without loss of generality we consider eigenvalues in \([0,2-\kappa]\). For \(i=1,\cdots,\frac{N}{2}\) define the classical locations of the arcsine law \(\rho^{as}\) as follows: the classical location of the \(i\)-th positive eigenvalue is the constant \(\gamma_{i}\) that satisfies_

\[N\int_{0}^{\gamma_{i}}\rho^{as}(dx)=i-\frac{1}{2},\quad i=1,\cdots,\frac{N}{2}. \tag{1.18}\]

_Let \(0\leq\lambda_{1}^{\prime}\leq\lambda_{2}^{\prime}\leq\cdots\) be the ordered non-negative eigenvalues of the \(N\times N\) random matrix \(H_{N}^{\infty}+A_{N}\). Then for any sufficiently small \(\kappa>0\) we can find positive constants \(C\), \(\nu\) and (sufficiently small) \(c_{*}\) depending on \(\kappa\) and \(\alpha\) such that_

\[\mathbb{P}\left(\sup_{i\geq 1:\lambda_{i}^{\prime}\leq 2-\kappa}|\lambda_{i}^{\prime}-\gamma_{i}|\geq CN^{-c_{*}}\right)\leq e^{-\nu(\log N)^{\log\log N}}. \tag{1.19}\]

The proof of this corollary is in Section 4.2. In these estimates we have excluded the spectral edge \(\pm 2\) because the arcsine law has unbounded density there. To obtain a nontrivial scaling limit at the edge one should take a different scaling of the potentials (scale them by \(N^{-\frac{3}{2}}\) rather than \(N^{-\frac{1}{2}}\) when they have finite second moments; see for example [23]).
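For orientation, the classical locations (1.18) can be computed in closed form: \(N\pi^{-1}\arcsin(\gamma_{i}/2)=i-\frac{1}{2}\), i.e. \(\gamma_{i}=2\sin\big(\pi(i-\tfrac{1}{2})/N\big)\). The following is a minimal numerical sketch of the rigidity statement (1.19), reusing the matrix `H` generated in the sketch after the introduction (illustrative only; for that heavy-tailed noise the relevant rigidity statement is the one recorded later in Corollary 1.30).

```python
import numpy as np

def classical_locations(N):
    """gamma_i from (1.18): N * arcsin(gamma_i / 2) / pi = i - 1/2."""
    i = np.arange(1, N // 2 + 1)
    return 2.0 * np.sin(np.pi * (i - 0.5) / N)

eigs = np.sort(np.linalg.eigvalsh(H))          # H = H_N^infty + A_N from the earlier sketch
pos = eigs[eigs >= 0]                          # ordered non-negative eigenvalues lambda'_i
gamma = classical_locations(H.shape[0])
m = min(len(pos), len(gamma))
bulk = gamma[:m] <= 2 - 0.5                    # stay away from the edge (kappa = 0.5)
print(np.max(np.abs(pos[:m] - gamma[:m])[bulk]))   # small for large N (rigidity)
```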
_Remark 1.15_.: In the case of tridiagonal matrices (taking \(K=1\)) and when the random variables \(\xi_{ij}\) have sub-Gaussian tails, the results on bulk rigidity and eigenvector de-localization are not exactly new, as they can be derived from the SDE characterization of microscopic bulk scaling limits in [36]; see [8] for details. These results, however, seem to be new when the bandwidth is larger, \(K>1\), when the entries have heavy tails, and when we consider the matrices \(H_{N}^{\sigma}\) of (1.24) in which the random potentials \(\xi_{ij}\) do not have the diffusive scaling.

_Remark 1.16_.: We could allow the bandwidth \(K\) in Definition 1.5 to be \(N\)-dependent, in such a way that \(K=K(N)=o(N^{\sigma})\) as \(N\to\infty\), for any \(\sigma>0\). As an example one can take \(K=\log N\). In this case, the results of Theorem 1.11 and Corollary 1.13 remain true, and the proof is essentially unchanged.

A limitation of Corollary 1.14 is that the convergence rate is very slow and we do not expect it to be optimal. The reason is that in the estimate (1.14), no matter the value of \(\eta\), the convergence rate is no faster than \(N^{-\frac{\epsilon}{10\alpha}}\), which is much slower than the \(N^{-1+\epsilon}\) rate that can easily be derived when all moments are finite. This is a typical feature for random matrices with sparsity or heavy tails, and we are not quite able to improve these estimates.

### Local laws assuming more moments

**Theorem 1.17**.: _(Entry-wise local law given additional moments) Assume that \(H_{N}^{\infty}\) satisfies Definition 1.2. Assume that the noisy matrix \(A_{N}=(A_{ij})\) satisfies Assumption 1.7, that is, each \(\xi_{ij}\) has a finite \(\alpha+\delta\)-th moment. Then for any \(\epsilon>0\) and \(\kappa>0\), we can find a constant \(C>0\) and some (small) \(c_{*}>0\) depending only on \(\epsilon,\kappa\) and \(\xi_{ij}\) such that_

\[\mathbb{P}\left(\sup_{z\in\mathcal{S}(\epsilon,\kappa)}\max_{1\leq i,j\leq N}\left|G_{ij}(z)-G_{ij}^{\infty}(z)\right|\geq C\left(N^{-\frac{\epsilon}{20\alpha}}+N^{\frac{\epsilon}{40}}\sqrt{\frac{1}{N\eta}}\right)\right)\leq N^{-c_{*}\epsilon}. \tag{1.20}\]

The proof is in Section 3.3.

**Corollary 1.18**.: _(Eigenvector de-localization) Under the same assumptions as Theorem 1.17, for any \(\epsilon>0\) and \(\kappa>0\), we can find a sufficiently small constant \(c_{*}>0\) and a large constant \(C>0\) depending on \(\epsilon,\kappa,\alpha\) such that the following is true: the probability that \(H_{N}^{\infty}+A_{N}\) has an eigenvalue \(\lambda\in[-2+\kappa,2-\kappa]\) whose corresponding normalized eigenvector \(\mathbf{v}_{\lambda}\) satisfies \(\|\mathbf{v}_{\lambda}\|_{L^{2}}=1\) and \(\|\mathbf{v}_{\lambda}\|_{L^{\infty}}\geq CN^{-\frac{1}{2}+\epsilon}\) is at most \(N^{-c_{*}\epsilon}\). If instead we assume that \(A_{N}\) satisfies Assumption 1.6, then the said event has probability at most \(e^{-\nu(\log N)^{\log\log N}}\)._

The proof of this corollary is standard and omitted (see also the proof of Corollary 1.23). See for example [10], Theorem 2.10. To prove the first part we use the local law in Theorem 1.17 and to prove the second part we use the local law in Theorem 1.11.

### Local laws and de-localization for alpha-stable noise

When the elements \(A_{ij}\) of \(A_{N}\) have \(\alpha\)-stable laws with the \(\frac{1}{N^{\frac{1}{\alpha}}}\) scaling in front of them, the proof of eigenvector de-localization is significantly more complicated and will be the task of this section.
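The source of the difficulty is the handful of band entries that remain of order one after the \(N^{-\frac{1}{\alpha}}\) rescaling: their expected number is of order \((K+1)\delta^{-\alpha}\), independent of \(N\). A quick numerical illustration of this count (a sketch only; the exact tail \(\mathbb{P}(|\xi|\geq x)=x^{-\alpha}\) and all names are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N, alpha, K, delta = 10_000, 1.2, 1, 0.5

# count band entries (i <= j, |i-j| <= K) with |A_ij| >= delta after the N^(-1/alpha) rescaling;
# the expected count is roughly (K+1) * delta**(-alpha), independent of N
count = 0
for t in range(K + 1):
    xi = rng.uniform(size=N - t) ** (-1.0 / alpha)        # |xi| with tail x**(-alpha)
    count += int(np.sum(xi * N ** (-1.0 / alpha) >= delta))
print(count, (K + 1) * delta ** (-alpha))                  # observed vs heuristic mean
```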
Before the proof, we first show that eigenvector de-localization should not be expected to take place with overwhelming probability (i.e. probability at least \(1-N^{-D}\) for any \(D>0\)).

**Example 1.19**.: _(Localized approximate eigenfunctions) Assume that \(H_{N}^{\infty}\) is the matrix (1.3), and let \(A_{N}\) be a diagonal matrix with all diagonal elements \(A_{ii}=\frac{1}{N}\xi_{ii}\), where \(\xi_{ii}\) are i.i.d. variables with the Cauchy distribution. We can construct an approximate eigenfunction of \(H_{N}\) as follows: there is an eigenvector \(\mathbf{v}\) of \(H_{N}\) with eigenvalue \(\lambda\). Assume without loss of generality that we can find three consecutive indices \(i-1,i,i+1\) such that the three coordinates \(\mathbf{v}_{i-1},\mathbf{v}_{i},\mathbf{v}_{i+1}\) are non-zero, and that \(|\mathbf{v}_{i}|=\|\mathbf{v}\|_{\infty}\). In step 1, we re-sample the value of \(A_{ii}\), and assume that its new value \(A^{\prime}_{ii}\) satisfies \(|\lambda-A^{\prime}_{ii}|\leq N^{-0.5}|\lambda-A_{ii}|\). Then for the relation \(\mathbf{v}_{i-1}+\mathbf{v}_{i+1}=(\lambda-A^{\prime}_{ii})\mathbf{v}^{\prime}_{i}\) to be satisfied with \(\mathbf{v}_{i-1},\mathbf{v}_{i+1}\) fixed, one needs to have \(|\mathbf{v}^{\prime}_{i}|\geq N^{0.5}|\mathbf{v}_{i}|\geq 1\). In step 2, we re-sample the values of \(A_{i-1,i-1}\) and \(A_{i+1,i+1}\), and assume that with the new values both inequalities \(|\mathbf{v}_{i-2}+\mathbf{v}^{\prime}_{i}-(\lambda-A^{\prime}_{i-1,i-1})\mathbf{v}_{i-1}|<0.1N^{-0.5}\) and \(|\mathbf{v}^{\prime}_{i}+\mathbf{v}_{i+2}-(\lambda-A^{\prime}_{i+1,i+1})\mathbf{v}_{i+1}|<0.1N^{-0.5}\) hold. Then we define a new vector \(\mathbf{v}^{\prime}=(\mathbf{v}_{1},\cdots,\mathbf{v}_{i-1},\mathbf{v}^{\prime}_{i},\mathbf{v}_{i+1},\cdots,\mathbf{v}_{N})\). Re-normalizing \(\mathbf{v}^{\prime}\) so that it has \(L^{2}\) norm one, we have by construction that \(\mathbf{v}^{\prime}\) is completely localized in the \(L^{\infty}\) sense: \(\|\mathbf{v}^{\prime}\|_{\infty}\sim 1\). Moreover, \(\mathbf{v}^{\prime}\) is an approximate eigenvector of \(H^{\prime}_{N}\), the matrix with \(A_{i-1,i-1},A_{ii},A_{i+1,i+1}\) re-sampled, in the sense that \(\|(H^{\prime}_{N}-\lambda)\mathbf{v}^{\prime}\|\leq N^{-0.5}\), and the probability that these three random variables can be sampled to satisfy the aforementioned conditions is at least \(N^{-3}\), which is much larger than the probability of an overwhelmingly small event._

This hints that the obstruction to eigenvector de-localization is that some of the atypical indices of \(A_{ii}\) (the indices \(i\) such that \(|A_{ii}|\sim 1\)) may be too close to each other. With this idea, we prove the following de-localization result that holds with probability tending to one. We first introduce some definitions:

**Definition 1.20**.: _(Removal set) For any non-negative integers \(K\) and \(p\), define \(\Delta_{K}\) as the set of \(E\in(-2,2)\) such that \(\sin(\ell\arccos(\frac{1}{2}E))=0\) for some \(\ell=1,2,\cdots,K\). Then define \(\Delta_{K}^{p}:=\Delta_{K}+[-10^{-p},10^{-p}]\), which is a neighborhood of \(\Delta_{K}\) consisting of intervals of length \(2\times 10^{-p}\), centered at elements in \(\Delta_{K}\). Define a parameter region_

\[\mathcal{S}(\epsilon,\kappa,K,p):=\{z=E+i\eta:E\in[-2+\kappa,2-\kappa]\setminus\Delta_{K}^{p},\quad N^{-1+\epsilon}\leq\eta\leq 1\}.
\tag{1.21}\] _Note that if \(K=0\) (diagonal matrix) or \(K=1\) (tridiagonal matrix), then the set \(\Delta_{K}\) is empty, so that \(\Delta_{K}^{p}\) is empty as well, and \(\mathcal{S}(\epsilon,\kappa,K,p)\) is simply \(\mathcal{S}(\epsilon,\kappa)\)._ **Theorem 1.21**.: _(Entry-wise Green function bounds in \(\alpha\)-stable case) Assume that \(H^{\infty}_{N}\) is the matrix (1.3) of 1d Laplacian, and \(A_{N}\) satisfies Assumption 1.8, Then, with \(G\) denoting the Green function of \(H^{\infty}_{N}+A_{N}\),_ 1. _Green function is bounded: we can find a (large)_ \(C>0\) _and a (small)_ \(c_{*}>0\) _depending on_ \(\epsilon,\kappa,\alpha,K,p\) _such that for_ \(N\) _large,_ \[\mathbb{P}\left(\sup_{z\in\mathcal{S}(\epsilon,\kappa,K,p)}\max_{1\leq i,j\leq N }|G_{ij}(z)|\leq C\right)\geq 1-N^{-c_{*}\epsilon}.\] (1.22) 2. _Entry-wise local law fails with positive probability: we can find some constant_ \(C_{0}>0\) _and some_ \(P_{0}\in(0,1)\) _depending on_ \(\epsilon,\kappa,\alpha,K,p\) _such that for_ \(N\) _large,_ \[\mathbb{P}\left(\sup_{z\in\mathcal{S}(\epsilon,\kappa,K,p)}\max_{1\leq i,j\leq N }|G_{ij}(z)-G^{\infty}_{ij}(z)|\geq C_{0}\right)\geq P_{0}.\] (1.23) _In particular, if \(K=0\) (diagonal) or \(K=1\) (tridiagonal), then the constants \(C\) and \(C_{0}\) do not depend on \(K\) and \(p\), and the estimates hold for all \(z\in\mathcal{S}(\epsilon,\kappa)\)._ The proof of this theorem is given in Section 5. It is interesting to compare this result with Theorem 1.17: Theorem 1.17 deals with the case where \(\xi_{ij}\) has \(\alpha+\delta\) moments, so that the effect of noise is much smaller, and \(G_{ij}\) converges to \(G^{\infty}_{ij}\) with probability going to one; whereas this theorem (1.21) deals with the critical case where the noise has compelling effect with the Laplacian: the Green function is still bounded with high probability but most likely \(G_{ij}\) does not converge to \(G_{ij}^{\infty}\) for some entries \((i,j).\) _Remark 1.22_.: The estimates in Theorem 1.21 can be made uniform over \(z\in\mathcal{S}(\epsilon,\kappa)\) when \(K=0\) or \(K=1,\) i.e. when we consider tridiagonal or Jacobi matrices. This is also the case covered by most existing literature on random Schrodinger operators. When \(K\geq 2,\) our estimates hold outside a removal set defined in 1.20, and it seems hard to make the estimates uniform over \(\mathcal{S}(\epsilon,\kappa)\). We are not sure if this is merely a technical restriction or is truly a barrier that the Green function will be unbounded near those removed sets. Nonetheless, by choosing \(p\) large enough, we have derived Green function estimates for Lebesgue a.e. \(E\in(-2+\kappa,2-\kappa)\) despite the bound being not uniform. The proof of Theorem 1.21 uses more properties on the matrix \(H_{N}^{\infty}\) than those stated in Definition 1.2: every entry of \(G_{ij}^{\infty}(z)\) is bounded. We use the fact that the off-diagonal elements of \(G_{ij}^{\infty}(z)\) decay exponentially fast, see Proposition (2.3); and that the imaginary parts of \(G_{ij}^{\infty}(z)\) have a certain pattern when \((i,j)\) is close to but not on the diagonal, see Proposition 2.4. For this reason we have not stated the result for general matrices \(H_{N}^{\infty}\) satisfying Definition 1.2. 
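For concreteness, \(\Delta_{K}\) consists of the energies \(E=2\cos(k\pi/\ell)\) with \(2\leq\ell\leq K\) and \(1\leq k\leq\ell-1\); in particular it is empty for \(K\leq 1\). A small sketch enumerating it (the helper name is illustrative only):

```python
import numpy as np

def removal_set(K):
    """Delta_K from Definition 1.20: E in (-2, 2) with sin(l * arccos(E/2)) = 0
    for some l <= K, i.e. E = 2*cos(k*pi/l), 1 <= k <= l-1."""
    pts = set()
    for l in range(2, K + 1):              # l = 1 contributes nothing inside (-2, 2)
        for k in range(1, l):
            pts.add(round(2.0 * np.cos(np.pi * k / l), 12))
    return sorted(pts)

print(removal_set(1))   # []  -- diagonal / tridiagonal case: nothing removed
print(removal_set(4))   # [-1.4142..., -1.0, 0.0, 1.0, 1.4142...]
```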
**Corollary 1.23**.: _(Eigenvector de-localization) Under the same assumption as in Theorem 1.21, for any \(\epsilon>0\) and \(\kappa>0\), we can find a sufficiently small constant \(c_{*}>0\) and a large constant \(C>0\) depending on \(\epsilon,\kappa,\alpha,K,p\) such that the following is true: the possibility that \(H_{N}^{\infty}+A_{N}\) has an eigenvalue \(\lambda\in[-2+\kappa,2-\kappa]\setminus\Delta_{K}^{p}\) such that its corresponding normalized eigenvector \(\mathbf{v}_{\lambda}\) satisfies \(\|\mathbf{v}_{\lambda}\|_{L^{2}}=1\) and \(\|\mathbf{v}_{\lambda}\|_{L^{\infty}}\geq CN^{-\frac{1}{2}+\epsilon}\) is at most \(N^{-c_{*}\epsilon}.\) In particular if \(K=0\) or \(1\), then the claim holds for all eigenvalues in \([-2+\kappa,2-\kappa]\)._ The proof is given in Section 5.2.1. ### Different scaling of noise We now investigate the case where, instead of the \(\frac{1}{N^{\frac{1}{\alpha}}}\) leading coefficient, we have a different scaling in front of \(\xi_{ij}\). The techniques in this paper still apply, and we prove local laws at certain intermediate scales. **Theorem 1.24**.: _(Different scaling: when all moments exist) For any \(\sigma\in(0,\frac{1}{\alpha}),\) consider the matrix_ \[H_{N}^{\sigma}:=H_{N}^{\infty}+N^{\sigma}A_{N}, \tag{1.24}\] _where \(H_{N}^{\infty}\) satisfies Definition 1.2, \(A_{N}\) satisfies the following variant of Assumption 1.6: for any \(\omega>\frac{\epsilon}{8}>0\) and \(\delta>0\),_ \[|\mathbb{E}[A_{ij}]|\leq\frac{C}{N^{1+\sigma+\delta}},\quad\mathbb{E}[|A_{ij} |^{p}]\leq\frac{C}{N(N^{\sigma+\omega})^{p-\alpha}}\text{ for any }p\geq 2. \tag{1.25}\] _Denote the Green function of \(H_{N}^{\sigma}\) by \(G^{\sigma}(z)\). Then \(G^{\sigma}\) satisfies the following local law: we can find constants \(C\) and \(\nu\) depending on \(\epsilon,\omega,\kappa\) such that_ \[\mathbb{P}\left(\sup_{z\in\mathcal{S}(\epsilon,\kappa,\sigma)}\max_{1\leq i, j\leq N}\left|G_{ij}^{\sigma}(z)-G_{ij}^{\infty}(z)\right|\geq CN^{-\min( \frac{\omega}{2},\frac{\epsilon}{4})}\right)\leq e^{-\nu\log N^{\log\log N}}, \tag{1.26}\] _where_ \[\mathcal{S}(\epsilon,\kappa,\sigma):=\{z=E+i\eta:|E|\leq 2-\kappa,N^{-1+ \epsilon+\sigma\alpha}\leq\eta\leq 1\}.\] _As a corollary, Wegner's estimate (1.15) holds for \(H_{N}^{\sigma}\) for all intervals \(I\subset[-2+\kappa,2-\kappa]\) with length \(N^{-1+\epsilon+\sigma\alpha}<|I|<1\). Moreover, eigenvector de-localization holds in the following form: with probability at least \(1-e^{-\nu\log N^{\log\log N}}\), for any eigenvalue \(\lambda\in[-2+\kappa,2-\kappa]\) its \(L^{2}\)-normalized eigenvector \(\mathbf{v}_{\lambda}\) satisfies \(\|\mathbf{v}_{\lambda}\|_{\infty}\leq CN^{-\frac{1-\epsilon-\sigma\alpha}{2}}\)._ The proof of this theorem is contained in Section 3.2.1. We have essentially covered all the scales where there is a polynomially decaying factor in front of \(\xi_{ij}\), as we only impose \(\sigma\in(0,\frac{1}{\alpha})\). Remark that compared to Assumption 1.6, we have enlarged the value of \(q\), thus imposing a much stronger growth condition on the large values \(A_{ij}\) can take. Concrete examples where (1.25) is satisfied is when \(\xi_{ij}\) is defined from a symmetric random variable \(\xi\) having finite \(\alpha\)-moment, conditioning that \(|\xi|\leq N^{\frac{1}{\alpha}-\sigma-\omega}\). **Theorem 1.25**.: _(Different scaling: \(\alpha\)-stable case) In this case assume \(H_{N}^{\infty}\) is the matrix of 1d Laplacian (1.3) and assume \(A_{N}\) satisfies Assumption 1.8. 
For any \(\sigma\in(0,\frac{1}{2\alpha})\), denote by \(G^{\sigma}\) the Green function of \(H_{N}^{\sigma}\) as in (1.24). Then all the three claims of Theorem 1.21 continue to hold for \(G^{\sigma}\), after replacing every \(\mathcal{S}(\epsilon,\kappa,K,p)\) by \(\mathcal{S}(\epsilon,\kappa,2\sigma,K,P),\) where_ \[\mathcal{S}(\epsilon,\kappa,2\sigma,K,p):=\{z=E+i\eta:E\in[-2+\kappa,2- \kappa]\setminus\Delta_{K}^{p},\quad N^{-1+2\sigma\alpha+\epsilon}\leq\eta\leq 1\}. \tag{1.27}\] _In particular, eigenfunction de-localization in Corollary 1.23 continues to hold in the following form: we can find a small constant \(c_{*}>0\) and a large constant \(C>0\) depending on \(\epsilon,\kappa,\alpha,K,p\) such that, with probability at least \(1-N^{-c_{*}\epsilon}\), given any eigenvalue \(\lambda\in[-2+\kappa,2-\kappa]\setminus\Delta_{K}^{p}\), its \(L^{2}\)-normalized eigenvector \(\mathbf{v}_{\lambda}\) must satisfy \(|\mathbf{v}_{\lambda}|_{L^{\infty}}\leq CN^{-\frac{1-\epsilon-2\sigma\alpha}{ 2}}\)._ The proof of this theorem is given in Section 5.3. _Remark 1.26_.: Despite its effectiveness, our method cannot detect local statistics at intermediate scale \([N^{-1},N^{-1+\sigma\alpha}]\). Some other methods are needed for a finer analysis. For example: are eigenvectors de-localized or localized at this intermediate scale? Can we describe a scaling limit of eigenvalues? _Remark 1.27_.: In the \(\alpha\)-stable case, Theorem 1.25 imposes a condition \(\sigma\in(0,\frac{1}{2\alpha})\). These conditions define in some sense a _sub-critical_ regime, in which the locations \((i\leq j)\) where \(N^{\sigma}A_{ij}\) takes atypically large values are sparse with high probability. On the other hand, if \(\sigma\in(\frac{1}{2\alpha},\frac{1}{\alpha})\), then the global density of states still converges to the arcsine law [25], but the local statistics cannot be investigated via techniques in this paper. The issue is that the locations \((i\leq j)\) where \(N^{\sigma}A_{ij}\) may take atypically large values are no longer sparse with high probability, significantly complicating the analysis. ### A simple criterion for local laws of Stieltjes transform To complete the investigation, we show that even if entry-wise local law fails for the matrix, one can still prove the trace of Green function converges to the correct limit with probability tending to one. The reason behind it is that only a small fraction of Green function entries will deviate from the expected limit, and the trace takes a law of large numbers. **Theorem 1.28**.: _(Local law for Stieltjes transform: 1d Laplacian with \(\alpha\)-stable noise) Assume that \(H_{N}^{\infty}\) is the matrix given in (1.3) and \(A_{N}\) satisfies Assumption 1.8. Recall that \(m\) and \(m^{\infty}\) denote the trace of Green function of \(H_{N}^{\infty}+A_{N}\) and \(H_{N}^{\infty}\) respectively. Then for any \(\epsilon>0\) and \(\kappa>0\) we can find constants \(C\) and \(c_{*}\) depending on \(\epsilon,\kappa,\alpha\) such that_ \[\mathbb{P}\left(\sup_{z\in\mathcal{S}(\epsilon,\kappa)}|m(z)-m^{\infty}(z)| \geq CN^{-\frac{\epsilon}{40}}\right)\leq N^{-c_{*}\epsilon} \tag{1.28}\] _and the same holds if we replace \(m^{\infty}(z)\) by \(m^{as}(z)=\frac{1}{\sqrt{z^{2}-4}}\) for \(z\in\mathbb{C}_{+},\) which is the Stieltjes transform of the arcsine law._ _More generally, for any \(\sigma\in(0,\frac{1}{2\alpha})\), consider \(H_{N}^{\infty}+N^{\sigma}A_{N}\), where the two matrices satisfy the assumptions stated above. 
Denote the trace of the Green function of \(H_{N}^{\infty}+N^{\sigma}A_{N}\) by \(m^{\sigma}(z)\). Then for any \(\epsilon>0\) and \(\kappa>0\) we can find positive constants \(C\) and \(c_{*}\) such that_

\[\mathbb{P}\left(\sup_{z\in\mathcal{S}(\epsilon,\kappa,2\sigma)}|m^{\sigma}(z)-m^{\infty}(z)|\geq CN^{-\frac{\epsilon}{40}}\right)\leq N^{-c_{*}\epsilon}, \tag{1.29}\]

_where_

\[\mathcal{S}(\epsilon,\kappa,2\sigma):=\{z=E+i\eta:|E|\leq 2-\kappa,\ N^{-1+\epsilon+2\sigma\alpha}\leq\eta\leq 1\}.\]

The proof is given in Section 6.

_Remark 1.29_.: We again impose a technical restriction \(\sigma\in(0,\frac{1}{2\alpha})\) and do not cover the whole decaying regime \(\sigma\in(0,\frac{1}{\alpha})\). The reason is that we need to compute the Green function of \(H_{N}^{\infty}\) when a certain number of rows and the corresponding columns are removed, and this restrictive hypothesis \(\sigma\in(0,\frac{1}{2\alpha})\) guarantees with high probability that such a Green function bound is available via Proposition 1.1. It would be possible to get rid of this technical assumption if one could adopt a different estimate on the Green function of \(H_{N}^{\infty}\) after this type of row and column removal.

**Corollary 1.30** (Corollary of local law).: _Let \(H_{N}^{\infty}\) and \(A_{N}\) satisfy the same assumptions as in Theorem 1.28. Then with probability at least \(1-N^{-c_{*}\epsilon}\) the following statements are true: (i) the Wegner estimate in Corollary 1.13 also holds for \(H_{N}^{\infty}+A_{N}\); (ii) the Wegner estimate also holds for \(H_{N}^{\infty}+N^{\sigma}A_{N}\) provided that the interval \(I\) in the statement of Corollary 1.13 has length \(|I|>N^{-1+2\sigma\alpha+\epsilon}\); (iii) the rigidity and convergence rate estimates of eigenvalues stated in Corollary 1.14 hold for both \(H_{N}^{\infty}+A_{N}\) and \(H_{N}^{\infty}+N^{\sigma}A_{N}\)._

The proof of Corollary 1.30 is essentially the same as that of Corollaries 1.13 and 1.14: in the proofs of these two results we only used the local law for the trace of the Green function, and the entry-wise local law is not needed.

### Comparison to Wigner matrices and Levy matrices

In this paper we introduced a class of matrices that are heavy-tail perturbations of square symmetric matrices, where the heavy-tail random variables are placed sufficiently close to the diagonal. We choose a scaling factor \(\frac{1}{N^{\frac{1}{\alpha}}}\) in front of the noise, and some more general scaling factors are considered as well, so that the density of states of \(H_{N}\) is asymptotically close to that of the noiseless part \(H_{N}^{\infty}\). The effect of the noise takes place on a local scale, where we prove that at the almost optimal scale \(N^{-1+\epsilon}\), the Stieltjes transform converges with high probability to the correct limit but individual entries of the Green function may fail to converge with non-vanishing probability. We prove Wegner estimates and eigenvalue rigidity, and with probability converging to \(1\) the eigenfunctions are de-localized in the \(L^{\infty}\) norm sense. However, localized approximate eigenfunctions may arise with a polynomially small probability. In the forthcoming Theorem 1.32, we will prove that a similar phenomenon occurs if we replace the deterministic matrix \(H_{N}^{\infty}\) by an independent Wigner matrix \(W_{N}\). The statistical properties of these matrices are very different from those revealed in the recent investigation of Levy matrices with \(\alpha\)-stable entries [16], [6], [35], [13], [12], [9].
When \(\alpha\in(1,2)\), [3] proves that the entry-wise local law holds with overwhelming probability, and that eigenfunctions are de-localized with overwhelming probability. When \(\alpha\in(0,1)\), [2] proves that there is a mobility edge: above this energy the eigenvectors are localized and below it the eigenvectors are de-localized with probability tending to one. The primary difference of our model from Levy matrices is possibly that our noise is essentially one-dimensional, so that individual peaks have a huge effect. We cannot use analytic tools such as the ergodicity of Dyson Brownian motion to smooth the peaks out.

### Other generalizations

Finally we discuss other generalizations of the method in this paper.

**Example 1.31**.: _(Wigner matrix with heavy tail perturbations) Our method of proof can be easily extended to cover Wigner matrices perturbed by heavy-tailed noise. That is, we consider \(W_{N}+A_{N}\) where \(W_{N}\) and \(A_{N}\) are independent, \(A_{N}\) satisfies the same assumptions as before, and \(W_{N}\) is an \(N\times N\) Wigner matrix whose entries have sub-exponential decay._

For the Wigner matrix \(W_{N}\), the entry-wise local semicircle law holds with overwhelming probability [20], which in particular implies that the Green function entries are bounded as in (1.7). Now we can condition on \(W_{N}\) and apply the same proof procedure as in this paper. Thus the results in Theorems 1.11 and 1.17 can be extended without much difficulty to the matrix \(W_{N}+A_{N}\). We can also extend Theorem 1.28 to \(W_{N}+A_{N}\); this time we can cover all \(\sigma\in(0,\frac{1}{\alpha})\).

**Theorem 1.32**.: _(Local law for Stieltjes transform: Wigner matrix with banded \(\alpha\)-stable perturbation) Let \(W_{N}=(W_{ij})\) be an \(N\times N\) real symmetric matrix with independent entries modulo the symmetry restriction, satisfying \(\mathbb{E}[W_{ij}]=0\), \(\mathbb{E}[|W_{ij}|^{2}]=\frac{1}{N}\), and such that \(W_{ij}\) has sub-exponential decay: for some \(\vartheta>0\), all \(x\geq 1\) and all \(i,j\in[1,N]\):_

\[\mathbb{P}(|W_{ij}|>N^{-\frac{1}{2}}x)\leq\vartheta^{-1}\exp(-x^{\vartheta}).\]

_Let \(A_{N}\) be a matrix satisfying Assumption 1.8 and assume that \(W_{N}\) and \(A_{N}\) are independent. Let \(m(z)\) denote the trace of the Green function of \(W_{N}+A_{N}\) and \(m^{sc}(z)\) the Stieltjes transform of the semicircle law: for all \(z\in\mathbb{C}\) with \(\Im z>0\), \(m^{sc}(z)=\frac{-z+\sqrt{z^{2}-4}}{2}\). Then for any \(\epsilon>0\) we can find constants \(C\) and \(\nu\) depending on \(\epsilon,\alpha\) such that_

\[\mathbb{P}\left(\sup_{z\in\hat{\mathcal{S}}(\epsilon)}|m(z)-m^{sc}(z)|\geq CN^{-\frac{\epsilon}{40}}\right)\leq e^{-\nu(\log N)^{\log\log N}}, \tag{1.30}\]

_where_

\[\hat{\mathcal{S}}(\epsilon):=\{z=E+i\eta:|E|\leq 5,\ N^{-1+\epsilon}\leq\eta\leq 1\}.\]

_More generally, for any \(\sigma\in(0,\frac{1}{\alpha})\), consider \(W_{N}+N^{\sigma}A_{N}\), where the two matrices satisfy the assumptions stated above. Denote the trace of the Green function of \(W_{N}+N^{\sigma}A_{N}\) by \(m^{\sigma}(z)\). Then for any \(\epsilon>0\) we can find constants \(C\) and \(\nu\) such that_

\[\mathbb{P}\left(\sup_{z\in\hat{\mathcal{S}}(\epsilon,\sigma)}|m^{\sigma}(z)-m^{sc}(z)|\geq CN^{-\frac{\epsilon}{40}}\right)\leq e^{-\nu(\log N)^{\log\log N}}, \tag{1.31}\]

_where_

\[\hat{\mathcal{S}}(\epsilon,\sigma):=\{z=E+i\eta:|E|\leq 5,\ N^{-1+\epsilon+\sigma\alpha}\leq\eta\leq 1\}.\]

The proof is given in Section 6.1.
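A quick numerical check of the first claim of Theorem 1.32 (a sketch only, reusing the illustrative `banded_heavy_tail_noise` helper from the sketch in the introduction; the Gaussian Wigner normalization below is GOE-like, with diagonal variance \(2/N\) rather than exactly \(1/N\)):

```python
import numpy as np

rng = np.random.default_rng(2)
N, alpha, K, z = 1000, 1.5, 1, 0.5 + 0.05j       # E = 0.5, eta = 0.05

X = rng.normal(size=(N, N))
W = (X + X.T) / np.sqrt(2 * N)                   # Gaussian Wigner matrix, Var(W_ij) ~ 1/N

A = banded_heavy_tail_noise(N, alpha, K)         # banded heavy-tail noise (earlier sketch)

m   = np.trace(np.linalg.inv(W + A - z * np.eye(N))) / N
msc = (-z + np.sqrt(z - 2.0) * np.sqrt(z + 2.0)) / 2.0   # branch with sqrt(z^2-4) ~ z at infinity
print(abs(m - msc))                              # small: the banded noise does not move the trace
```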
_Remark 1.33_.: In this theorem we can cover the whole decaying regime \(\sigma\in(0,\frac{1}{\alpha})\) and the local law holds with overwhelming probability. This is because, after removing the rows and columns of a Wigner matrix indexed by a given index set, the resulting submatrix is again a Wigner matrix and its local law estimates are available. Moreover, in this local law we can cover the spectral edge, as the estimate holds throughout \(E\in[-5,5]\). Finally, we can check that the counterexample in Example 1.12 continues to hold for \(W_{N}+A_{N}\), i.e. the entry-wise local law does not hold with overwhelming probability.

**Example 1.34**.: _(Matrix models for beta-ensembles) Another important example where our method can possibly be applied concerns the matrix models for beta ensembles. In Dumitriu-Edelman [17] it was shown that the random Jacobi matrices \(H_{\beta}\) given below, where \(N(0,\cdot)\) denotes the normal distribution and \(\chi_{\cdot}\) the chi-distribution with the specified parameter, are matrix models for the Gaussian \(\beta\)-ensemble,_

\[H_{\beta}=\frac{1}{\beta}\begin{pmatrix}N(0,2)&\chi_{(N-1)\beta}&0&\cdots&\cdots&0\\ \chi_{(N-1)\beta}&N(0,2)&\chi_{(N-2)\beta}&0&\cdots&0\\ \vdots&\ddots&\ddots&\ddots&\ddots&\ddots\\ 0&\cdots&\chi_{3\beta}&N(0,2)&\chi_{2\beta}&0\\ 0&\cdots&0&\chi_{2\beta}&N(0,2)&\chi_{\beta}\\ 0&\cdots&\cdots&0&\chi_{\beta}&N(0,2)\end{pmatrix}. \tag{1.32}\]

_One can replace these Gaussian and chi distributions by general random variables that have the same mean and variance and study the resulting random Jacobi matrix. This is usually conducted via an approach based on transfer matrix recursions or Prufer coordinates (see for example [15], [7], [29], [30]) assuming the random variables have sub-Gaussian tails._

_We can instead regard such matrices as noisy perturbations of the following deterministic matrix_

\[H_{\infty}=2\begin{pmatrix}0&\sqrt{(N-1)/N}&0\\ \sqrt{(N-1)/N}&0&\sqrt{(N-2)/N}\\ &\ddots&\ddots&\\ &&0&\sqrt{2/N}&0\\ &&\sqrt{2/N}&0&\sqrt{1/N}\\ &&0&\sqrt{1/N}&0\end{pmatrix}. \tag{1.33}\]

_and apply the results established in this paper, so that we can cover the case when the random variables have heavy tails. A technical challenge is that the Green function estimates of \(H_{\infty}\) cannot be obtained very easily. A proof of a local law for the trace of the Green function of \(H_{\infty}\) can be found in [34], via Hermite polynomials and the steepest descent method. However, one still needs an entry-wise local law for the results of this paper to be applicable._

## 2. Arcsine laws and Green function for the symmetric matrix

In this section we cover some properties of the arcsine law (1.16) and the matrix model of the 1d Laplacian (1.3). These results can be derived from simple computations, but they seem to be new in the literature and can possibly be used in other contexts.

### Green's function for the 1d Laplacian

In this section we prove Proposition 1.1, i.e. that the Green function \(G^{\infty}\) of \(H_{N}^{\infty}\) defined in (1.3) is uniformly bounded from above when \(\eta\gg N^{-1}\).

Proof of Proposition 1.1.: The following argument is taken from [24]: for \(D\in(-2,2)\), we can explicitly compute \((H_{N}^{\infty}+DI_{N})^{-1}\) (recall that \(G^{\infty}(z)=(H_{N}^{\infty}-z)^{-1}\) corresponds to \(D=-z\)) as follows: let \(M_{k}\) denote the determinant of \(H_{k}^{\infty}+DI_{k}\); then we have the recursive formula

\[M_{k+1}=DM_{k}-M_{k-1},\quad M_{0}=1,\ M_{1}=D. \tag{2.1}\]

Assuming that \(D=-2\cos\lambda\), the solution to this recursion is

\[M_{k}=(-1)^{k}\sin((k+1)\lambda)/\sin\lambda.
\tag{2.2}\]

Then thanks to the tridiagonal structure of \(H_{N}^{\infty}\), if we denote by \(R_{ij}\) the \((i,j)\)-th element of the inverse matrix \((H_{N}^{\infty}+DI_{N})^{-1}\), then

\[R_{ij}=(-1)^{i+j}M_{i-1}M_{N-j}/M_{N},\quad i\leq j. \tag{2.3}\]

that is,

\[R_{ij}=\frac{\sin(i\lambda)\sin((N-j+1)\lambda)}{\sin\lambda\sin((N+1)\lambda)}=\frac{\cos((N+1-|j-i|)\lambda)-\cos((N+1-j-i)\lambda)}{2\sin\lambda\sin((N+1)\lambda)}. \tag{2.4}\]

This expression admits an analytic extension to the complex plane when \(\lambda\) is complex valued. By definition of the domain \(\mathcal{S}\) and the trigonometric expansions

\[\cos(x-iy)=\cos x\cosh y+i\sin x\sinh y, \tag{2.5}\]

\[\sin(x-iy)=\sin x\cosh y-i\cos x\sinh y, \tag{2.6}\]

one sees that for any \(z\in\mathcal{S}\) we can find a unique \(\lambda=\arccos\frac{z}{2}\) (so that \(D=-z=-2\cos\lambda\)) such that \(\Re\lambda\in[\kappa^{\prime}\pi,\pi-\kappa^{\prime}\pi]\) for some sufficiently small \(\kappa^{\prime}>0\) depending on \(\kappa\). Moreover, from the expressions \(\frac{\partial}{\partial x}\arccos(x+iy)=-\frac{1}{\sqrt{1-(x+iy)^{2}}}\) and \(\frac{\partial}{\partial y}\arccos(x+iy)=-\frac{i}{\sqrt{1-(x+iy)^{2}}}\), we can see that for \(z\in\mathcal{S}\) and \(\lambda=\arccos\frac{z}{2}\), one must have \(|\Im\lambda|\geq C^{\prime}N^{-1+\epsilon}\) for some \(C^{\prime}\) depending only on \(\kappa\). Now we turn back to the inverse formula (2.4) and show this expression is uniformly upper bounded. Concerning the term \(\sin\lambda\), one can show that \(|\sin\lambda|\) is uniformly bounded away from \(0\) since \(\Re\lambda\) is away from \(0\) and \(\pi\), and \(\Im\lambda\) is not too large. Concerning the term \(\sin((N+1)\lambda)\), thanks to the fact that \(|\Im\lambda|\geq C^{\prime}N^{-1+\epsilon}\), one sees from the trigonometric expansion that \(|\sin((N+1)\lambda)|\sim e^{N|\Im\lambda|}\) and that this factor dominates both terms of the numerator of the right hand side of (2.4). Therefore, \(|R_{ij}|\) is uniformly bounded from above for all indices \(i,j\in[1,N]\). This completes the proof of Proposition 1.1.

### Properties of the arcsine law

**Proposition 2.1**.: _The Stieltjes transform \(m^{as}(z)\) of the arcsine law (1.16) is given by_

\[m^{as}(z)=\frac{1}{2\sqrt{z^{2}/4-1}},\quad z\in\mathcal{S}. \tag{2.7}\]

Proof.: This is not a new result but we sketch its proof. A standard result from residue calculus shows that for any complex constant \(a\) outside \([-1,1]\), we have

\[\int_{-1}^{1}\frac{dx}{\sqrt{1-x^{2}}(x-a)}=-\frac{\pi}{\sqrt{a^{2}-1}}. \tag{2.8}\]

After rearranging the constants, for any complex \(z\) outside \([-2,2]\), we have

\[m^{as}(z)=-\int_{-2}^{2}\frac{dx}{2\pi\sqrt{1-x^{2}/4}(x-z)}=\frac{1}{2\sqrt{z^{2}/4-1}}. \tag{2.9}\]

**Lemma 2.2**.: _Let \(m^{\infty}(z)=\frac{1}{N}\operatorname{Tr}G^{\infty}(z)\) denote the Stieltjes transform of \(H^{\infty}_{N}\) and \(m^{as}(z)\) denote the Stieltjes transform of the arcsine law, for any \(z\in\mathcal{S}(\epsilon,\kappa)\). Then we can find a constant \(C>0\) depending on \(\epsilon\) and \(\kappa\) such that_

\[|m^{\infty}(z)-m^{as}(z)|\leq C(N^{-0.5\epsilon}+e^{-N^{1-0.5\epsilon}\eta}). \tag{2.10}\]

Proof.: Recall from Section 2.1 that for \(\lambda=\arccos\frac{1}{2}z\), the \((j,j)\)-th diagonal term of the resolvent matrix \(G^{\infty}:=(H_{N}^{\infty}-z)^{-1}\) is given by (we avoid using the subscript \(i\) to avoid confusion with the imaginary unit)

\[G^{\infty}_{jj}=\frac{\cos((N+1)\lambda)-\cos((N+1-2j)\lambda)}{2\sin\lambda\sin((N+1)\lambda)}.
\tag{2.11}\] We also argued in Section 2.1 that for \(z\in\mathcal{S}(\epsilon,\kappa)\), necessarily we have \(|\Im\lambda|>C\eta>N^{-1+\epsilon}\) for some \(C>0\), and \(\Im\lambda<0\). From the trigonometric expansion (2.5) of \(\sin(x)\) and \(\cos(x)\), one sees that whenever \(j\in[N^{1-0.5\epsilon},N-N^{1-0.5\epsilon}]\), necessarily \(|\cos((N+1-2j)\lambda)|\leq e^{-N^{1-0.5\epsilon}\eta}|\cos((N+1)\lambda|\). Thanks to the fact that \(G^{\infty}_{jj}\) is uniformly bounded (proved in Section 2.1), we conclude that \[\left|G^{\infty}_{jj}-\frac{1}{2\sin\lambda\tan((N+1)\lambda)}\right|\leq Ce^ {-N^{1-0.5\epsilon}\eta},\quad j\in[N^{1-0.5\epsilon},N-N^{1-0.5\epsilon}]. \tag{2.12}\] In the next step, from the trigonometric expansion \[\tan(x+iy)=\frac{\sin 2x+i\sinh 2y}{\cos 2x+\cosh 2y},\] and from the assumption \(\Im\lambda<-C\eta<-CN^{-1+\epsilon}\), we see that \[\left|\frac{1}{\tan((N+1)\lambda)}-i\right|\leq e^{-N\eta}. \tag{2.13}\] Since \(|\frac{1}{\sin\lambda}|\) is bounded from above on \(\mathcal{S}\), we conclude that \[\left|G^{\infty}_{jj}-\frac{i}{2\sin\lambda}\right|\leq Ce^{-N^{1-0.5\epsilon }\eta},\quad j\in[N^{1-0.5\epsilon},N-N^{1-0.5\epsilon}]. \tag{2.14}\] For indices \(i\) that are outside \([N^{1-0.5\epsilon},N-N^{1-0.5\epsilon}]\), the above estimate does not apply but these terms are all uniformly upper bounded. There are altogether \(2N^{1-0.5\epsilon}\) such terms. Finally, note the following trigonometric identity \[\sin(\arccos(\frac{z}{2}))=\sqrt{1-z^{2}/4}=i\sqrt{z^{2}/4-1}.\] Summing everything up, we deduce that for any \(z\in\mathcal{S}(\epsilon,\kappa)\), \[|m^{\infty}(z)-m^{as}(z)|\leq 2\frac{N^{1-0.5\epsilon}}{N}+Ce^{-N^{1-0.5\epsilon }\eta}. \tag{2.15}\] This completes the proof. **Proposition 2.3**.: _(Off-diagonal decay of Green function). The Green function \(G^{\infty}_{ij}(z)\) satisfies the following decay estimate: for any \(\epsilon>0,\kappa>0\) we can find a constant \(C\) such that, setting \(L=N^{1-0.5\epsilon}\), then for any \(\eta\in[N^{-1+\epsilon},1]\) and \(E\in[-2+\kappa,2-\kappa]\) we must have_ \[|G^{\infty}_{ij}(z)|\leq e^{-N^{0.5\epsilon}},\quad|i-j|>L. \tag{2.16}\] More generally, setting \(L=N^{1-\sigma\alpha-0.5\epsilon}\), then for any \(\eta\in[N^{-1+\sigma\alpha+\epsilon},1]\) we must have \[|G^{\infty}_{ij}(z)|\leq e^{-N^{0.5\epsilon}},\quad|i-j|>L. \tag{2.17}\] Proof.: We only prove the first claim, as the second one is exactly analogous. Recall from Section 2.1 that for \(\lambda=\arccos\frac{1}{2}z\), the \((j,k)\)-th term of \(G^{\infty}:=(H^{\infty}_{N}-z)^{-1}\) is given by \[G^{\infty}_{jk}=\frac{\cos((N+1-|j-k|)\lambda)-\cos((N+1-j-k)\lambda)}{2\sin \lambda\sin((N+1)\lambda))}. \tag{2.18}\] Then we have \(N+1-|j-k|\in[1,N-L]\) and \(N+1-j-k\in[-N+L,N-L]\). Then thanks to the fact that \(|\Im\lambda|>N^{-1+\epsilon}\) and the trigonometric identity (2.5), this immediately implies that \[\left|\frac{\cos((N+1-|j-k|)\lambda)}{\sin((N+1)\lambda))}\right|\leq e^{-N^{0.5\epsilon}},\quad\left|\frac{\cos((N+1-j-k)\lambda)}{\sin((N+1)\lambda))} \right|\leq e^{-N^{0.5\epsilon}},\] which completes the proof. We will also make crucial use of an estimate on the imaginary part of Green function entries. As the computation shows, we have to take a special care in the general case \(K\geq 2\). **Proposition 2.4**.: _(Imaginary part of Green function) For any \(z=E+i\eta\) with \(E\in[-2+\kappa,2-\kappa]\) and \(\eta\in[N^{-1+\epsilon},1]\), we derive the following estimates:_ 1. 
_Imaginary part of Stieltjes transform: we can find some constant_ \(C_{\kappa}>0\) _such that_ \[\Im m^{as}(z)>C_{\kappa}.\] (2.19) _The real part is small: for any_ \(\kappa>0\) _we can find sufficiently small_ \(d_{\kappa}>0\) _such that for all_ \(E\in[-2+\kappa,2-\kappa]\) _and_ \(\eta\in[N^{-1+\epsilon},d_{\kappa}]\)_, we have_ \[|\Re m^{as}(z)|\leq 10^{-3}C_{\kappa}.\] (2.20) 2. _In the case_ \(K=1\)_, given any_ \(\kappa>0\)_, we can find_ \(C_{\kappa,1}\in[0,1)\) _and a sufficiently small_ \(d_{\kappa}>0\) _such that_ _for all_ \(i,j\in[N^{1-0.5\epsilon},N-N^{1-0.5\epsilon}]\) _and any_ \(\eta=\Im(z)\in[N^{-1+\epsilon},d_{\kappa}]\)_, we have_ \[|\Im G^{\infty}_{ij}(z)|\leq C_{\kappa,1}\Im G^{\infty}_{ii}(z),\quad\text{ for any }|i-j|\leq 1.\] _More generally, given any_ \(\sigma\in(0,\frac{1}{\alpha})\) _the estimate also holds if_ \(i,j\in[N^{1-\sigma\alpha-0.5\epsilon},N-N^{1-\sigma\alpha-0.5\epsilon}]\) _and_ \(\eta=\Im(z)\in[N^{-1+\sigma\alpha+\epsilon},d_{\kappa}]\)_._ 3. _For general positive integers_ \(K\)_, given any_ \(p\in\mathbb{N}_{+}\) _and_ \(\kappa>0\)_, we may find_ \(C_{\kappa,K,p}\in[0,1)\) _and small_ \(d_{\kappa}>0\) _such that_ _for all_ \(i,j\in[N^{1-0.5\epsilon},N-N^{1-0.5\epsilon}]\)_, any_ \(\eta=\Im(z)\in[N^{-1+\epsilon},d_{\kappa}]\) _and any_ \(E\in[-2+\kappa,2-\kappa]\setminus\Delta^{p}_{K}\)_, we have_ \[|\Im G^{\infty}_{ij}(z)|\leq C_{\kappa,K,p}\Im G^{\infty}_{ii}(z),\quad\text{ for any }|i-j|\leq K,\] _where_ \(\Delta^{p}_{K}:=\Delta_{K}+[-10^{-p},10^{-p}]\) _is a neighborhood of_ \(\Delta_{K}\) _and_ \(\Delta_{K}\) _is the set of all_ \(E\in[-2,2]\) _such that_ \(\sin(\ell\arccos(\frac{1}{2}E))=0\) _for some_ \(\ell=2,\cdots,K\)_. More generally, given any_ \(\sigma\in(0,\frac{1}{\alpha})\) _the estimate also holds if_ \(i,j\in[N^{1-\sigma\alpha-0.5\epsilon},N-N^{1-\sigma\alpha-0.5\epsilon}]\) _and_ \(\eta=\Im(z)\in[N^{-1+\sigma\alpha+\epsilon},d_{\kappa}]\)_._ Proof.: The first claim follows by direct computation: \[m^{as}(E+i\eta)=\frac{1}{\sqrt{E^{2}-\eta^{2}-4+2Ei\eta}}=\frac{i\sqrt{\eta^{2 }+4-E^{2}+2Ei\eta}}{\sqrt{(E^{2}-\eta^{2}-4)^{2}+4E^{2}\eta^{2}}}. \tag{2.21}\] Then noting that we assume \(E\in[-2+\kappa,2-\kappa]\) and \(\eta\in[0,1]\), the claimed lower bound follows from elementary complex analysis, and can be made to depend only on \(\kappa\). Also, from (2.21) we see that assuming \(\eta\in[N^{-1+\epsilon},d_{\kappa}]\) for some very small positive constant \(d_{\kappa}\), we have \[|\Re m^{as}(E+i\eta)|\leq 10^{-3}C_{\kappa}. \tag{2.22}\] This completes the proof of part (1). For the second and third claim, first observe that by our assumptions on indices \(i\) and \(j\), and the assumption on \(\eta\), one must have, in both case \(\sigma=0\) and \(\sigma>0\), that \[|\cos((N+1-j-k)\lambda)|\leq Ce^{-N^{0.5\epsilon}}|\cos((N+1-|j-k|)\lambda)|,\] so we only need to consider the first term in the numerator of (2.18). If \(i=j\), then following the steps in the proof of Lemma 2.2, one can show that \(G_{ii}^{\infty}(z)\) is close to \(m^{ac}(z)\) with an exponentially small error. Thus the desired conclusion follows from the first claim. The more general case \(0<|i-j|\leq K\) is somewhat more elaborate. We assume for simplicity that \(|i-j|=K\). Then by trigonometric identities \[\frac{\cos((N+1-K)\lambda)}{\cos((N+1)\lambda)}=\cos(K\lambda)+\cot((N+1) \lambda)\sin(K\lambda). 
\tag{2.23}\] The ratio (2.23) is of interest because, up to a vanishing error, \[\frac{G_{ij}^{\infty}(z)}{G_{ii}^{\infty}(z)}=(2.23)(1+O(e^{-N^{0.5\epsilon}} )). \tag{2.24}\] Thanks to the fact that \(G_{ii}^{\infty}(z)\) is very close to \(m^{as}(z)\) (see proof of Lemma 2.2) and that by part (1) of this Proposition, the real part of \(m^{as}(z)\) is very small if \(\eta\in[0,d_{\kappa}]\), we may approximately regard \(G_{ii}^{\infty}(z)\) as a purely imaginary number and we hope for the case that (2.23) is not close to a real number, so that \(\Im G_{ij}^{\infty}(z)\) is strictly smaller than \(\Im G_{ii}^{\infty}(z)\). Now we prove that the said claim holds. First, thanks to (2.13), we have that \[|\cot((N+1)\lambda)-i|\leq e^{-N^{0.5\epsilon}}.\] Write \(\lambda=a+ib\). Recall that \(\lambda=\arccos(z/2)\), and \(\Im z\) is very small, by properties of the arcsine function, we have the estimate \(|b|\leq d_{\kappa}^{\prime}d_{\kappa}\) and \(\min(a,\pi-a)\geq e_{\kappa}^{\prime}\) for some \(d_{\kappa}^{\prime}\), \(e_{\kappa}^{\prime}\) that only depend on \(\kappa\). For \(z\in\mathcal{S}(\epsilon,\kappa)\), assuming in addition that \(d_{\kappa}\) is chosen sufficiently small so that \(d_{\kappa}<10^{-3}e_{\kappa}^{\prime}\), then \(\cos(K\lambda)=\cos(Ka)(1+O(10^{-3}))\) and \(\sin(K\lambda)=\sin(Ka)(1+O(10^{-3}))\), so that they are both real numbers up to a very small error. We have to distinguish the case \(K=1\) (tridiagonal) and \(K>1\) (general banded). In the case \(K=1\), since \(\sin(a)\) is nowhere vanishing and \(\cosh(b)\) is at least \(1\), it is easy to verify that, combining (2.24) and (2.23) and the fact that \(G_{ii}^{\infty}(z)\) is an imaginary number up to a very small error, we have \[|\Im G_{ij}^{\infty}(z)|\leq C_{\kappa,1}\Im G_{ii}^{\infty}(z), \tag{2.25}\] where \(C_{\kappa,1}\in[0,1)\) depends only on \(\kappa\) and is strictly smaller than \(1\). For the general case \(K\geq 2\), we would like to remove the sets of \(z=2\cos(a+bi)\) such that \(\sin(Ka)=0\), as this would lead to \(|\Re(2.23)|\) being arbitrarily close to one and \(|\Im(2.23)|\) being arbitrarily close to zero. As we have required that \(\eta=\Im(z)<d_{\kappa}\) for a sufficiently small \(d_{\kappa}\), we only have to remove a very small neighborhood of each \(E\in[-2+\kappa,2-\kappa]\) such that \(\sin(\ell\arccos(\frac{1}{2}E))=0\) for some \(\ell=1,2,\cdots,K\). Let \(\Delta_{K}\) denote the set of all such \(E\), and write \(\Delta_{K}^{p}:=\Delta_{K}+[-10^{-p},10^{-p}]\) the union of short intervals with length \(2\times 10^{-p}\) centered at one of the points in \(\Delta_{K}\), for any \(p\in\mathbb{N}_{+}\). Then whenever \(E\in[-2+\kappa,2-\kappa]\setminus\Delta_{K}^{p}\) and \(\eta\in[N^{-1+\epsilon},d_{\kappa}]\), we still have \[|\Im G_{ij}^{\infty}(z)|\leq C_{\kappa,K,p}\Im G_{ii}^{\infty}(z), \tag{2.26}\] where \(C_{\kappa,K,p}\in[0,1)\) depends on \(\kappa,K,p\) and is strictly less than one. Then the proof of part (3) is finished. ## 3. Entry-wise local law ### Green's function identities and concentration estimates For given matrices \(B\) and \(C\) of the same dimension, we have the identity \[B^{-1}-C^{-1}=B^{-1}(C-B)C^{-1}. \tag{3.1}\] Now we take \(B=H_{N}^{\infty}+A_{N}-zI\) and \(C=H_{N}^{\infty}-zI\), then \(B^{-1}=G\) and \(C^{-1}=G^{\infty}\), and we obtain \[G-G^{\infty}=-GA_{N}G^{\infty}. 
\tag{3.2}\] We will use frequently the Ward's identity: for any \(H\) an \(N\times N\) matrix, and \(G=(H-z)^{-1}\) its Green function for any given \(z\in\mathbb{C}_{+}\) with \(\eta=\Im z>0\), we have \[\sum_{j=1}^{N}|G_{jk}|^{2}=\frac{\Im G_{kk}}{\eta}. \tag{3.3}\] The following comparison result of Green's function will also be used: **Proposition 3.1** ([1],Appendix B).: _Given \(E\in\mathbb{R}\), \(\eta,\eta^{\prime}\in\mathbb{R}_{>0}\), and any \(N\times N\) matrix \(H\). Denote by \(z=E+i\eta\), \(z^{\prime}=E+i(\eta+\eta^{\prime})\), \(G(z)=(H-z)^{-1}=\{G_{jk}\}\) and \(G^{\prime}(z)=(H-z^{\prime})^{-1}=\{G^{\prime}_{jk}\}\), then for any \(j,k\in[1,N]\) we have_ \[|G_{jk}-G^{\prime}_{jk}|\leq\frac{\eta^{\prime}}{2\eta}(|\Im G^{\prime}_{jj}| +|\Im G_{kk}|). \tag{3.4}\] _In particular we have for each \(j\in[1,N]\),_ \[\frac{\min(|G^{\prime}_{jj}|,|G_{jj}|)}{\max(|G^{\prime}_{jj}|,|G_{jj}|)}>1- \frac{\eta^{\prime}}{\eta}. \tag{3.5}\] We will need the following large deviations estimate for sums of independent random variables with growing moments: **Proposition 3.2** ([21],Lemma 3.8).: _Given \((a_{i})\) a family of centered independent random variables that satisfy_ \[\mathbb{E}|a_{i}|^{p}\leq\frac{C^{p}}{N^{\gamma}q^{ap+b}} \tag{3.6}\] _for any \(2\leq p\leq(\log N)^{\log\log N}\), for some given \(a\geq 0\) and \(b,\gamma\in\mathbb{R}\). Then there exists some \(\nu>0\) such that for all \(2\leq\xi\leq\log\log N\), we have_ \[\mathbb{P}\left(\left|\sum_{i}\Psi_{i}a_{i}\right|\geq(\log N)^{\xi}\left[ \frac{\sup_{i}|\Psi_{i}|}{q^{a}}+(\frac{1}{N^{\gamma}q^{b+2a}}\sum_{i}|\Psi_{i }|^{2})^{1/2}\right]\right)\leq e^{-\nu(\log N)^{\xi}}. \tag{3.7}\] The following generalization is easy to obtain: assume \((a_{i})\) are not necessarily centered but still satisfies the moment condition (3.6) and satisfy, for some \(\delta>0\) and constant \(C\), \[|\mathbb{E}[a_{i}]|\leq\frac{C}{N^{1+\delta}}, \tag{3.8}\] then we have, outside a set of probability at most \(e^{-\nu(\log N)^{\xi}}\), the following estimate \[\left|\sum_{i}\Psi_{i}a_{i}\right|\leq(\log N)^{\xi}\left[(\frac{1}{q^{a}}+ \frac{1}{N^{\delta}})\sup_{i}|\Psi_{i}|+(\frac{1}{N^{\gamma}q^{b+2a}}\sum_{i}| \Psi_{i}|^{2})^{1/2}\right]. \tag{3.9}\] ### Green's function estimate in the heavy-tailed case The following Theorem is a generalization of Theorem 1.11, and constitutes the essential building block for all other proofs in this paper. Similar proof techniques will be used extensively in the sequel. **Theorem 3.3**.: _Let \(\overline{H}_{N}^{\infty}\) be a real symmetric matrix as in Definition 1.2. Let \(\overline{A}_{N}\) be an \(N\times N\) real symmetric random matrix with independent elements \(\overline{A}_{ij}\) for all \(|i-j|\leq K\) and zero otherwise. 
Assume that we can find some \(\delta>0\) and \(\gamma>0\) such that_

\[|\mathbb{E}[\overline{A}_{ij}]|\leq\frac{C}{N^{1+\delta}},\quad\mathbb{E}[|\overline{A}_{ij}|^{p}]\leq\frac{C}{N^{\gamma}q^{p-\alpha}},\text{ for each }p\geq 2,\]

_where \(q\) is \(N\)-dependent and \(q\gg(\log N)^{\log\log N}\)._

_Then for any \(\kappa>0\), \(\epsilon>0\) we can find positive constants \(C\) and \(\nu\) depending only on \(\kappa,\epsilon\) such that_

\[\mathbb{P}\left(\max_{\begin{subarray}{c}z\in\mathcal{S}\\ \eta\geq N^{-\gamma+0.5\epsilon}\end{subarray}}\max_{i,j}|\overline{G}_{ij}-\overline{G}_{ij}^{\infty}|\geq C(\log N)^{\log\log N}\left(\frac{1}{q}+\frac{1}{N^{\delta}}+\sqrt{\frac{1}{N^{\gamma}\eta}}\right)\right)\leq e^{-\nu(\log N)^{\log\log N}}, \tag{3.10}\]

_where \(\overline{G}\) is the Green function of \(\overline{H}_{N}^{\infty}+\overline{A}_{N}\) and \(\overline{G}^{\infty}\) is the Green function of \(\overline{H}_{N}^{\infty}\)._

Proof.: We start with the resolvent identity (3.2), which gives

\[[\overline{G}-\overline{G}^{\infty}]_{ij}=-\sum_{p,l\in[1,N]}\overline{G}_{i,p}\overline{A}_{p,l}\overline{G}_{lj}^{\infty}=-\sum_{t=-K}^{K}J_{t}, \tag{3.11}\]

where each \(J_{t}\) collects the terms in the summation whose factor \(\overline{A}_{p,l}\) satisfies \(p-l=t\):

\[J_{t}=\sum_{p\in[1,N]}\overline{G}_{i,p}\overline{A}_{p,p-t}\overline{G}_{p-t,j}^{\infty}, \tag{3.12}\]

since the matrix \(\overline{A}_{N}\) has bandwidth \(K\). Here we adopt the convention that \(\overline{A}_{p,p-t}=0\) if \(p-t<1\) or \(p-t>N\). Then we apply concentration inequalities to each term \(J_{t}\). The resolvent matrix \(\overline{G}^{\infty}\) is deterministic but \(\overline{G}\) is not independent of \(\overline{A}\). We will carry out an inductive argument such that at each step, the matrix elements of \(\overline{G}\) are uniformly bounded with overwhelming probability.

First fix a very large constant \(C\), which is larger than the upper bound in (1.7). Fix some \(E\in(-2+\kappa,2-\kappa)\) and for each \(m\in[1,N]\) define an event \(P_{m}\) as follows (the event \(P_{m}\) depends on \(N\), but we suppress the \(N\)-dependence to simplify notation):

\[P_{m}\stackrel{{\rm def}}{{=}}\left\{\sup_{i,j\in[1,N]}\left|\overline{G}_{i,j}\left(z_{m}\right)\right|<C\right\}, \tag{3.13}\]

where we abbreviate

\[z_{m}=E+i\frac{N+1-m}{N}. \tag{3.14}\]

Obviously, \(\mathbb{P}(P_{1})=1\) as the constant \(C\) is chosen sufficiently large. We will show via an inductive argument that for some \(\nu\) to be determined later,

\[\mathbb{P}(P_{m}^{c})\leq\mathbb{P}(P_{m-1}^{c})+e^{-\nu(\log N)^{\log\log N}} \tag{3.15}\]

for any \(m\leq N-N^{1-\gamma+0.5\epsilon}\). Assume that we have verified (3.15) for each event \(P_{1},\cdots,P_{m}\) ((3.15) is an empty statement if \(m=1\)). Thanks to (3.5) and the inductive assumption, we deduce that

\[|\overline{G}_{ii}(z_{m+1})|\leq 2C,\quad\omega\in P_{m}\]

for each \(i\in[1,N]\). Then applying (3.4) and the inductive hypothesis, we deduce that

\[|\overline{G}_{ij}(z_{m+1})|\leq 2C,\quad\omega\in P_{m}\]

for each \((i,j)\in[1,N]\times[1,N]\). We now wish to apply the concentration inequality in Proposition 3.2 to estimate each summation \(J_{t}\). Two complications arise at this point: first, the \(\overline{G}_{i,p}\) are not independent of \(\overline{A}_{p,l}\), and second, the entries of \(\overline{G}\) are bounded by \(2C\) with high probability but not almost surely.
To solve the second issue we work on the event \(P_{m}\), and to solve the first issue we note that the proof of Proposition 3.2 only involves computing very high powers of the random variables involved, so if \(\overline{G}_{ij}\)'s are bounded a.s. by some constant \(C\), then we can drag the constant out (see Appendix A) and consequently the dependence of \(\overline{A}\) and \(\overline{G}\) does not ruin the analysis. More concretely, we define \[\widetilde{\overline{G}}_{i,p}=\begin{cases}\overline{G}_{i,p},\quad\omega\in P _{m},\\ 2C,\quad\omega\in P_{m}^{c}\end{cases}. \tag{3.16}\] and consider \[\widetilde{J}_{t}=\sum_{p\in[1,N]}\widetilde{\overline{G}}_{i,p}\overline{A}_ {p,p-t}\overline{G}_{p-t,j}^{\infty}. \tag{3.17}\] Then \(J_{t}=\widetilde{J}_{t}\) when \(\omega\in P_{m}\), and by definition, \(\widetilde{\overline{G}}_{i,p}\) is almost surely bounded by \(2C\) for all indices \((i,p)\). Now we apply Proposition (3.2) (indeed, we apply its generalization (3.9)) to each \(\widetilde{J}_{t}\) (to eliminate the dependence between \(\overline{A}\) and \(\widetilde{\overline{G}}\), we use the almost sure upper bound on \(\widetilde{\overline{G}}\) and drag it out of the expectation when computing high moments, see Appendix A for details) and deduce: for each \(2\leq\xi\leq N\) and for each \(t=-K,-K+1,\cdots,K-1,K\), \[\mathbb{P}(|\widetilde{J}_{t}|\geq(\log N)^{\xi}[\frac{2C}{q}+\frac{2C}{N^{ \delta}}+(\frac{1}{N^{\gamma}}\sum_{k\in[1,N]}|G_{k-t,j}^{\infty}|^{2})^{1/2} ])\leq e^{-\nu(\log N)^{\xi}}. \tag{3.18}\] By Ward's identity (3.3), note that we have set \(G_{p-t,j}^{\infty}\) to zero when \(p-t\notin[1,N]\): \[\sum_{p\in[1,N]}|G_{p-t,j}^{\infty}|^{2}\leq\sum_{p\in[1,N]}|G_{p,j}^{\infty} |^{2}=\frac{\Im G_{j,j}^{\infty}}{\eta}\leq\frac{C}{\eta}. \tag{3.19}\] Since \(q>1\), we can ignore the \(q\) factor and just write, for \(2\leq\xi\leq\log\log N\): \[\mathbb{P}\left(|\widetilde{J}_{t}|\geq C(\log N)^{\xi}\left[\frac{1}{q}+\frac{ 1}{N^{\delta}}+\sqrt{\frac{1}{N^{\gamma}\eta}}\right]\right)\leq e^{-\nu(\log N) ^{\xi}}. \tag{3.20}\] We take a union bound over \(t=-K,\cdots,K\): upon slightly changing \(\nu\), \[\mathbb{P}\left(\sup_{t=-K,\cdots,K}|\widetilde{J}_{t}|\geq C(\log N)^{\xi} \left[\frac{1}{q}+\frac{1}{N^{\delta}}+\sqrt{\frac{1}{N^{\gamma}\eta}}\right] \right)\leq e^{-\nu(\log N)^{\xi}}. \tag{3.21}\] Since \(\widetilde{J}_{t}=J_{t}\) on \(P_{m}\) for each \(t=-K,\cdots,K\), we deduce that \[\mathbb{P}\left(\sup_{t=-K,\cdots,K}|J_{t}|\geq C(\log N)^{\xi}\left[\frac{1}{ q}+\frac{1}{N^{\delta}}+\sqrt{\frac{1}{N^{\gamma}\eta}}\right]\right)\leq e^{- \nu(\log N)^{\xi}}+\mathbb{P}(P_{m}^{c}). \tag{3.22}\] We plug in these estimates into the resolvent expansion in (3.11). Then one sees that for any \(m<N-N^{1-\gamma+0.5\epsilon}\), \[\mathbb{P}\left(\big{|}|\overline{G}-\overline{G}^{\infty}|_{ij}(z_{m})\big{|} \geq C(\log N)^{\log\log N}\left[\frac{1}{q}+\frac{1}{N^{\delta}}+\sqrt{\frac {1}{N^{\gamma}\eta}}\right]\right)\leq\mathbb{P}[P_{m}^{c}]+e^{-\nu(\log N)^{ \log\log N}} \tag{3.23}\] for each \(i,j\in[1,N]^{2}\). In particular, thanks to the boundedness of \(\overline{G}^{\infty}\) from Definition 1.2 and our assumption \(\eta>N^{-\gamma+0.5\epsilon}\), we have shown \(|\overline{G}_{ij}(z_{m+1})|<C\) for each \(i,j\) on an event of probability at least \(1-\mathbb{P}[P_{m}^{c}]-e^{-\nu(\log N)^{\log\log N}}\). 
Thus we have proved that (3.15) holds for \(m+1\) in place of \(m\), after slightly changing the value of \(\nu>0\). We can now run this inductive procedure from \(m=1\) up to \(m=N-N^{1-\gamma+0.5\epsilon}\), and after slightly modifying the value of \(\nu\), we have proved that (3.23) holds for any \(z_{m}\) outside a set of probability at most \(e^{-\nu(\log N)^{\log\log N}}\). Combined with a standard continuity estimate (that is, find a set of \(N^{6}\) mesh points in \(\mathcal{S}\) with distance \(N^{-3}\) between two neighboring ones, and note that the Green function \(\overline{G}\) is \(N^{2}\)-Lipschitz continuous in \(z\)), we can upgrade the above estimate to be uniform over \(z\in\mathcal{S}\). This completes the proof of Theorem 3.3.
#### 3.2.1. Local law under different scaling
In this section we prove Theorem 1.24. This will be essentially a corollary of Theorem 3.3. Proof of Theorem 1.24.: We first compute the moments as follows: \[|\mathbb{E}[N^{\sigma}A_{ij}]|\leq\frac{C}{N^{1+\delta}},\quad\mathbb{E}[|N^{\sigma}A_{ij}|^{p}]\leq\frac{C^{p}}{N^{1-\sigma\alpha}N^{\omega(p-\alpha)}},\quad\text{ for each }p\geq 2.\] Then we apply Theorem 3.3 with \(\gamma=1-\sigma\alpha\) and \(q=N^{\omega}\). Note also that we made the assumption \(\eta\geq N^{-1+\epsilon+\sigma\alpha}\). This completes the proof; see the next section for the statements concerning the Wegner estimate and eigenvector delocalization.
### Local law given additional moments
In this section we prove Theorem 1.17. Proof of Theorem 1.17.: We set \(q=N^{\frac{\epsilon}{10\alpha}}\) for some sufficiently small \(\epsilon\). Since \(\xi_{ij}\) are independent and have a finite \(\alpha+\delta\)-th moment, using a union bound and choosing the \(\epsilon>0\) sufficiently small, we can find some \(c_{*}\) such that on an event \(P_{N}\) with probability at least \(1-N^{-c_{*}\epsilon}\), all the elements \(A_{ij}\) have absolute value less than \(q^{-1}\). Let \(\widehat{A}_{ij}\) denote the random variables \(A_{ij}\) conditioned on \(P_{N}\). Then since \(\xi_{ij}\) has a symmetric law we must have \(\mathbb{E}[\widehat{A}_{ij}]=0\). Moreover, for any \(p\geq 2\) we have \(\mathbb{E}[|\widehat{A}_{ij}|^{p}]\leq q^{-(p-\alpha)}\mathbb{E}[|\widehat{A}_{ij}|^{\alpha}]\leq\frac{C}{Nq^{p-\alpha}}\), as our conditioning only decreases these moments and \(A_{ij}=\frac{1}{N^{\frac{1}{\alpha}}}\xi_{ij}\) with \(\xi_{ij}\) having finite \(\alpha\)-moment. Then the claimed local law is a direct corollary of Theorem 3.3.
## 4. Proof of Corollaries
### Wegner estimate
We first prove the Wegner estimate in Corollary 1.13. The Wegner estimates in Theorem 1.24 and Corollary 1.30 can be derived in exactly the same way. Proof of Corollary 1.13.: The Wegner estimate is a simple consequence of the local law established in Theorem 1.11 and the following identity from [18], Proposition 2.1: for any probability distribution \(F(x)\) whose Stieltjes transform is given by \(m(z)\), given \(z=E+i\eta\), consider \(I=[E-\eta/2,E+\eta/2]\); then \[\mathcal{N}_{I}=N\int_{I}dF(x)\leq\frac{5}{4}N\eta\int_{E-\eta/2}^{E+\eta/2}\frac{\eta dF(x)}{(x-E)^{2}+\eta^{2}}\leq\frac{5}{4}N\eta\Im m(z). \tag{4.1}\] Here the first inequality uses that \((x-E)^{2}+\eta^{2}\leq\frac{5}{4}\eta^{2}\) for every \(x\in I\). For Theorem 1.24 and Corollary 1.30, one only needs to use the local laws derived in the respective settings.
### Proof of local arcsine law and eigenvalue rigidity
The proof of Corollary 1.14 on the local arcsine law follows standard procedure but has some slight differences.
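The main analytic tool in this subsection is the Helffer-Sjöstrand representation (see e.g. [10], Section 8): for any \(f\in\mathcal{C}_{c}^{2}(\mathbb{R})\) and any smooth, even, compactly supported \(\chi\) with \(\chi\equiv 1\) in a neighborhood of \(0\), one has, for every \(\lambda\in\mathbb{R}\), \[f(\lambda)=\frac{1}{2\pi}\int_{\mathbb{R}^{2}}\frac{iyf^{\prime\prime}(x)\chi(y)+i(f(x)+iyf^{\prime}(x))\chi^{\prime}(y)}{\lambda-x-iy}\,dx\,dy.\] In the proof below this identity is integrated against the signed measure \(\hat{\mu}\) defined in (4.2); splitting the \(y\)-integration at \(|y|=\eta\) and taking real parts produces the decomposition (4.3)-(4.5).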
Proof of Corollary 1.14, local arcsine law.: The proof is essentially an adaptation of standard methods, based on an application of the Helffer-Sjöstrand formula. See for example [10], Section 8. However we are dealing with the arcsine law rather than the semicircle law, and in our case the convergence rate is very slow, so we give a sketch of the proof and highlight the differences. Let \(\eta=N^{-1+\epsilon}\), consider an interval \(I\subset[-2+\kappa,2-\kappa]\) and define \(f\equiv f_{I,\eta}\in\mathcal{C}_{c}^{\infty}(\mathbb{R},[0,1])\) such that \(f(x)=1\) when \(x\in I\), \(f(x)=0\) when \(\operatorname{dist}(x,I)>\eta\), \(\|f^{\prime}\|_{\infty}\leq C\eta^{-1}\) and \(\|f^{\prime\prime}\|_{\infty}\leq C\eta^{-2}\). Consider a smooth even function \(\chi\in\mathcal{C}_{c}^{\infty}(\mathbb{R},[0,1])\) with \(\chi(y)=1\) for \(|y|\leq 1\), \(\chi(y)=0\) for \(|y|>2\), and \(\|\chi^{\prime}\|_{\infty}\leq C\). Denote by \[\hat{\mu}:=\mu-\rho,\quad\hat{m}(z)=m(z)-m^{as}(z). \tag{4.2}\] Then by the Helffer-Sjöstrand formula, \[\int f(\lambda)\hat{\mu}(d\lambda)= -\frac{1}{2\pi}\int dx\int_{|y|\leq\eta}dy\,f^{\prime\prime}(x)\chi(y)y\Im\hat{m}(x+iy) \tag{4.3}\] \[-\frac{1}{2\pi}\int dx\int_{|y|>\eta}dy\,f^{\prime\prime}(x)\chi(y)y\Im\hat{m}(x+iy) \tag{4.4}\] \[+\frac{1}{2\pi}\int dx\int dy\,(f(x)+iyf^{\prime}(x))\chi^{\prime}(y)\hat{m}(x+iy). \tag{4.5}\] From the local law in Theorem 1.11, we can find some \(c_{*}>0\) such that with probability at least \(1-e^{-\nu\log N^{\log\log N}}\), we have \[|\hat{m}(x+iy)|\leq N^{-c_{*}}\] for all \(x+iy\in\mathcal{S}(\epsilon,\kappa)\). We note that in the local law Theorem 1.11, the convergence rate is very slow even at macroscopic scales; this is a barrier that cannot be overcome here when the \(\xi_{ij}\)'s do not have sub-Gaussian tails.
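Note also that, by the construction of \(f\), the second derivative \(f^{\prime\prime}\) is supported on two intervals of total length \(O(\eta)\) and satisfies \(\|f^{\prime\prime}\|_{\infty}\leq C\eta^{-2}\), so that \(\int_{\mathbb{R}}|f^{\prime\prime}(x)|\,dx\leq C\eta^{-1}\); this elementary bound is used repeatedly below.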
Then we can bound (4.5) thanks to the fact that \(\operatorname{Supp}\chi^{\prime}\subset[-2,2]\setminus[-1,1]\): \[|(4.5)|\leq CN^{-c_{*}}.\] From the estimate \[y\Im\hat{m}(x+iy)\leq y\Im m(x+iy)\leq\eta\Im m(x+i\eta)\leq C\eta,\qquad 0<y\leq\eta,\] where the second inequality follows from the monotonicity of \(y\mapsto y\Im m(x+iy)\) for the Stieltjes transform of any probability measure and the last one from the local law at scale \(\eta\) (the same bound holds for \(-y\Im\hat{m}\), with \(m\) replaced by \(m^{as}\)), we can bound (4.3) as \[|(4.3)|\leq C\eta\int_{\mathbb{R}}|f^{\prime\prime}(x)|\,dx\int_{|y|\leq\eta}dy\leq C\eta\leq CN^{-c_{*}}.\] The remaining term (4.4) is estimated exactly as in [10], Section 8, using the local law bound \(|\hat{m}(x+iy)|\leq N^{-c_{*}}\), which is available for all \(x+iy\in\mathcal{S}(\epsilon,\kappa)\), i.e. for \(\eta\leq|y|\leq 2\); this also yields a contribution bounded by \(CN^{-c_{*}}\), possibly after decreasing the value of \(c_{*}>0\). Altogether, with probability at least \(1-e^{-\nu\log N^{\log\log N}}\), \[\left|\int f(\lambda)\hat{\mu}(d\lambda)\right|\leq CN^{-c_{*}}\] uniformly over intervals \(I\subset[-2+\kappa,2-\kappa]\), which is the claimed local arcsine law. We now turn to the rigidity statement. Let \(\lambda_{1}^{\prime}\leq\cdots\leq\lambda_{N}^{\prime}\) denote the ordered eigenvalues of \(H_{N}^{\infty}+A_{N}\), write \(f(x):=\int_{-2}^{x}\rho(\lambda)\,d\lambda\) for the distribution function of the arcsine law (we reuse the letter \(f\), as the test function is no longer needed), and let \(\gamma_{i}\) be the classical locations defined by \(f(\gamma_{i})=i/N\). Applying the counting estimate just proved to the intervals determined by \(\lambda_{i}^{\prime}\) and \(\gamma_{i}\) yields \(|f(\gamma_{i})-f(\lambda_{i}^{\prime})|\leq CN^{-c_{*}}\) on the same event. Note that \(f^{\prime}(\lambda)=\frac{1}{2\pi\sqrt{1-\lambda^{2}/4}}\) is bounded from above and below on \([0,2-\kappa]\), so we approximately get \(f^{\prime}(\gamma_{i})\approx f^{\prime}(\lambda_{i}^{\prime})\) and then by the mean value theorem, \[|\gamma_{i}-\lambda_{i}^{\prime}|\leq\frac{|f(\gamma_{i})-f(\lambda_{i}^{\prime})|}{f^{\prime}(\gamma_{i})}\leq CN^{-c_{*}}.\] This holds jointly for all \(i\geq 0\) such that \(\lambda_{i}^{\prime}\leq 2-\kappa\), with probability at least \(1-e^{-\nu\log N^{\log\log N}}\). This completes the proof. The proof of the eigenvalue rigidity statements in Corollary 1.30 is exactly the same and hence omitted.
## 5. Eigenvector delocalization for alpha-stable laws
In this section we assume \(A_{N}\) satisfies Assumption 1.8. The proof under Assumption 1.8 is much harder than in the previous cases.
### A slightly weaker version
To illustrate the main ideas, we first prove the following slightly weaker version of Theorem 1.21. The proof of Theorem 1.21 follows similar lines. We stress that in this proposition, we can take all \(z\in\mathcal{S}(\epsilon,\kappa)\) and do not need to restrict to \(\mathcal{S}(\epsilon,\kappa,K,p)\). **Proposition 5.1**.: _Under the same assumptions as Theorem 1.21, assume for simplicity that \(A_{N}\) is a diagonal matrix (i.e. \(K=0\)). Then we can find some \(P_{*}\in(0,1)\) such that for \(N\) sufficiently large, we have the following bound (where \(C\) is the constant given in (1.7))_ \[\mathbb{P}\left(\sup_{z\in\mathcal{S}(\epsilon,\kappa)}\max_{1\leq i,j\leq N}|G_{ij}(z)|\leq 4C\right)\geq P_{*}. \tag{5.1}\] Proof.: In the proof we assume that \(\eta\in[N^{-1+\epsilon},1]\).
We fix a length scale \(L=N^{1-0.5\epsilon}\) and a cutoff value \(q=N^{\frac{0.01\epsilon}{\alpha}}\). Then for each \(i\), \(\mathbb{P}[|A_{ii}|>q^{-1}]\leq N^{-1+0.01\epsilon}\). We first show that it is very unlikely to have a pair \((i,j)\in[1,N]^{2}\) such that \(|A_{ii}|>q^{-1}\) and \(|A_{jj}|>q^{-1}\) for \(|i-j|\leq 2L\): denote this event by \(D^{1}_{N}\), then \[\mathbb{P}(D^{1}_{N})\leq NN^{-1+0.01\epsilon}2LN^{-1+0.01\epsilon}=N^{-0.48 \epsilon}, \tag{5.2}\] where the \(N\) factor is the choice of the location for the first such \(A_{ii}\) and the \(2L\) factor is the choice of the location for \(A_{jj}\) which has distance at most \(L\) to \(A_{ii}\). Moreover, it is also unlikely to have some \(i\in[0,L]\cup[N-L,N]\) with \(|A_{ii}|\geq q^{-1}\): denote this event by \(D^{2}_{N}\), \[\mathbb{P}(D^{2}_{N})\leq 2LN^{-1+0.01\epsilon}=N^{-0.49\epsilon}. \tag{5.3}\] Denote \(D_{N}=D^{1}_{N}\cup D^{2}_{N}\). Then \[\mathbb{P}(D_{N})\leq 2N^{-0.48\epsilon}. \tag{5.4}\] It is more convenient to re-sample the entries of \(A_{N}\) via the following procedure. We first sample the set of labels \((T,F)\) independently on each label \((i,i)\), where a label \(T\) is assigned with probability \(\mathbb{P}[|A_{ii}|<q^{-1}]=1-N^{-1+0.01\epsilon}\) and a label \(F\) is assigned with probability \(\mathbb{P}[|A_{ii}|>q^{-1}]=N^{-1+0.01\epsilon}\). Then with probability at least \(1-2N^{-0.48\epsilon}\) the event \(D_{N}\) does not happen. Having sampled the labels \((T,F)\), we then sample \(\xi_{ii}:=N^{\frac{1}{\alpha}}A_{ii}\) for \((i,i)\) with label \(T\) from the probability law (1.13) conditioned to take values in \(\xi_{ii}\in[-N^{\frac{1-0.01\epsilon}{\alpha}},N^{\frac{1-0.01\epsilon}{ \alpha}}]\), and we sample \(\xi_{ii}\) for \((i,i)\) with label \(F\) from the probability law (1.13) conditioned to take values in \(\xi_{ii}\in(-\infty,-N^{\frac{1-0.01\epsilon}{\alpha}}]\cup[N^{\frac{1-0.01 \epsilon}{\alpha}},\infty)\). It is not hard to check that the matrix \(A_{N}\) sampled from the said procedure, with \(0\) filled in the remaining elements, has the same distribution as \(A_{N}\) in Assumption 1.8, conditioned on \(D_{N}^{c}\). Now we make an additional assumption that none of \(A_{ii}\) is large: define \[E_{N}:=\left\{|A_{ii}|\geq\frac{1}{4\sup_{z\in\mathcal{S},i,j\in[1,N]^{2}}|G_{ ij}^{\infty}(z)|}\text{ for some }i\in[1,N]\right\}.\] Thanks to the derivation in Section 2.1, the supremum in the denominator is non-zero and we denote the supremum by \(C\). By definition of the probability law (1.13), \[\mathbb{P}(E_{N}^{c})\geq(1-\frac{(4C)^{\alpha}}{N})^{N}\geq C_{*}>0 \tag{5.5}\] for some constant \(C_{*}\) independent of \(N\). We now prove that the entries \(G_{ij}\) are bounded with positive probability, via induction on \(\eta\) from \(\eta=1\) to \(\eta=N^{-1+\epsilon}\), where \(z=E+\eta i\). Recall that we abbreviate \[z_{m}=E+i\frac{N+1-m}{N},\quad m=1,\cdots,N,\] and now we consider the event \[Q_{m}\stackrel{{\rm def}}{{=}}\left\{\sup_{i,j\in[1,N]^{2}}\sup _{E\in[-2+\kappa,2-\kappa]}|G_{ij}\left(z_{m}\right)|<4C\mid E_{N}^{c}\cap D_ {N}^{c}\right\}, \tag{5.6}\] The goal is to show by induction that for some \(\nu>0\), \[\mathbb{P}(Q_{m}^{c})\leq\mathbb{P}(Q_{m-1}^{c})+e^{-\nu\log N^{\log\log N}}. \tag{5.7}\] Before we proceed, we mention that conditioning on \(E_{N}^{c}\) does not break the independence of the \(A_{ii}\)'s, and conditioning on \(D_{N}^{c}\) does not break the independence of the \(A_{ii}\)'s such that \((i,i)\) has label \(T\), i.e. 
\(|A_{ii}|<q^{-1}\). Further, the conditioned versions of \(A_{ii}\)'s are still centered thanks to the fact that \(\xi_{ii}\) is assumed to have a symmetric law. Finally, for any label \((i,i)\) with label \(T\), our conditioning procedure reduces the variance and high moments of these \(A_{ii}\)'s. In the following we still use the notation \(A_{ii}\) to denote \(A_{ii}\) conditioned on \(E_{N}^{c}\cap D_{N}^{c}\). For the initial state \(m=1\), \(Q_{m}\) holds almost surely thanks to the fact that \(\Im z_{1}=1\). Now we condition on \(Q_{m-1}\) and work towards the estimate for \(Q_{m}\). Thanks to 3.5, we must have \[\sup_{i,j\in[1,N]^{2}}\sup_{E\in[-2+\kappa,2-\kappa]}|G_{ij}(z_{m})|<8C,\quad \omega\in Q_{m-1}.\] Consider an arbitrary index pair \((i,j)\in[1,N]^{2}\). Then from resolvent expansion, \[G_{ij}-G_{ij}^{\infty}=-\sum_{k}G_{ik}A_{kk}G_{kj}^{\infty}=T_{ij}^{1}+T_{ij}^ {2}+T_{ij}^{3}, \tag{5.8}\] where we decompose the summation into three terms, with \[T_{ij}^{1} =-\sum_{|k-j|\leq L,|A_{kk}|<q^{-1}}G_{ik}A_{kk}G_{kj}^{\infty},\] \[T_{ij}^{2} =-\sum_{|k-j|\leq L,|A_{kk}|>q^{-1}}G_{ik}A_{kk}G_{kj}^{\infty},\] and \[T_{ij}^{3}=-\sum_{|k-j|\geq L}G_{ik}A_{kk}G_{kj}^{\infty}.\] The summation \(T^{2}_{ij}\) has at most one term as we condition on the event \(D^{c}_{N}\). By definition of \(E^{c}_{N}\), it is immediate that \(|T^{2}_{ij}|\leq\frac{1}{4}\cdot 8C=2C\) on the event \(Q_{m-1}\). The term \(T^{3}_{ij}\) can be bounded directly: (i) by the exponential decay property in Proposition 2.3, we have \(|G^{\infty}_{kj}|\leq e^{-N^{0.5\epsilon}}\); (ii) there are at most \(N\) terms in the summation; (iii) \(|G_{ik}|\) is bounded by \(8C\) and (iv) \(|A_{kk}|\) is bounded by \(\frac{1}{4C}\). Combining these facts we have \(|T^{3}_{ij}|\leq 2Ce^{-N^{0.25\epsilon}}\) and is thus very small. We bound the term \(T^{1}_{ij}\) via concentration inequality. Just as in equations (3.16) and (3.17), we introduce an (almost surely bounded) random variable \(\widetilde{G}_{ik}(z_{m})\) which equals \(G_{ik}(z_{m})\) on \(Q_{m-1}\) and equals \(8C\) on \(Q^{c}_{m-1}\). Then define \(\widetilde{T}^{1}_{ij}\) as \(T^{1}_{ij}\) replacing \(G_{ik}\) by \(\widetilde{G}_{ik}\). Thus by definition, we see \(T^{1}_{ij}=\widetilde{T}^{1}_{ij}\) on \(Q_{m-1}\). We will use an estimate derived as in (6.4), which further implies, thanks to our conditioning procedure, that for any index \((i,i)\) with label \(T\), we have \[\mathbb{E}[|A_{ii}|^{p}]\leq\frac{C}{N^{1-10^{-3}\epsilon}q^{p-\alpha}},\quad q =N^{\frac{\epsilon}{100\alpha}}.\] Now we apply Proposition 3.2, which implies that with probability at least \(1-e^{-\nu\log N^{\log\log N}}\) we have \[|\widetilde{T}^{1}_{ij}|\leq N^{0.001\epsilon}\left[\frac{8C^{2}}{N^{\frac{0. 001\epsilon}{\alpha}}}+C^{2}\sqrt{\frac{1}{N^{1-0.001\epsilon}\eta}}\right] \leq C^{2}N^{-0.004\epsilon},\] where we used Ward's identity and the assumption \(\eta\geq N^{-1+\epsilon}\). This bound can be upgraded to hold for all indices \((i,j)\in[1,N]^{2}\) and any \(E\in[\kappa-2,2-\kappa]\) simultaneously with probability at least \(1-e^{-\nu\log N^{\log\log N}}\) for some \(\nu>0\) thanks to the fact that each \(A_{kk}\) is bounded by \(1\), each \(G_{ik}\) is \(N^{2}\)-Lipschitz in \(z\) and that \(e^{-\log N^{\log\log N}}\) decays faster than any polynomial rate \(N^{-D}\). We finally conclude with \[\mathbb{P}(|T^{1}_{ij}|\geq C^{2}N^{-0.004\epsilon})\leq e^{-\nu\log N^{\log \log N}}+\mathbb{P}(Q^{c}_{m-1}). \tag{5.9}\] Now turn back to (5.8). 
With probability at least \(1-(e^{-\nu\log N^{\log\log N}}+\mathbb{P}(Q^{c}_{m-1}))\), we must have \[|G_{ij}|\leq|G^{\infty}_{ij}|+|T^{1}_{ij}|+|T^{2}_{ij}|+|T^{3}_{ij}|\leq C+2C+C^{2}N^{-0.004\epsilon}+2Ce^{-N^{0.25\epsilon}}\leq 4C\] for \(N\) large, and hence \(|G_{ij}|\leq 4C\) for all \((i,j)\). This establishes (5.7) for the index \(m\). Running the induction procedure down to \(\eta=N^{-1+\epsilon}\), this proves the boundedness of all \(G_{ij}\). This finishes the proof of Proposition 5.1 thanks to the estimates (5.7), (5.4) and (5.5).
### Proof of main result
Now we are ready to prove Theorem 1.21. We will crucially use the property that \(G^{\infty}_{ij}\) has a non-vanishing imaginary part when \(i\) and \(j\) are close; see Proposition 2.4. Proof of Theorem 1.21, (1).: We again choose \(\eta\in[N^{-1+\epsilon},1]\), \(L=N^{1-0.5\epsilon}\) and \(q=N^{\frac{0.01\epsilon}{\alpha}}\). We work under the general assumption that \(A_{N}\) has bandwidth \(K\), i.e. \(A_{ij}=0\) whenever \(|i-j|>K\). Define as in the previous proof the event that atypical \(A_{ij}\) locations are not distant: \[D^{1}_{N}:=\{\text{there exists }i\leq j,i^{\prime}\leq j^{\prime}\in[1,N]^{4}:|A_{ij}|>q^{-1},|A_{i^{\prime}j^{\prime}}|>q^{-1},|i-i^{\prime}|\leq 2L\}. \tag{5.10}\] \[D^{2}_{N}:=\{\text{there exists }i\leq j\in[1,N]^{2}:|A_{ij}|>q^{-1},i\in[0,L]\cup[N-L,N]\}, \tag{5.11}\] and \(D_{N}:=D_{N}^{1}\cup D_{N}^{2}\). Since \(L\) is rapidly growing in \(N\) and \(K\) is fixed, \(|i-i^{\prime}|\leq 2L\) implies \(|j-j^{\prime}|\leq 2L+2K\sim 2L\), and \(i\leq L\) implies \(j\leq L+K\sim L\). We show as in the previous proof that \(\mathbb{P}(D_{N})\) is very small: for some fixed constant \(c_{*}\): \[\mathbb{P}(D_{N})\leq K^{2}NLN^{2(-1+0.01\epsilon)}+2KLN^{-1+0.01\epsilon}\leq N^{-c_{*}\epsilon}.\] As in the previous proof, we assign labels \(T\) and \(F\) to entries \((i,j)\) independently, where a label \(T\) is assigned with probability \(P:=\mathbb{P}(|\xi_{ij}|\leq N^{\frac{1}{\alpha}}q^{-1})\) and a label \(F\) is assigned with probability \(1-P\). We then resample the matrix \(A_{N}\) by first re-sampling the labels \(T\) and \(F\), and then sampling \(A_{ij}\) based on their labels. For some very large \(C\) depending on the constant \(C_{\kappa,K,p}\) in Proposition 2.4 (the value of \(C\) will be specified later), we consider the event \[R_{m}\stackrel{{\rm def}}{{=}}\left\{\sup_{i,j\in[1,N]}\sup_{E\in[-2+\kappa,2-\kappa]\setminus\Delta_{K}^{p}}|G_{i,j}\left(z_{m}\right)|<4C\mid D_{N}^{c}\right\}, \tag{5.12}\] and we aim to show by induction that for some \(\nu>0\), \[\mathbb{P}(R_{m}^{c})\leq\mathbb{P}(R_{m-1}^{c})+e^{-\nu\log N^{\log\log N}}. \tag{5.13}\] Instead of starting from \(m=1\), we now start from a fixed number \(m_{0}\) such that \(\frac{N+1-m_{0}}{N}<d_{\kappa}\), where \(d_{\kappa}\) is given in Proposition 2.4. Upon choosing \(C:=\hat{C}\) sufficiently large we can assume (5.13) is already satisfied for \(m=1,\cdots,m_{0}\) with the bound \(\hat{C}\). Assume that we have verified (5.13) up to \(m-1\geq m_{0}\). We now verify it holds for \(m\). Again thanks to (3.5), we must have \[\sup_{i,j\in[1,N]^{2}}\sup_{E\in[-2+\kappa,2-\kappa]}|G_{ij}(z_{m})|<8C,\quad\omega\in R_{m-1}.\] In the following we say a site \((i,j)\) is atypical if \(|A_{ij}|\geq q^{-1}\). We now divide the possible index pairs \((i,j)\) into three cases: * In case (A), \(j\) is at distance at least \(L\) from any index \(k\) such that \(|A_{j^{\prime}k}|>q^{-1}\) for some \(j^{\prime}\).
Informally, this means that for any \(k\) within distance \(L\) of \(j\), no atypical sites of \(A_{*k}\) enter the sum (5.14). Such index pairs are safe to analyze. * In case (B), there exists an index \(k\) with \(0<|j-k|<L\) and another index \(j^{\prime}\notin\{i,j\}\) such that \(|A_{j^{\prime}k}|>q^{-1}\) (on \(D_{N}^{c}\) there is at most one such index pair \((j^{\prime},k)\) modulo the symmetry constraint). Informally, this means that there must be an atypical \(A_{j^{\prime}k}\) in the summation such that \(k\) is within distance \(L\) of the endpoint \(j\), but \(A_{j^{\prime}k}\) and \(A_{kj^{\prime}}\) are not adjacent to \(G_{ij}\). These index pairs are the most dangerous to analyze. * In case (C), one can find an index \(k\) with \(|A_{jk}|>q^{-1}\). Informally, this means that \(j\) is the first label of an atypical value \(A_{jk}\) which enters into the sum. The analysis of case (C) will be the prerequisite for analyzing case (B). In any of these cases, by resolvent expansion, we have \[G_{ij}-G_{ij}^{\infty}=-\sum_{k,l}G_{ik}A_{kl}G_{lj}^{\infty}=T_{ij}^{1}+T_{ij}^{2}+T_{ij}^{3}, \tag{5.14}\] where we decompose the sum into three terms, with all the summations over \(k,l\in[1,N]^{2}:\) \[T_{ij}^{1}=-\sum_{|l-j|\leq L,|A_{kl}|<q^{-1}}G_{ik}A_{kl}G_{lj}^{\infty},\] \[T_{ij}^{2}=-\sum_{|l-j|\leq L,|A_{kl}|>q^{-1}}G_{ik}A_{kl}G_{lj}^{\infty},\] and \[T_{ij}^{3}=-\sum_{|l-j|\geq L}G_{ik}A_{kl}G_{lj}^{\infty}.\] First consider case (A), where the \(T_{ij}^{2}\) term is not present. Then we decompose \(T_{ij}^{1}\) into \(2K+1\) sub-sums, according to the appearance of \(A_{kl}\) with \(k-l=-K,\cdots,K\). For each term we use the concentration inequality (Proposition 3.2, defining the truncated versions \(\widetilde{G}_{ik}\) and \(\widetilde{T}_{ij}^{1}\) as in (3.16) and (3.17)) and deduce that, with probability at least \(1-e^{-\nu\log N^{\log\log N}}-\mathbb{P}(R_{m-1}^{c})\), we have \[|T_{ij}^{1}|\leq N^{0.001\epsilon}\left[\frac{8C^{2}N^{0.002\epsilon}}{N^{\frac{0.01\epsilon}{\alpha}}}+C^{2}\sqrt{\frac{1}{N^{1-10^{-3}\epsilon}\eta}}\right]\leq C^{2}N^{-0.001\epsilon}. \tag{5.15}\] For the term \(T_{ij}^{3}\), we use the exponential decay of \(G_{lj}^{\infty}\) for large \(|l-j|\) (Proposition 2.3) to bound almost surely \(|T_{ij}^{3}|\leq e^{-N^{c_{*}\epsilon}}\) for some \(c_{*}>0\). Taking these bounds back to the resolvent expansion we necessarily have \(|G_{ij}(z_{m})|\leq 4C\). We are done for case (A). Then we consider case (C). The bounds for \(T_{ij}^{1}\) and \(T_{ij}^{3}\) are exactly the same as in case (A) and they can be upper bounded by a vanishing quantity \(N^{-0.001\epsilon}\). As we are in case (C), \(T_{ij}^{2}\) contains only one or two terms, depending on whether the atypical value \(A_{jl}\) is on the diagonal or not, which are \[T_{ij}^{2}=\begin{cases}-G_{ij}A_{jl}G_{lj}^{\infty},\quad j=l,\\ -G_{ij}A_{jl}G_{lj}^{\infty}-G_{il}A_{lj}G_{jj}^{\infty},\quad j\neq l.\end{cases}\] (there is one by definition of case (C), and there is only one modulo the symmetry restriction as we work on \(D_{N}^{c}\)). To fix ideas, we first work in the case \(j=l\), so we only have one term in \(T_{ij}^{2}\). Then we can rewrite the resolvent expansion as \[G_{ij}(1+A_{jl}G_{lj}^{\infty})=G_{ij}^{\infty}+T_{ij}^{1}+T_{ij}^{3}. \tag{5.16}\] This is the most critical point of the proof.
We argue that whatever the value of \(A_{jl}\), we can always invert \(1+A_{jl}G_{lj}^{\infty}\) with a uniformly bounded inverse, so that \(|G_{ij}|\) has an upper bound that is independent of \(N\) and \(m\)! Indeed, if \(|A_{jl}|<\frac{1}{4C_{*}}\), where \(C_{*}\) is the supremum of the norms of \(G_{ij}^{\infty}\) on \(\mathcal{S}\), then the real part of \(1+A_{jl}G_{lj}^{\infty}\) is at least \(\frac{3}{4}\). Otherwise \(|A_{jl}|\geq\frac{1}{4C_{*}}\); since \(j=l\), \(\Im G_{lj}^{\infty}\) is bounded strictly away from \(0\) uniformly for \(E=\Re z\in[-2+\kappa,2-\kappa]\) by Proposition 2.4 (1), so the imaginary part of \(1+A_{jl}G_{lj}^{\infty}\) is also bounded strictly away from zero. Combining both cases, we can derive the following very useful estimate: there exists another constant \(C_{1}\) depending only on \(C_{*}\) and the constant \(C_{\kappa}\) in Proposition 2.4 (1) such that for any pair \((i,j)\) in case (C), \[|G_{ij}(z_{m})|\leq C_{1}\min(1,\frac{1}{|A_{jl}|}) \tag{5.17}\] with probability at least \(1-e^{-\nu\log N^{\log\log N}}-\mathbb{P}(R_{m-1}^{c})\). Now we consider the non-diagonal case \(j\neq l\), so that there are two terms in \(T^{2}_{ij}\). Then the resolvent expansions for \(G_{ij}\) and \(G_{il}\) read: \[\begin{cases}G_{ij}(1+A_{jl}G^{\infty}_{lj})+G_{il}A_{lj}G^{\infty}_{jj}=G^{\infty}_{ij}+T^{1}_{ij}+T^{3}_{ij},\\ G_{il}(1+A_{lj}G^{\infty}_{jl})+G_{ij}A_{jl}G^{\infty}_{ll}=G^{\infty}_{il}+T^{1}_{il}+T^{3}_{il}.\end{cases} \tag{5.18}\] To solve for \(G_{ij}\) and \(G_{il}\), we have to invert the matrix \[\Lambda_{jl}:=\begin{pmatrix}1+A_{jl}G^{\infty}_{lj}&A_{lj}G^{\infty}_{jj}\\ A_{jl}G^{\infty}_{ll}&1+A_{lj}G^{\infty}_{jl}\end{pmatrix}. \tag{5.19}\] We compute its determinant: \[\begin{split}\det\Lambda_{jl}&=(1+A_{jl}G^{\infty}_{lj})^{2}-A_{jl}^{2}G^{\infty}_{ll}G^{\infty}_{jj}\\ &=(1+A_{jl}\Re G^{\infty}_{lj})^{2}-A_{jl}^{2}|\Im G^{\infty}_{lj}|^{2}+A_{jl}^{2}|\Im G^{\infty}_{ll}||\Im G^{\infty}_{jj}|(1+o(1))\\ &\quad+2i(1+A_{jl}\Re G^{\infty}_{lj})A_{jl}\Im G^{\infty}_{lj},\end{split} \tag{5.20}\] where in the second equality we used the fact that \(G^{\infty}_{ii}\) is very close to a purely imaginary number, thanks to Proposition 2.4, (1). Now thanks to Proposition 2.4, (2) and (3), and the fact that \(\Im G^{\infty}_{ll}(z)\) is sufficiently close to \(m^{as}(z)\), we can find constants \(D_{\kappa,1}>0\) and \(D_{\kappa,K,p}>0\) such that (if \(K=0,1\) use the constant \(D_{\kappa,1}\); otherwise use the constant \(D_{\kappa,K,p}\)) \[|\det\Lambda_{jl}|\geq\Re\det\Lambda_{jl}\geq D_{\kappa,K,p}|A_{jl}|^{2}|m^{as}(z)|^{2}. \tag{5.21}\] This bound will be used when \(|A_{jl}|\) is large. When \(|A_{jl}|\) is small, say \(|A_{jl}|\leq\frac{1}{10C_{*}}\) where \(C_{*}\) is the supremum of \(|G^{\infty}_{ij}|\) on \(\mathcal{S}\), a simple computation shows that \[|\det\Lambda_{jl}|\geq\Re\det\Lambda_{jl}\geq 0.81. \tag{5.22}\] Now we can invert the matrix and solve the linear system (5.18). We do the computations separately in the two cases (\(|A_{jl}|\) small and \(|A_{jl}|\) large) and use the corresponding estimates on \(\det\Lambda_{jl}\).
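Concretely, writing the right-hand sides of (5.18) as \(R_{ij}:=G^{\infty}_{ij}+T^{1}_{ij}+T^{3}_{ij}\) and \(R_{il}:=G^{\infty}_{il}+T^{1}_{il}+T^{3}_{il}\) (this shorthand is introduced only for the present computation), Cramer's rule gives \[G_{ij}=\frac{(1+A_{lj}G^{\infty}_{jl})R_{ij}-A_{lj}G^{\infty}_{jj}R_{il}}{\det\Lambda_{jl}},\qquad G_{il}=\frac{(1+A_{jl}G^{\infty}_{lj})R_{il}-A_{jl}G^{\infty}_{ll}R_{ij}}{\det\Lambda_{jl}}.\] Both numerators are bounded by \(C(1+|A_{jl}|)\) once the error terms are controlled, so (5.21) gives a bound of order \(|A_{jl}|^{-1}\) when \(|A_{jl}|\) is large, while (5.22) gives an \(N\)-independent bound when \(|A_{jl}|\) is small; this is how the estimates (5.23) below are obtained.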
With high probability \(T^{1}_{ij},T^{3}_{ij},T^{1}_{il},T^{3}_{il}\) are all negligible, so that we can find \(C_{1}\) depending only on \(C_{*}\) and \(D_{\kappa,K,p}\) such that \[|G_{ij}(z_{m})|\leq C_{1}\min(1,\frac{1}{|A_{jl}|}),\quad|G_{il}(z_{m})|\leq C_{1}\min(1,\frac{1}{|A_{jl}|}) \tag{5.23}\] with probability at least \(1-e^{-\nu\log N^{\log\log N}}-\mathbb{P}(R^{c}_{m-1})\). This finishes the inductive proof for case (C). Now we are ready for case (B). The terms involved in \(T^{1}_{ij}\) and \(T^{3}_{ij}\) can be bounded just as in the previous two cases and are, with probability \(1-e^{-\nu\log N^{\log\log N}}-\mathbb{P}(R^{c}_{m-1})\), smaller than \(N^{-0.001\epsilon}\). The term \(T^{2}_{ij}\) equals \(-G_{ik}A_{kl}G^{\infty}_{lj}\) if \(k=l\), or \(-G_{ik}A_{kl}G^{\infty}_{lj}-G_{il}A_{lk}G^{\infty}_{kj}\) if \(k\neq l\). We have proved in the previous paragraph that any such pair \((i,k)\) (that corresponds to a unique pair \((k\leq l)\) with \(|A_{kl}|>q^{-1}\)) must satisfy \(|G_{ik}|<C_{1}\min(1,\frac{1}{|A_{kl}|})\). Then we easily derive from the resolvent identity that with probability at least \(1-e^{-\nu\log N^{\log\log N}}-\mathbb{P}(R^{c}_{m-1})\), for \((i,j)\) in case (B), \[|G_{ij}(z_{m})|\leq 2C_{1}C_{*}.\] It suffices to take \(C=\max(4C_{1}C_{*},\hat{C})\) in the definition (5.12) of \(R_{m}\). This finishes the inductive step from \(m-1\) to \(m\), and hence the proof of this part of the theorem. Finally we prove the second part of Theorem 1.21, that is, we show that with positive probability there exists some entry \((i,j)\) such that \(G_{ij}(z)\) does not converge to \(G_{ij}^{\infty}(z)\). Proof of Theorem 1.21, (2).: The proof follows from analyzing (5.16). We have proved that with overwhelming probability \(T_{ij}^{1}\) and \(T_{ij}^{3}\) are smaller than \(N^{-0.001\epsilon}\). In the case \(l=j\), \(|G_{lj}^{\infty}|\) is uniformly bounded away from zero thanks to Proposition 2.4 (1), so that once \(A_{jl}\in[\frac{1}{2},\infty)\) we must be able to find a constant \(C_{0}\) with \(|G_{ij}-G_{ij}^{\infty}|\geq C_{0}\). With positive probability there exists some \(j=l\) with \(A_{jl}\in[\frac{1}{2},\infty)\). This completes the proof.
#### 5.2.1. Eigenvector de-localization
Finally we justify Corollary 1.23. Proof of Corollary 1.23.: Let \(u_{i}(k)\), \(i,k=1,\cdots,N\), denote the \(k\)-th coordinate of the normalized eigenfunction that corresponds to the \(i\)-th smallest eigenvalue \(\lambda_{i}\) of \(H_{N}^{\infty}+A_{N}\). We restrict to \(\lambda_{i}\in[-2+\kappa,2-\kappa]\setminus\Delta_{K}^{p}\). In the following we choose a random spectral parameter \(z:=\lambda_{i}+i\eta\) where \(\eta=N^{-1+\epsilon}\). Then from Theorem 1.21, (1), we can find a sufficiently small \(c_{*}>0\) such that with probability at least \(1-N^{-c_{*}\epsilon}\), the following holds for all \(k\in[1,N]\) and all \(\lambda_{i}\in[-2+\kappa,2-\kappa]\setminus\Delta_{K}^{p}\): \[C>\Im G_{kk}(z)=\sum_{j=1}^{N}\frac{\eta}{(\lambda_{j}-\lambda_{i})^{2}+\eta^{2}}|u_{j}(k)|^{2}\geq\frac{1}{\eta}|u_{i}(k)|^{2}=N^{1-\epsilon}|u_{i}(k)|^{2}. \tag{5.24}\] This completes the proof.
### Green function estimates under different scaling
In this section we prove Theorem 1.25. This will be a careful adaptation of the proof of Theorem 1.21. Proof of Theorem 1.25.: In the proof we take \(\eta\in[N^{-1+2\sigma\alpha+\epsilon},1]\). We fix a length scale \(L=N^{1-2\sigma\alpha-0.5\epsilon}\) and a cutoff value \(q=N^{\frac{0.01\epsilon}{\alpha}}\).
We only give a sketch by mostly following the proof of Theorem 1.21, and only highlight where some changes are needed. Define as in the previous proof the event that atypical \(A_{ij}\) locations are not distant: \[D_{N}:= \{\text{there exists }i\leq j,i^{\prime}\leq j^{\prime}\in[1,N]^{4}:N^{\sigma}|A_{ij}|>q^{-1},N^{\sigma}|A_{i^{\prime}j^{\prime}}|>q^{-1},|i-i^{\prime}|\leq 2L\}\] \[\cup\{\text{ there exists }i\leq j\in[1,N]^{2}:N^{\sigma}|A_{ij}|>q^{-1},i\in[1,L]\cup[N-L,N]\}. \tag{5.25}\] Recall that \(A_{ij}=\frac{1}{N^{\frac{1}{\alpha}}}\xi_{ij}\), so that the perturbation entries are \(N^{\sigma}A_{ij}\), and we have \[\mathbb{P}(D_{N})\leq KN\cdot(2K-1)2L\cdot N^{2(-1+\sigma\alpha+0.01\epsilon)}+2KLN^{-1+\sigma\alpha+0.01\epsilon}\leq N^{-0.48\epsilon}, \tag{5.26}\] where we use that \(\mathbb{P}(N^{\sigma}|A_{ij}|>q^{-1})=\mathbb{P}(|\xi_{ij}|\geq N^{\frac{1-0.01\epsilon}{\alpha}-\sigma})\). We adopt the same resolvent expansion as in (5.14). This time we have the three terms \[T_{ij}^{1}=-\sum_{|l-j|\leq L,N^{\sigma}|A_{kl}|<q^{-1}}N^{\sigma}G_{ik}A_{kl}G_{lj}^{\infty},\] \[T_{ij}^{2}=-\sum_{|l-j|\leq L,N^{\sigma}|A_{kl}|>q^{-1}}N^{\sigma}G_{ik}A_{kl}G_{lj}^{\infty},\] and \[T_{ij}^{3}=-\sum_{|l-j|\geq L}N^{\sigma}G_{ik}A_{kl}G_{lj}^{\infty}.\] We first estimate \(T^{1}_{ij}\). To derive a sharp estimate we will use the estimate (6.14) derived in a later section. Granted (6.14), we apply the concentration inequality just as in (5.15) to derive that, with probability at least \(1-e^{-\nu\log N^{\log\log N}}\), \[|T^{1}_{ij}|\leq N^{0.001\epsilon}\left[\frac{8C^{2}}{N^{\frac{0.01\epsilon}{\alpha}}}+C^{2}\sqrt{\frac{1}{N^{1-\sigma\alpha-10^{-3}\epsilon}\eta}}\right]\leq C^{2}N^{-0.001\epsilon},\] thanks to the assumption that \(\eta>N^{-1+\sigma\alpha+\epsilon}\). (We have omitted the truncation procedure and the induction procedure on the length scale of \(\eta\): they are exactly the same as in the previous proofs.) To estimate \(T^{3}_{ij}\), note that with probability tending to \(1\), we may assume that \(|N^{\sigma}A_{ij}|\leq N^{1.1\sigma}\) for all pairs \((i,j)\), and we condition on this event. Thanks to the decay property (Proposition 2.3) of \(G^{\infty}\), namely \(|G^{\infty}_{ij}(z)|\leq Ce^{-N^{0.5\epsilon}}\) for all \(|i-j|>L\), and the fact that \(\eta>N^{-1+2\sigma\alpha+\epsilon}\), we may bound \(|T^{3}_{ij}|\leq CN^{-c_{*}\epsilon}\) just as in the previous proof, assuming that we have an a-priori bound \(|G_{ij}|\leq C\) for all \(i,j\). Granted these estimates, we can categorize indices \((i,j)\) into the three different classes (A), (B) and (C) as in the previous proof, but now we say \((i,j)\) is atypical if \(N^{\sigma}|A_{ij}|>q^{-1}\). More precisely, each \((i,j)\) lies in one of the following three cases: in case (A), \(j\) is at distance at least \(L\) from any index \(k\) such that \(N^{\sigma}|A_{j^{\prime}k}|>q^{-1}\) for some \(j^{\prime}\). In case (B), there exists an index \(k\) with \(0<|j-k|<L\) and another index \(j^{\prime}\neq j\) such that \(N^{\sigma}|A_{j^{\prime}k}|>q^{-1}\) (on \(D^{c}_{N}\) there is at most one such index pair \((j^{\prime},k)\)). In case (C), one can find an index \(k\) with \(N^{\sigma}|A_{jk}|>q^{-1}\). Then we follow the induction argument (5.12) and (5.13) up until \(\eta=N^{-1+2\sigma\alpha+\epsilon}\). For \((i,j)\) in case (A), we follow exactly the same argument as in the previous proof, and the desired estimate on \(|G_{ij}|\) readily follows. The induction step from \(m\) to \(m+1\) is verified for these index pairs \((i,j)\).
For \((i,j)\) in case (C), we again use Proposition 2.4 to solve the linear system (5.18) and deduce that for all such pairs \((i,j)\) that correspond to a unique \((j,l)\) with \(N^{\sigma}|A_{jl}|>q^{-1}\), we can find \(C_{1}>0\) depending only on \(C_{*}\) and \(C_{\kappa,K,p}\) that \[|G_{ij}(z_{m})|\leq C_{1}\min(1,\frac{1}{N^{\sigma}|A_{jl}|}),\quad|G_{il}(z_{ m})|\leq C_{1}\min(1,\frac{1}{N^{\sigma}|A_{lj}|}). \tag{5.27}\] Finally for \((i,j)\) in class (B), as in the previous proof, the \(T^{1}_{ij}\) and \(T^{3}_{ij}\) terms are vanishing with high probability. The \(T^{2}_{ij}\) term \(-G_{ik}\cdot N^{\sigma}A_{kl}\cdot G^{\infty}_{lj}\) or \(-G_{ik}\cdot N^{\sigma}A_{kl}\cdot G^{\infty}_{lj}-G_{il}\cdot N^{\sigma}A_{lk }\cdot G^{\infty}_{kj}\) is bounded by \(4C_{1}C_{*}\) thanks to (5.27). Thus we have justified the inductive hypothesis at \(m\). For the remaining parts, we exactly follow the proof of Theorem 1.21. ## 6. Local law for the Stieltjes transform In this section we prove both Theorem 1.28 and Theorem 1.32. We begin with the first part, that is, \(H_{N}=H_{N}^{\infty}+A_{N}\). We first show that with overwhelming probability, most of the random variables in \(A_{N}\) have a small upper bound. Without loss of generality assume the function \(L(x)\) in (1.13) is bounded by \(1\) for all \(x\in[0,\infty)\). The general case proceeds after some minor change. **Lemma 6.1**.: _Assuming that Assumption 1.8 is satisfied. Let \(\Delta_{\epsilon}\) denote the event that there are at least \(2(K+1)N^{\frac{\epsilon}{4}}\) elements of the matrix \(A_{N}\) that have absolute value larger than \((K+1)^{\frac{1}{\alpha}}N^{-\frac{\epsilon}{10\alpha}}\). Then \(\mathbb{P}[\Delta_{\epsilon}]\leq e^{-CN^{\epsilon}\log N}\) for some universal constant \(C>0\)._ Proof.: Thanks to the fact that Assumption 1.8 is satisfied for \(L(x)\) bounded by \(1\), \[\mathbb{P}[\Delta_{\epsilon}] \leq\sum_{\ell=(K+1)N^{\frac{\epsilon}{4}}}^{(K+1)N}\binom{(K+1)N }{\ell}\mathbb{P}[|\xi_{ij}|\geq(K+1)^{\frac{1}{\alpha}}N^{\frac{1}{\alpha}- \frac{\epsilon}{10\alpha}}]^{\ell} \tag{6.1}\] \[\leq\sum_{\ell=(K+1)N^{\frac{\epsilon}{4}}}^{(K+1)N}\binom{(K+1)N }{\ell}((K+1)N)^{-\ell}N^{\frac{\epsilon}{10}\ell}.\] By a simple combinatorial estimate, \[\mathbb{P}[\Delta_{\epsilon}]\leq\sum_{\ell=((K+1)N^{\frac{\epsilon}{4}}}^{(K +1)N}\frac{1}{\ell!}N^{\frac{\epsilon}{10}\ell}.\] From the elementary inequality \(\ell!\geq(\lfloor\frac{\ell}{2}\rfloor)!N^{\frac{\epsilon}{10}(K+1)\ell}\) given \(\ell\geq(K+1)N^{\frac{\epsilon}{4}}\), we deduce that \[\mathbb{P}[\Delta_{\epsilon}]\leq\sum_{\ell=(K+1)N^{\epsilon/4}/2}^{\infty} \frac{1}{\ell!}\leq\frac{e}{((K+1)N^{\epsilon/4}/2)!}\leq e^{-CN^{\epsilon} \ln N}\] where the last step follows from Stirling's formula. Then we outline the following sampling procedure that will be used in both Theorem 1.28 and Theorem 1.32. We however use different notions of admissible labels. **Sampling 6.2**.: _We outline a procedure to sample and truncate the noisy matrix \(A_{N}\)._ _First, we introduce a label \(L=(L_{ij})\), \(i-K\leq j\leq i\leq N\), that has independent entries. For each site \((i,j)\), we assume that \(L_{ij}=T\) with probability \(P:=\mathbb{P}(|\xi_{ij}|<(K+1)^{\frac{1}{\alpha}}N^{\frac{1-\epsilon/10}{ \alpha}})\) and \(L_{ij}=F\) with probability \(1-P\)._ _Then thanks to Lemma 6.1, with probability at least \(1-e^{-cN^{\epsilon}\ln N}\), \(L\) has no more than \((K+1)N^{\frac{\epsilon}{4}}\) elements with label \(F\). 
We call such a label \(L\)**admissible**._ _Moreover, we say a label \(L\) is **separably admissible** if \(L_{ij}=F\), \(i\leq j\) implies that the label \(i\notin[1,N^{1-0.5\epsilon}]\cup[N-N^{1-0.5\epsilon},N]\), and that for any other \((k\leq l)\) such that \(L_{kl}=F\) one must have \(|i-k|>N^{1-0.5\epsilon}\). Then the computation in (5.26) shows that_ \[\mathbb{P}(L\text{ is {separably admissible}})\geq 1-N^{-0.48\epsilon}.\] _Let \(\xi\) be a random variable with symmetric law satisfying (1.13). Let \(\xi^{T}\) be the probability distribution of \(\xi\) conditioned to take value in \([-(K+1)^{\frac{1}{\alpha}}N^{\frac{1-\epsilon/10}{\alpha}},(K+1)^{\frac{1}{ \alpha}}N^{\frac{1-\epsilon/10}{\alpha}}]\), and let \(\xi^{F}\) be the probability distribution of \(\xi\) conditioned to take value in \(\mathbb{R}\backslash[-(K+1)^{\frac{1}{\alpha}}N^{\frac{1-\epsilon/10}{\alpha}},(K+1)^{\frac{1}{\alpha}}N^{\frac{1-\epsilon/10}{\alpha}}]\)._ _For any given label \(L\), we sample the \(N\times N\) matrix \(A_{N}(L)\) as follows. For any \(i-K\leq j\leq i\), if \(L_{ij}=T\) we sample \(A_{ij}\) from the law \(\xi^{T}\), and if \(L_{ij}=F\) we sample \(A_{ij}\) from the law \(\xi^{F}\). All these \(A_{ij}\)s are sampled independently. For \(i-K>j\) we set \(A_{ij}=0\). Then extend by symmetry via \(A_{ij}=A_{ji}\) for all \(i<j\) to define the whole matrix \(A_{N}\). It is not hard to check that the matrix obtained by first sampling \(L\), then sampling \(A_{N}\) from \(L\) has the same law as the original matrix \(A_{N}\) from Assumption 1.8. To stress the dependence, we use the symbol \(A_{N}(L)\) to mean a matrix \(A_{N}\) sampled from \(L\) via this procedure._ _Then we define the truncation procedure. For any label \(L\), let \(\mathbf{T}^{L}\) be the subset of \([1,N]\) consisting of indices \(i\) such that \(L_{ik}=F\) or \(L_{ki}=F\) for some \(k\in[1,N]\). Denote by \(A_{N}^{(\mathbf{T}^{L})}\) the matrix obtained by removing the rows and columns of \(A_{N}(L)\) with indices in \(\mathbf{T}^{L}\), and denote by \(H_{N}^{\infty,(\mathbf{T}^{L})}\) the matrix obtained by removing from \(H_{N}^{\infty}\) the rows and columns with indices in \(\mathbf{T}^{L}\). Note that \(A_{N}^{(\mathbf{T}^{L})}\) is independent of \(\mathbf{T}^{L}\)._ _In the following we will first prove a local law for \(H_{N}^{\infty,(\mathbf{T}^{L})}+A_{N}^{(\mathbf{T}^{L})}\) for any (separably) admissible label \(L\). Then we show how this implies a local law for \(H_{N}^{\infty}+A_{N}\)._ **Lemma 6.3**.: _for any **separably admissible** label \(L\), denote by \(m^{(\mathbf{T}^{L})}\) the trace of the Green function of \(H_{N}^{\infty,(\mathbf{T}^{L})}+A_{N}^{(\mathbf{T}^{L})}\), and denote by \(m^{\infty,(\mathbf{T}^{L})}\) that of \(H_{N}^{\infty,(\mathbf{T}^{L})}\). Then we can find positive constants \(C\), \(\nu\) depending on \(\kappa,\epsilon\) such that_ \[\mathbb{P}\left(\sup_{z\in\mathcal{S}(\epsilon,\kappa)}|m^{(\mathbf{T}^{L})}( z)-m^{\infty,(\mathbf{T}^{L})}(z)|\geq CN^{-\frac{\epsilon}{40}}\right)\leq e^{- \nu\log N^{\log\log N}} \tag{6.2}\] _and the constants can be chosen uniformly over all admissible labels \(L\)._ Proof.: We verify that the assumptions in Theorem 3.3 are satisfied by both \(H_{N}^{\infty,(\mathbf{T}^{L})}\) and \(A_{N}^{(\mathbf{T}^{L})}\). 
First, since \(L\) is **separably admissible**, if we denote the elements of \(\mathbf{T}^{L}\) by \(\{k_{1}<k_{2}<\cdots<k_{s}\}\), then we must have \(k_{1}>N^{1-0.5\epsilon}\), and \(k_{r}-k_{r-1}>N^{1-0.5\epsilon}\) for each \(r=2,\cdots,s\). Writing out the matrix form of the 1-d Laplacian, one can see that \(H_{N}^{\infty,(\mathbf{T}^{L})}\) is a block diagonal matrix with block sizes \(k_{1}-1,k_{2}-k_{1}-1,\cdots,k_{s}-k_{s-1}-1\) and each block has the form \(H_{k_{r}-k_{r-1}-1}^{\infty}\). Thanks to this block-diagonal form, when computing the Green function of \(H_{N}^{\infty,(\mathbf{T}^{L})}\) we need only compute the Green function of each block, which is already done in Section 2.1. Now each block has size at least \(N^{1-0.5\epsilon}\) and \(\eta>N^{-1+\epsilon}\), so that Proposition 1.1 is applicable and leads to \[\sup_{z\in\mathcal{S}}\max_{i,j}\left|G_{ij}^{\infty,(\mathbf{T}^{L})}(z)\right|\leq C, \tag{6.3}\] where \(C\) is uniform over any separably admissible label \(L\). We have verified that \(H_{N}^{\infty,(\mathbf{T}^{L})}\) satisfies the assumptions of Theorem 3.3. For \(A_{ij}\), since \(\xi_{ij}\) is assumed to have a symmetric law, we have that \(\mathbb{E}[\xi^{T}]=0\). To compute higher moments we use Lemma B.1, which implies that for some slowly varying function \(L_{0}\), \[\mathbb{E}[|N^{-\frac{1}{\alpha}}\xi^{T}|^{p}]\leq L_{0}(N^{\frac{1-\epsilon/10}{\alpha}})C_{\alpha}\frac{C}{Nq^{p-\alpha}},\quad p\geq 2,\] where \(q=N^{\frac{\epsilon}{100\alpha}}\). Since \(L_{0}(x)\) is slowly varying, we may assume \(L_{0}(N^{\frac{1-\epsilon/10}{\alpha}})\leq CN^{10^{-3}\epsilon}\), then \[\mathbb{E}[|N^{-\frac{1}{\alpha}}\xi^{T}|^{p}]\leq\frac{C}{N^{1-10^{-3}\epsilon}q^{p-\alpha}}. \tag{6.4}\] Then the result follows immediately from Theorem 3.3, noting that the square matrix \(H_{N}^{\infty,(\mathbf{T}^{L})}+A_{N}^{(\mathbf{T}^{L})}\) has size at least \(N-(2K+1)N^{\frac{\epsilon}{2}}\sim N\) for any separably admissible \(L\). Finally we show that \(m^{(\mathbf{T}^{L})}\) is very close to \(m\) for any admissible \(L\), i.e. the error induced by truncation is negligible. Inequalities concerning the trace of the row and column removed Green function can be found for example in [19], Lemma 4.2 or [14]. The formula is as follows: given any \(N\times N\) matrix \(H\), for any index set \(\mathbf{T}=\{k_{1},\cdots,k_{s}\}\subset[1,N]\), let \(H^{(\mathbf{T})}\) denote the matrix \(H\) with the rows and columns labelled \(k_{1},\cdots,k_{s}\) removed, and let \(G^{(\mathbf{T})}\) denote the resolvent matrix of \(H^{(\mathbf{T})}\). Then for any distinct indices \(i,j,k\) that are not in \(\mathbf{T}\), we have \[G^{(\mathbf{T})}_{ij}-G^{(k\mathbf{T})}_{ij}=G^{(\mathbf{T})}_{ik}G^{(\mathbf{T})}_{kj}(G^{(\mathbf{T})}_{kk})^{-1} \tag{6.5}\] where \(k\mathbf{T}\) means adjoining \(k\) to \(\mathbf{T}\). Denote by \(m^{(\mathbf{T})}\) the trace of \(G^{(\mathbf{T})}\), then \[m^{(\mathbf{T})}-m^{(\mathbf{kT})}=\frac{1}{N}\sum_{i}\frac{G^{(\mathbf{T})}_{ik}G^{(\mathbf{T})}_{ki}}{G^{(\mathbf{T})}_{kk}}=\frac{1}{N}\frac{[(G^{(\mathbf{T})})^{2}]_{kk}}{G^{(\mathbf{T})}_{kk}}. \tag{6.6}\] Thanks to the elementary identity \(|(\lambda-z)^{-2}|=\eta^{-1}\Im[(\lambda-z)^{-1}]\), \(\lambda\in\mathbb{R}\), we deduce that \(|[(G^{(\mathbf{T})})^{2}]_{kk}|\leq\frac{\Im G^{(\mathbf{T})}_{kk}}{\eta}\), and we conclude with \[|m^{(\mathbf{T})}-m^{(\mathbf{kT})}|\leq\frac{1}{N\eta}, \tag{6.7}\] and more generally \[|m-m^{(\mathbf{T})}|\leq\frac{|\mathbf{T}|}{N\eta}.
\tag{6.8}\] Now we are ready to give a proof of Theorem 1.28. Proof of Theorem 1.28: the first part.: We first sample the label \(L\) following the rule in Sampling 6.2. Then \[\mathbb{P}(L\text{ is separably admissible })\geq 1-N^{-c_{*}\epsilon}. \tag{6.9}\] By Lemma 6.3, for any separably admissible \(L\), \[\mathbb{P}\left(|m^{(\mathbf{T}^{L})}-m^{\infty,(\mathbf{T}^{L})}|\geq CN^{- \frac{\epsilon}{40}}\right)\leq e^{-\nu\log N^{\log\log N}}, \tag{6.10}\] where the constants \(C\) and \(\nu\) are independent of the separably admissible label \(L\). Combined with (6.8) applied to both \(m\) and \(m^{\infty}\), setting \(\mathbf{T}:=\mathbf{T}^{L}\), we deduce after taking expectation over all separably admissible \(L\), that \[\mathbb{P}\left(|m-m^{\infty}|\geq CN^{-\frac{\epsilon}{40}}+\frac{2(2K+1)N^{ \frac{\epsilon}{2}}}{N\eta}\mid L\text{ separably admissible}\right)\leq e^{-\nu \log N^{\log\log N}}, \tag{6.11}\] for some constant \(C\) depending only on \(\alpha\), \(\kappa\) and \(\epsilon\), where we use the fact that for any separably admissible \(L\), \(|\mathbf{T}^{L}|\leq(2K+1)N^{\frac{\epsilon}{2}}\). Combining with (6.9) and the fact that \(\eta>N^{-1+\epsilon}\), this completes the proof of (1.28) in the first part of Theorem 1.28. Finally replacing \(m^{\infty}(z)\) by \(m^{as}(z)\) leads to a vanishing error thanks to Lemma 2.2. ### Wigner matrix with banded perturbation In this section we prove Theorem 1.32. Before that, we need an extension of Theorem 3.3: **Theorem 6.4**.: _Let \(\overline{W}_{N}\) be a Wigner matrix satisfying the definitions in Theorem 1.32. Let \(\overline{A}_{N}\) be an \(N\times N\) real symmetric random matrix with independent elements \(\overline{A}_{ij}\) for all \(|i-j|\leq K\) and zero otherwise. Assume that we can find some \(\delta>1\) and \(\gamma>0\) such that_ \[\mathbb{E}[|A_{ij}|]\leq\frac{C}{N^{1+\delta}},\quad\mathbb{E}[|A_{ij}|^{p}] \leq\frac{C}{N^{\gamma}q^{p-\alpha}},\text{ for each }p\geq 2,\] _where \(q\) is \(N\)-dependent and \(q>>\log N^{\log\log N}\). Assume that \(\overline{W}_{N}\) and \(\overline{A}_{N}\) are independent._ _Then for any \(\epsilon>0\) we can find positive constants \(C\) and \(\nu\) depending on \(\epsilon\) such that_ \[\mathbb{P}\left(\max_{\begin{subarray}{c}z\in\mathcal{S}(\epsilon)\\ \eta\geq N^{-\gamma+0.5\epsilon}\end{subarray}}\max_{i,j}|\overline{G}_{ij}-m^ {sc}\delta_{ij}|(z)\geq CN^{-\frac{\epsilon}{40}}\right)\leq e^{-\nu\log N^{ \log\log N}}, \tag{6.12}\] _where \(\overline{G}\) is the Green function of \(\overline{W}_{N}+\overline{A_{N}}\) and \(\delta_{ij}\) satisfies \(\delta_{ij}=1\) if \(i=j\)._ Proof.: Let \(G^{\overline{W}}\) denote the Green function of \(\overline{W}_{N}\). Then \(\overline{W}_{N}\) satisfies the following local law [20], where \(m^{sc}(z)\) is the Stieltjes transform of semicircle law: \[\mathbb{P}\left(\sup_{z\in\mathcal{S}(\epsilon)}\sup_{1\leq i,j\leq N}|G^{ \overline{W}}_{ij}-\delta_{ij}m^{sc}(z)|\geq CN^{-\frac{\epsilon}{40}}\right) \leq e^{-\nu\log N^{\log\log N}}. \tag{6.13}\] Since \(\overline{W}_{N}\) and \(\overline{A}_{N}\) are independent, we may condition on this event with probability \(1-e^{-\nu\log N^{\log\log N}}\) (so that \(\overline{W}_{N}\) is now a deterministic matrix and satisfies the entry-wise Green function upper bound (1.7)) and use Theorem 3.3 to complete the proof. We now give the proof of the first part of Theorem 1.32. Proof of Theorem 1.32, the first part.: We follow the sampling procedure in Sampling 6.2. 
With probability at least \(1-e^{-N^{\epsilon}\log N}\), the sampled label \(L\) is **admissible**. Let \(W_{N}^{(\mathbf{T}^{L})}\) denote the matrix \(W_{N}\) with rows and columns indexed by indices in \(\mathbf{T}^{L}\) removed. Since \(W_{N}\) is a Wigner matrix, \(W_{N}^{(\mathbf{T}^{L})}\) is again a Wigner matrix with size \(N-|\mathbf{T}^{L}|\). Therefore we can apply Theorem 6.4 to derive a local law for \(W_{N}^{(\mathbf{T}^{L})}+A^{(\mathbf{T}^{L})}\), with the constants uniform over all choices of **admissible** label \(L\). Then we use (6.8) to complete the proof of the first part of Theorem 1.32. See next section for the proof of the second part. ### Trace of Green function under different scaling Now we prove the second part of Theorem 1.28 and Theorem 1.32, i.e. when we have an \(N^{\sigma}\) scaling in front of \(A_{ij}\). We begin with some moment computations and use again Lemma B.1. We choose \(q=N^{\frac{\epsilon}{10\alpha}}\) and choose \(x=N^{\frac{1}{\alpha}}N^{-\sigma}q^{-1}\), then for any \(p\geq 2\) : \[\mathbb{E}[|\xi_{ij}|^{p}1_{|\xi_{ij}|\leq x}]\leq C_{\alpha}L_{0}(x)N^{\frac {p}{\alpha}}\frac{1}{Nq^{p-\alpha}N^{\sigma(p-\alpha)}}.\] where \(C_{\alpha}=\frac{2}{2-\alpha}.\) Then \[\mathbb{E}[|N^{-\frac{1}{\alpha}}N^{\sigma}\xi_{ij}|^{p}1_{|\xi_{ij}|\leq x}] \leq C_{\alpha}L_{0}(x)\frac{1}{N^{1-\sigma\alpha}q^{p-\alpha}}.\] Since \(L_{0}\) is a slow-varying function, we can assume that \(L_{0}(x)\leq C_{\alpha}N^{10^{-3}\epsilon}\). Recalling the definition of \(A_{N}\), we can rewrite the last expression as \[\mathbb{E}[|N^{\sigma}A_{ij}|^{p}1_{N^{\sigma}|A_{ij}|\leq q^{-1}}]\leq C_{ \alpha}\frac{1}{N^{1-\sigma\alpha-10^{-3}\epsilon}q^{p-\alpha}}. \tag{6.14}\] We also need to prove the following analogue of Lemma 6.1: **Lemma 6.5**.: _Assuming that Assumption 1.8 is satisfied. Let \(\Delta_{\epsilon}^{\sigma}\) denote the event that there are at least \(2(K+1)N^{\sigma\alpha+\frac{\epsilon}{4}}\) elements of the matrix \(N^{\sigma}A_{N}\) that have absolute value larger than \((K+1)^{\frac{1}{\alpha}}N^{-\frac{\epsilon}{10\alpha}}\). 
Then \(\mathbb{P}[\Delta_{\epsilon}^{\sigma}]\leq e^{-CN^{\epsilon}\ln N}\) for some universal constant \(C>0\)._ Proof.: We work as in Lemma 6.1: \[\begin{split}\mathbb{P}[\Delta_{\epsilon}^{\sigma}]& \leq\sum_{\ell=(K+1)N^{\frac{\epsilon}{4}+\sigma\alpha}}^{(K+1)N}\binom{(K+1)N }{\ell}\mathbb{P}[|\xi_{ij}|\geq(K+1)^{\frac{1}{\alpha}}N^{\frac{1}{\alpha}- \sigma-\frac{\epsilon}{10\alpha}}]^{\ell}\\ &\leq\sum_{\ell=(K+1)N^{\frac{\epsilon}{4}+\sigma\alpha}}^{(K+1)N }\binom{(K+1)N}{\ell}((K+1)N)^{-\ell}N^{(\frac{\epsilon}{10}+\sigma\alpha)\ell}.\end{split} \tag{6.15}\] By a simple combinatorial estimate, \[\mathbb{P}[\Delta_{\epsilon}]\leq(K+1)\sum_{\ell=((K+1)N^{\frac{\epsilon}{4}+ \sigma\alpha}}^{(K+1)N}\frac{1}{\ell!}N^{\frac{\epsilon}{10}\ell+\sigma\alpha \ell}.\] At this point we expand the factorial \(\ell!\) and see that, arranging the \(\ell\) terms in the product from large to small, for all the terms that are larger than \(N^{\frac{\epsilon}{10}+\sigma\alpha}\), their product is at least \[N^{(\frac{\epsilon}{6}+\sigma\alpha)(\ell-N^{\frac{\epsilon}{10}+\sigma\alpha })},\] and, observing that \(N^{\frac{\epsilon}{10}+\sigma\alpha}\leq N^{-\frac{3\epsilon}{20}}\ell\), when \(N\) large we have \[\frac{N^{(\frac{\epsilon}{10}+\sigma\alpha)\ell}}{N^{(\frac{\epsilon}{6}+ \sigma\alpha)(\ell-N^{\frac{\epsilon}{10}+\sigma\alpha})}}\leq\frac{N^{N^{- \frac{3\epsilon}{20}}(\frac{\epsilon}{6}+\sigma\alpha)\ell}}{N^{\frac{\epsilon }{15}\ell}}\leq 1.\] Thus we conclude with \[\mathbb{P}[\Delta_{\epsilon}^{\sigma}]\leq\sum_{\ell=(K+1)N^{\frac{\epsilon}{1 0}}}^{\infty}\frac{1}{\ell!}\leq\frac{e}{((K+1)N^{\frac{\epsilon}{10}})!}\leq e ^{-CN^{\epsilon}\ln N}\] where the last step follows from Stirling's formula. Now we are ready to prove the second part of Theorem 1.28. Proof of Theorem 1.28, the second part.: We define as in Sampling 6.2 a sampling procedure as follows: first sample a label \(L\), independently on each entry \((i,j)\), via the following rule: \(L_{ij}=T\) with probability \(P^{\sigma}:=\mathbb{P}(|\xi_{ij}|<(K+1)^{\frac{1}{\alpha}}N^{\frac{1-\epsilon/ 10}{\alpha}-\sigma})\), and \(L_{ij}=F\) with probability \(1-P^{\sigma}\). Then thanks to Lemma 6.5, with probability at least \(1-e^{-CN^{\epsilon}\ln N}\), \(L\) has at most \((K+1)N^{\frac{\epsilon}{4}+\sigma\alpha}\) elements with label \(F\), and we call such label \(L\) an **admissible** label. Meanwhile, by the computation (5.26), we see that with probability at least \(1-N^{-0.48\epsilon}\), any label \((i\leq j)\) such that \(L_{ij}=F\) must satisfy \(i\notin[1,N^{1-2\sigma\alpha-0.5\epsilon};]\cup[N-N^{1-2\sigma\alpha-0.5 \epsilon};N]\) and that for any other \((k\leq l)\) with \(L_{kl}=F\) necessarily we have \(|i-k|\geq N^{1-2\sigma\alpha-0.5\epsilon}\), and we call such label \(L\)**separably admissible**. Let \(\xi\) be a probability distribution with a symmetric law that satisfies (1.13). Let \(\xi_{\sigma}^{T}\) be the probability distribution of \(\xi\) conditioned to take value in \([-(K+1)^{\frac{1}{\alpha}}N^{\frac{1-\epsilon/10}{\alpha}-\sigma},(K+1)^{\frac {1}{\alpha}}N^{\frac{1-\epsilon/10}{\alpha}-\sigma}]\), and let \(\xi_{\sigma}^{F}\) be the probability distribution of \(\xi\) conditioned to take value in \(\mathbb{R}\setminus[-(K+1)^{\frac{1}{\alpha}}N^{\frac{1-\epsilon/10}{\alpha}- \sigma},(K+1)^{\frac{1}{\alpha}}N^{\frac{1-\epsilon/10}{\alpha}-\sigma}]\). 
For any site \((i,j)\) with \(L_{ij}=T\), we sample \(A_{ij}\) from the law \(\xi_{\sigma}^{T}\); and for any site with \(L_{ij}=F\) we sample \(A_{ij}\) from the law \(\xi_{\sigma}^{F}\). These samplings are independent modulo the symmetry constraint. In the first step we prove, for any **separably admissible** label \(L\), a local law for the Stieltjes transform of \(H_{N}^{\infty,(\mathbf{T}^{L})}+N^{\sigma}A_{N}^{(\mathbf{T}^{L})}\). Since \(L\) is **separably admissible**, we must have that \(H_{N}^{\infty,(\mathbf{T}^{L})}\) is block diagonal with each block of size larger than \(N^{1-2\sigma\alpha-0.5\epsilon}\). Then using \(\eta>N^{-1+2\sigma\alpha+\epsilon}\), we can verify after a slight modification of Proposition 1.1 that \[\sup_{z\in\mathcal{S}(\epsilon,\kappa,2\sigma)}\max_{i,j}\left|G_{ij}^{\infty, (\mathbf{T}^{L})}(z)\right|\leq C, \tag{6.16}\] where \(C\) can be chosen independent of the separably admissible label. Then thanks to the moment estimate (6.14), we can apply Proposition 3.3 to deduce that there are positive constants \(C\), \(\nu\) depending on \(\kappa,\epsilon\) such that \[\mathbb{P}\left(\sup_{z\in\mathcal{S}(\kappa,\epsilon)}|m^{(\mathbf{T}^{L})}(z )-m^{\infty,(\mathbf{T}^{L})}(z)|\geq CN^{\frac{\epsilon}{800}}(\frac{1}{q}+ \sqrt{\frac{1}{N^{1-\sigma\alpha-10^{-3}\epsilon}\eta}})\right)\leq e^{-\nu \log N^{\log\log N}} \tag{6.17}\] and the constants can be chosen uniformly over all admissible labels \(L\), and \(q=N^{\frac{\epsilon}{10\alpha}}\). Assuming that \(\eta>N^{-1+\sigma\alpha+\epsilon}\), we deduce \[\mathbb{P}\left(\sup_{z\in\mathcal{S}(\kappa,\epsilon),\eta\geq N^{-1+\sigma \alpha+\epsilon}}|m^{(\mathbf{T}^{L})}(z)-m^{\infty,(\mathbf{T}^{L})}(z)| \geq CN^{\frac{\epsilon}{40}}\right)\leq e^{-\nu\log N^{\log\log N}}. \tag{6.18}\] In the second step, we use (6.8) to deduce that, similarly as in the previous proof, \[\mathbb{P}\left(|m-m^{\infty}|\geq CN^{-\frac{\epsilon}{40}}+\frac{2(2K+1)N^{ \sigma\alpha+\frac{\epsilon}{2}}}{N\eta}\mid L\text{ separably admissible} \right)\leq e^{-\nu\log N^{\log\log N}}, \tag{6.19}\] Then the proof is finished thanks to our assumption on \(\eta\). Proof of Theorem 1.32, the second part.: We follow the sampling procedure discussed in the previous paragraph (which is different from the sampling procedure in 6.2). Thanks to Lemma 6.5, with probability at least \(1-e^{-N^{\epsilon}\log N}\), the sampled label \(L\) is **admissible**. Let \(W_{N}^{(\mathbf{T}^{L})}\) denote the matrix \(W_{N}\) with rows and columns indexed by indices in \(\mathbf{T}^{L}\) removed. Since \(W_{N}\) is a Wigner matrix, \(W_{N}^{(\mathbf{T}^{L})}\) is again a Wigner matrix with size \(N-|\mathbf{T}^{L}|\). Therefore we can apply Theorem 6.4 to derive a local law for \(W_{N}^{(\mathbf{T}^{L})}+A^{(\mathbf{T}^{L})}\) that holds for all \(\eta=\Im(z)\in[N^{-1+\sigma\alpha+\epsilon},1]\), and the constants are chosen uniform over all **admissible** label \(L\). Then we use (6.8) to complete the proof of the second part of Theorem 1.32. ## Appendix A Concentration inequality via high moments In this appendix we briefly recall the proof of Proposition 3.2 from [21], to show that Proposition 3.2 remains true when \(\Psi_{i}\) and \(a_{i}\) may be dependent, but that \(\Psi_{i}=C_{i}D_{i}\) where \(D_{i}\) is deterministic and \(C_{i}\) is almost surely bounded by some constant. 
Proof sketch.: We compute high moments of the summation: for any positive integer \(r\), \[\mathbb{E}|\sum_{i}\Psi_{i}a_{i}|^{2r}=\mathbb{E}\sum_{i_{1},\cdots,i_{2r}}\bar{\Psi}_{i_{1}}\bar{\Psi}_{i_{2}}\cdots\bar{\Psi}_{i_{r}}\Psi_{i_{r+1}}\cdots\Psi_{i_{2r}}\bar{a}_{i_{1}}\cdots\bar{a}_{i_{r}}a_{i_{r+1}}\cdots a_{i_{2r}}.\] Each index tuple \((i_{1},\cdots,i_{2r})\) in the summation defines a partition of the indices \(1,2,\cdots,2r\) by requiring that \(j\) and \(k\) belong to the same equivalence class if \(i_{j}=i_{k}\). Then we first prescribe a partition \(\Gamma\) of the indices, then sum over labels yielding configuration \(\Gamma\), then sum over all partitions. Since the \(a_{i}\) are centered, each equivalence class has cardinality at least \(2\). Denote by \(r_{s}\) the size of the equivalence class \(s\); then \(r_{1}+\cdots+r_{l}=2r\), where \(l\) is the number of equivalence classes of \(\Gamma\). The contribution from a partition \(\Gamma\) in the sum is bounded in absolute value by \[\sum_{i_{1},\cdots,i_{l}}\prod_{s=1}^{l}\mathbb{E}\left[|\Psi_{i_{s}}|^{r_{s}}|a_{i_{s}}|^{r_{s}}\right].\] Now assume that for each \(i\) we can write \(\Psi_{i}=C_{i}D_{i}\), where \(D_{i}\) are deterministic and \(|C_{i}|\leq C_{0}\) almost surely; then the above sum is bounded by \[(C_{0})^{2r}\sum_{i_{1},\cdots,i_{l}}\prod_{s=1}^{l}|D_{i_{s}}|^{r_{s}}\mathbb{E}[|a_{i_{s}}|^{r_{s}}].\] We have taken the \(C_{0}\) factor out, and we are now reduced to the case of deterministic weights \(D_{i}\), which is the case covered by [21], Lemma 3.8. The proof is completed by rearranging the summation as in [21] and applying a high-moment Markov inequality. ## Appendix B Moments of alpha-stable laws We quote the following very useful result: **Lemma B.1**.: _([11], Lemma 5.8) Let \(A\) be a non-negative random variable such that for any \(x\geq 0\),_ \[\mathbb{P}(A\geq x)=\ell(x)x^{-\alpha},\] (B.1) _for a slowly varying function \(\ell\) and some \(\alpha>0\). Then we can find a slowly varying function \(L_{0}\) such that for any \(k\in\mathbb{N}_{+}\) and any \(x\geq 0\),_ \[\mathbb{E}[A^{k}1_{|A|\leq x}]\leq\begin{cases}L_{0}(x),&\text{ if }k\leq \alpha,\\ L_{0}(x)\frac{k}{k-\alpha}x^{k-\alpha},&\text{ if }k>\alpha.\end{cases}\] (B.2)
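As a quick illustration of (B.2) (our own sanity check, not part of the quoted result), consider the simplest admissible choice \(\ell\equiv 1\) on \([1,\infty)\), i.e. the pure Pareto law \(\mathbb{P}(A\geq x)=x^{-\alpha}\) for \(x\geq 1\), whose density is \(\alpha t^{-\alpha-1}\) on \([1,\infty)\). For \(k>\alpha\) one computes \[\mathbb{E}[A^{k}1_{A\leq x}]=\int_{1}^{x}t^{k}\,\alpha t^{-\alpha-1}\,dt=\frac{\alpha}{k-\alpha}\left(x^{k-\alpha}-1\right)\leq\frac{k}{k-\alpha}x^{k-\alpha},\] so (B.2) holds with the constant function \(L_{0}\equiv 1\); for \(k<\alpha\) the truncated moment stays bounded in \(x\), and for \(k=\alpha\) it grows like \(\alpha\log x\), which is again slowly varying.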
2309.09266
Conservation and breaking of pseudospin symmetry
Pseudospin symmetry (PSS) is a relativistic dynamical symmetry connected with the lower component of the Dirac spinor. Here, we investigate the conservation and breaking of PSS in the single-nucleon resonant states, as an example, using Green's function method that provides a novel way to precisely describe not only the resonant energies and widths but also the spacial density distributions for both narrow and wide resonances. The PSS restoration and breaking are perfectly displayed in the evolution of resonant parameters and density distributions with the potential depth: In the PSS limit, i.e., when the attractive scalar and repulsive vector potentials have the same magnitude but opposite sign, PSS is exactly conserved with strictly the same energy and width between the PS partners as well as identical density distributions of the lower components. As the potential depth increases, the PSS is broken gradually with energy and width splittings and a phase shift in the density distributions.
T. -T. Sun, Z. P. Li, P. Ring
2023-09-17T13:06:13Z
http://arxiv.org/abs/2309.09266v2
# Conservation and breaking of pseudospin symmetry ###### Abstract Pseudospin symmetry (PSS) is a relativistic dynamical symmetry connected with the lower component of the Dirac spinor. Here, we investigate the conservation and breaking of PSS in the single-nucleon resonant states, as an example, using the Green's function method that provides a novel way to precisely describe not only the resonant energies and widths but also the spatial density distributions for both narrow and wide resonances. The PSS restoration and breaking are perfectly displayed in the evolution of resonant parameters and density distributions with the potential depth: In the PSS limit, i.e., when the attractive scalar and repulsive vector potentials have the same magnitude but opposite sign, PSS is exactly conserved with strictly the same energy and width between the PS partners as well as identical density distributions of the lower components. As the potential depth increases, the PSS is broken gradually with energy and width splittings and a phase shift in the density distributions. keywords: Pseudospin symmetry, Conservation and breaking, Resonant states, Green's function method + Footnote †: journal: Physics Letters B ## 1 Introduction Symmetries in the single-particle spectrum of atomic nuclei are of great importance for nuclear structure and have been extensively studied in the literature (see Refs. [1; 2] and references therein). More than 50 years ago, pseudospin symmetry (PSS) was found in atomic nuclei, i.e., the two single-particle states with quantum numbers (\(n,l,j=l+1/2\)) and (\(n-1,l+2,j=l+3/2\)) are quasi-degenerate and can be redefined as the pseudospin (PS) doublets (\(\tilde{n}=n\), \(\tilde{l}=l+1\), \(j=\tilde{l}\pm 1/2\)) [3; 4]. PSS has been used to explain a number of phenomena in nuclear structure, such as deformation [5], superdeformation [6; 7], identical rotational bands [8; 9], magnetic moment [10], quantized alignment [11] and so on. In addition, PSS is also of great concern in atomic and molecular physics and has been discussed for some special atomic and molecular potentials [12; 13; 14]. Since the recognition of PSS in the nuclear spectrum, comprehensive efforts have been made to explore its origin, until Ginocchio pointed out that PSS is a relativistic symmetry of the Dirac Hamiltonian, which is exactly conserved when the scalar and vector potentials satisfy \(\Sigma(r)\equiv S(r)+V(r)=0\) [15]. He also revealed that the pseudo-orbital angular momentum \(\tilde{l}\) is nothing but the orbital angular momentum of the lower component of the Dirac wave function [15], and that there are certain similarities in the relativistic single-nucleon wave functions of the corresponding pseudospin doublets [16]. However, there is no bound state in the PSS limit. Later, Meng _et al._ pointed out a more general condition, \(d\Sigma(r)/dr=0\), which can be approximately satisfied in exotic nuclei with highly diffuse potentials [17; 18], and attributed the onset of the pseudospin symmetry to a competition between the pseudo-centrifugal barrier (PCB) and the pseudospin-orbit (PSO) potential. Afterwards, PSS in nuclear spectra has been studied extensively, such as PSS in deformed nuclei [19; 20; 21; 22; 23; 24; 25], spin symmetry (SS) in anti-nucleon spectra [26; 27; 28; 29; 30], PSS and SS in hypernuclei [31; 32; 33], a perturbative interpretation of SS and PSS [34; 35; 30; 36; 37], and PSS in supersymmetric quantum mechanics [38; 39; 40; 41]. 
PSS in bound states is always broken according to the conservation condition. In contrast, the resonant states, which can be obtained in the PSS limit and in finite-depth potentials, provide us with a better platform for studying PSS. In addition, resonant states play essential roles in exotic nuclei, where the neutron or the proton Fermi surface is very close to the continuum threshold. Here valence nucleons can be easily scattered to single-particle resonant states in the continuum due to pairing correlations, and the couplings between the bound states and the continuum become very important [42; 43; 44; 45]. Therefore, the study of PSS in resonant states has attracted increasing attention in recent years. Until now, there have already been some investigations of PSS in the single-particle resonant states. PSS and SS in nucleon-nucleus and nucleon-nucleon scattering have been investigated in Refs. [46; 47; 48; 49]. In 2004, Zhang _et al._ confirmed that the lower components of the Dirac wave functions for the resonant PS doublets also show similarities [50]. Guo _et al._ investigated the dependence of pseudospin breaking for the resonant states on the shape of the mean-field potential in a Woods-Saxon form [51; 52; 53] as well as on the ratio of neutron and proton numbers [54]. In 2012, great progress was achieved by Lu _et al._ in Ref. [55], where they gave a rigorous justification of PSS in single-particle resonant states and showed that PSS in single-particle resonant states is also exactly conserved under the same conditions as PSS in bound states, i.e., \(\Sigma(r)=0\) or \(d\Sigma(r)/dr=0\) [55]. However, the wave functions of the PS partners in the PSS limit are still absent, and their research is mainly based on a radial square-well potential [56]. Furthermore, a unified description of the conservation and breaking of PSS, i.e., from the PSS limit to cases with finite-depth potentials, is highly desirable. In this work, we will illustrate the exact conservation and breaking of PSS in the nuclear single-particle states in spherical Woods-Saxon potentials. The Green's function (GF) method [57; 58; 59; 60; 61] is employed. This method has been confirmed to be one of the most efficient tools for studying the single-particle resonant states, because it has the following advantages: the bound and resonant single-particle states are treated on the same footing, the energies and widths for all resonances are precisely determined regardless of their widths, and the spatial density distributions are properly described [62; 63; 64; 65]. Besides, this method can describe the resonant states in any potential without any requirement on the potential shape. This paper is organized as follows. The theoretical framework of the Green's function method is briefly presented in Section 2. Section 3 is devoted to the discussion of the numerical results, where the exact conservation and breaking of the PSS in single-particle resonant states are illustrated by analyzing the energy and width splittings and the density distributions. Finally, a summary is given in Section 4. ## 2 Theoretical framework In a relativistic framework, nucleons are Dirac spinors moving in a mean-field potential with an attractive scalar potential \(S(\mathbf{r})\) and a repulsive vector potential \(V(\mathbf{r})\) [66]. 
The Dirac equation for a nucleon reads \[[\mathbf{\alpha}\cdot\mathbf{p}+V(\mathbf{r})+\beta(M+S(\mathbf{r}))]\psi_{n}(\mathbf{r})=\varepsilon_{n}\psi_{n}(\mathbf{r}), \tag{1}\] where \(\mathbf{\alpha}\) and \(\beta\) are the Dirac matrices and \(M\) is the nucleon mass. Based on the Dirac Hamiltonian \(\hat{h}(\mathbf{r})\), a relativistic single-particle Green's function \(\mathcal{G}(\mathbf{r},\mathbf{r}^{\prime};\varepsilon)\) can be constructed, which obeys \[[\varepsilon-\hat{h}(\mathbf{r})]\mathcal{G}(\mathbf{r},\mathbf{r}^{\prime};\varepsilon)=\delta(\mathbf{r}-\mathbf{r}^{\prime}). \tag{2}\] With a complete set of eigenstates \(\psi_{n}(\mathbf{r})\) and eigenvalues \(\varepsilon_{n}\), the Green's function can be simply represented as \[\mathcal{G}(\mathbf{r},\mathbf{r}^{\prime};\varepsilon)=\sum_{n}\frac{\psi_{n}(\mathbf{r})\psi_{n}^{\dagger}(\mathbf{r}^{\prime})}{\varepsilon-\varepsilon_{n}}, \tag{3}\] which is a \(2\times 2\) matrix because of the upper and lower components of the Dirac spinor \(\psi_{n}(\mathbf{r})\). Equation (3) is fully equivalent to Eq. (2). For a spherical nucleus, the Green's function can be expanded as \[\mathcal{G}(\mathbf{r},\mathbf{r}^{\prime};\varepsilon)=\sum_{\kappa m}Y_{jm}^{l}(\theta,\phi)\frac{\mathcal{G}_{\kappa}(r,r^{\prime};\varepsilon)}{rr^{\prime}}Y_{jm}^{l*}(\theta^{\prime},\phi^{\prime}), \tag{4}\] where \(Y_{jm}^{l}(\theta,\phi)\) is the spin spherical harmonic, \(\mathcal{G}_{\kappa}(r,r^{\prime};\varepsilon)\) is the radial Green's function, and the quantum number \(\kappa=(-1)^{j+l+1/2}(j+1/2)\). Eq. (2) can then be reduced to \[\left[\varepsilon-\left(\begin{array}{cc}\Sigma(r)&-\frac{d}{dr}+\frac{\kappa}{r}\\ \frac{d}{dr}+\frac{\kappa}{r}&\Delta(r)-2M\end{array}\right)\right]\mathcal{G}_{\kappa}(r,r^{\prime};\varepsilon)=\delta(r-r^{\prime})I, \tag{5}\] where \(\Sigma(r)\equiv V(r)+S(r)\), \(\Delta(r)\equiv V(r)-S(r)\), and \(I\) is a two-dimensional unit matrix. The radial Green's function \(\mathcal{G}_{\kappa}(r,r^{\prime};\varepsilon)\) can be constructed with the exact asymptotic behaviors of the Dirac wave functions for the bound states and the continuum. For these details, please see Refs. [62; 61]. To study the conservation and breaking of PSS in resonant states, radial Woods-Saxon potentials are considered both for \(\Sigma(r)\) and \(\Delta(r)\), \[\Sigma(r)=\frac{C}{1+e^{(r-R)/a}},\ \ \Delta(r)=\frac{D}{1+e^{(r-R)/a}}. \tag{6}\] Here, the potential depths \(C=-66\) MeV and \(D=650\) MeV, the width \(R=7\) fm, and the diffusivity parameter \(a=0.3\) fm are adopted. ## 3 Results and discussions On the single-particle complex energy plane, bound and resonant states are distributed on the negative real energy axis and in the fourth quadrant, respectively. The energy \(\varepsilon_{n}\) is real for bound states while complex for resonant states, and in the latter case \(\varepsilon_{n}=E-i\Gamma/2\) with \(E\) and \(\Gamma\) being the resonant energy and the width, respectively. As shown in Eq. (3), these eigenvalues are the poles of the Green's function. Thus, in Refs. [63; 64; 65] it has been proposed to determine the single-particle energies \(\varepsilon_{n}\) by searching for the poles of the Green's function. 
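For concreteness, the following minimal Python sketch evaluates the Woods-Saxon potentials of Eq. (6) with the parameters quoted above; it is only an illustration of the inputs (the radial grid mirrors the \(R_{\rm max}=20\) fm, \(dr=0.05\) fm box used below) and is not part of the Green's function calculation itself, whose pole search is described next.

```python
import numpy as np

# Woods-Saxon parameters quoted in the text for Eq. (6).
C, D = -66.0, 650.0   # depths of Sigma(r) and Delta(r) in MeV
R, a = 7.0, 0.3       # radius and diffuseness in fm

def woods_saxon(r, depth):
    """Woods-Saxon form factor: depth / (1 + exp((r - R) / a))."""
    return depth / (1.0 + np.exp((r - R) / a))

def sigma(r, depth=C):
    """Sigma(r) = V(r) + S(r); vanishes identically in the PSS limit (C = 0)."""
    return woods_saxon(r, depth)

def delta(r, depth=D):
    """Delta(r) = V(r) - S(r)."""
    return woods_saxon(r, depth)

# Radial box R_max = 20 fm with step dr = 0.05 fm, as adopted in the text.
r = np.arange(0.05, 20.0 + 1e-9, 0.05)
print(f"Sigma(0.05 fm) = {sigma(r[0]):.2f} MeV, Delta(0.05 fm) = {delta(r[0]):.2f} MeV")
print("max |Sigma| in the PSS limit (C = 0):", np.abs(sigma(r, depth=0.0)).max())
```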
In practice, one can do this by calculating the integral function of the Green's function \(G_{\kappa}(\varepsilon)\) for each partial wave \(\kappa\) at different energies \(\varepsilon\) [65] \[G_{\kappa}(\varepsilon)=\int dr\left(|\mathcal{G}_{\kappa}^{(11)}(r,r;\varepsilon)|+|\mathcal{G}_{\kappa}^{(22)}(r,r;\varepsilon)|\right), \tag{7}\] where \(|\mathcal{G}_{\kappa}^{(11)}(r,r;\varepsilon)|\) and \(|\mathcal{G}_{\kappa}^{(22)}(r,r;\varepsilon)|\) are the moduli of the Green's functions for the "11" and "22" matrix elements, respectively. To search for the bound and resonant states, Green's functions in a wide energy range are calculated by scanning the single-particle energy \(\varepsilon\). For the bound states, the energies \(\varepsilon\) are taken along the negative real energy axis. For the resonant states, the energies are complex, \(\varepsilon=\varepsilon_{r}+i\varepsilon_{i}\), and are scanned in the fourth quadrant of the complex energy plane, both along the real and imaginary energy axes. In Fig. 1, the resonant parameters of the state \(3d_{5/2}\) are exactly determined to be \(E=2.2728\) MeV and \(\Gamma/2=1.9949\) MeV by searching for the poles of the Green's functions in the fourth quadrant of the complex energy plane \(\varepsilon\), where a sharp peak is observed at \(\varepsilon_{r}=2.2728\) MeV and \(\varepsilon_{i}=-1.9949\) MeV. Calculations are done with an energy step of 0.1 keV for the integral functions \(G_{\kappa}(\varepsilon)\) in a coordinate space with size \(R_{\rm max}=20\) fm and a step of \(dr=0.05\) fm. This approach has been shown to be highly effective for all resonant states regardless of whether they are wide or narrow [64; 65]. Besides, with the Green's function method, the density distributions in coordinate space can also be examined by exploring \(\rho_{\kappa}(r,\varepsilon)\) defined at the energy \(\varepsilon=E\), \[\rho_{\kappa}(r,\varepsilon)=-\frac{1}{4\pi r^{2}}\frac{1}{\pi}{\rm Im}\left[\mathcal{G}_{\kappa}^{(11)}(r,r;E)+\mathcal{G}_{\kappa}^{(22)}(r,r;E)\right],\] where the terms \(\mathcal{G}_{\kappa}^{(11)}(r,r;E)\) and \(\mathcal{G}_{\kappa}^{(22)}(r,r;E)\) are related to the upper and lower components of the Dirac wave functions, respectively (c.f. Eq. (3)). In Fig. 2, the density distributions for the resonant state \(3d_{5/2}\) are also plotted, with the red and blue lines showing the contributions from the upper and lower components. To better display the density distribution, here and hereafter, we adjust the highest peak of \(\rho_{\kappa}(r,\varepsilon)\) to be 1.0 fm\({}^{-1}\) MeV\({}^{-1}\) and ensure that the relative sizes of the different components remain unchanged. Since \(3d_{5/2}\) is a low-lying resonant state, the density distribution of the lower component decreases to zero rapidly, while a very slight oscillation can be observed in the upper component. In the following, PSS in resonant states will be studied with the Green's function method. In Fig. 3, we show the solutions for different potential depths on the complex energy plane for the PS doublets with pseudospin angular momentum \(\tilde{l}=3\), i.e., \(d_{5/2}\) with \(\kappa=-3\) and \(g_{7/2}\) with \(\kappa=4\). In the PSS limit, i.e., for the potential depth \(C=0\), all the roots are located in the lower half-plane, and there are no bound states. Three pairs of resonant PS doublets with exactly the same energy and width are obtained, indicating the exact conservation of PSS in resonant states. 
Besides, one single intruder state \(1d_{5/2}\) appears near the continuum threshold. With finite potential depths, one finds the breaking of the PSS with obvious energy and width splittings between the PS partners. In more detail, for most PS partners, \(g_{7/2}\) with pseudospin \(\tilde{s}=+1/2\) has a lower energy and a smaller width compared with the PS partner \(d_{5/2}\) with \(\tilde{s}=-1/2\) due to the higher PCB potential of \(g_{7/2}\). One exception is the pair of PS partners (\(3d_{5/2},2g_{7/2}\)) obtained for \(C=-66\) MeV with the energies \(\varepsilon(3d_{5/2})=2.2728-i1.9949\) MeV and \(\varepsilon(2g_{7/2})=2.5422-i0.1019\) MeV, respectively. Meanwhile, the PS partners move down and some resonant PS partners evolve into bound states. For PS doublets with other values of \(\tilde{l}\), similar behaviors concerning the exact conservation and the breaking of the PSS can be observed. To study the conservation and breaking of PSS, the similarities of the lower components of the Dirac wave functions for the PS doublets are also examined. In Fig. 4, the spatial density distributions \(\rho_{\kappa}(r,\varepsilon)\) of the PS partners \(3d_{5/2}\) and \(2g_{7/2}\) are plotted for different values of the potential depth \(C\). The left and right columns present the contributions from the upper and lower components of the Dirac wave functions. In the PSS limit, i.e., for \(C=0\), the density distributions of the PS partners are identical for the lower component while they differ by one node for the upper component. This provides direct evidence for the exact conservation of PSS in resonant states and also certifies that PSS is a symmetry of the Dirac Hamiltonian related to the lower component of the Dirac spinor. For potentials with finite depth, the lower components of the density distributions of the PS partners are no longer identical, but they still show a strong similarity. Their difference is manifested as an obvious phase shift, which increases as the potential depth grows. For example, when the potential depth \(C=-45\) MeV, we observe a phase shift of almost one \(\pi\) between the PS partners for the density distributions outside the potential, in the area of \(r>8.0\) fm. Possibly, this phase shift may be extracted and confirmed by low-energy neutron-nucleus scattering experiments.

Figure 1: (Color online) The single-particle resonant state \(3d_{5/2}\) located in the fourth quadrant of the complex energy plane \(\varepsilon=\varepsilon_{r}+i\varepsilon_{i}\) determined by searching for the poles of the Green's function \(G_{\kappa}(\varepsilon)\).

Figure 2: (Color online) Density distributions \(\rho_{\kappa}(r,\varepsilon)\) of the resonant state \(3d_{5/2}\) with the contributions from the upper and lower components of the Dirac wave functions.

Figure 3: (Color online) The poles of Green's functions on the complex energy plane in the Woods-Saxon potentials with depth \(C=0\) (solid symbols), \(C=-33\) MeV (half-filled symbols) and \(C=-66\) MeV (empty symbols) for the PS partners \(d_{5/2}\) (square) and \(g_{7/2}\) (diamond).

## 4 Summary

In summary, the conservation and breaking of PSS in nuclear single-particle states are investigated within a relativistic framework by exploring the poles of the Green's function in spherical Woods-Saxon potentials of different depths. The Green's function method allows a precise determination of the energies and the widths for all the resonances and a proper description of the spatial density distributions. 
Therefore, it provides an excellent platform for studying the breaking and the restoration of PSS. In the PSS limit, i.e., for \(\Sigma(r)\equiv V(r)+S(r)=0\), the PSS in resonant states is confirmed to be strictly conserved, with exactly the same energy and width for the PS partners. Besides, we also find identical density distributions of the lower components for the first time. This provides direct evidence that the PSS is a relativistic dynamical symmetry connected with the lower component of the Dirac spinor. For potentials with finite depth, PSS is broken, accompanied by an apparent splitting of the energy and the width for the PS partners and a phase shift between the spatial density distributions of the lower components. ## Acknowledgments This work was partly supported by the National Natural Science Foundation of China (No. U2032141 and No. 12375126), the Open Project of Guangxi Key Laboratory of Nuclear Physics and Nuclear Technology (No. NLK2022-02), the Central Government Guidance Funds for Local Scientific and Technological Development, China (No. Guike ZY22096024), the Fundamental Research Funds for the Central Universities, and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC-2094-390783311, ORIGINS.
2309.06155
Representation theorems for functions of vanishing mean oscillation
As a significant application of the duality theory between real Hardy spaces $H^1(\mathbb{R}^n)$ and $\mathrm{BMO}(\mathbb{R}^n)$, Fefferman and Stein developed a representation theorem for $\mathrm{BMO}(\mathbb{R}^n)$ by utilizing the Riesz transforms $(n\geq 1)$. L. Carleson provided a constructive proof for the case $n=1$. In this article, we propose a representation theorem for $\mathrm{VMO}(\mathbb{S})$ using Carleson's construction and demonstrate a representation theorem for $\mathrm{VMO}(\mathbb{R}^n)$ through an iterative process. Additionally, we provide a brand-new characterization of $\mathrm{VMO}$ as an application of our results.
Zheng-yi Lu, Fei Tao, Yaosong Yang
2023-09-12T11:58:17Z
http://arxiv.org/abs/2309.06155v1
# Representation theorems for functions of vanishing mean oscillation ###### Abstract. As a significant application of the duality theory between real Hardy spaces \(H^{1}(\mathbb{R}^{n})\) and \(\operatorname{BMO}(\mathbb{R}^{n})\), Fefferman and Stein developed a representation theorem for \(\operatorname{BMO}(\mathbb{R}^{n})\) by utilizing the Riesz transforms (\(n\geq 1\)). L. Carleson provided a constructive proof for the case \(n=1\). In this article, we propose a representation theorem for \(\operatorname{VMO}(\mathbb{S})\) using Carleson's construction and demonstrate a representation theorem for \(\operatorname{VMO}(\mathbb{R}^{n})\) through an iterative process. Additionally, we provide a brand-new characterization of \(\operatorname{VMO}\) as an application of our results. Key words and phrases: BMO spaces, \(\operatorname{VMO}\) spaces, Carleson measures, Riesz transforms, representation theorems 2020 Mathematics Subject Classification: 26B40, 30C40, 42B30, 42B35 The first author was supported in part by the NNSF of China (Nos. 11831007 and 12071125) and the China Postdoctoral Science Foundation (Nos. 2022TQ0358 and 2022M723330). Introduction Functions of vanishing mean oscillation (VMO) are those BMO functions with the additional property of having uniformly small mean oscillations over small cubes or intervals. Let \(\mathrm{C}\) denote the space of continuous functions on \(\mathbb{S}\). The space \(H^{\infty}+\mathrm{C}\) is a closed subalgebra of \(L^{\infty}\) (of Lebesgue measure on \(\mathbb{S}\)). The algebra of functions that belong, together with their complex conjugates, to \(H^{\infty}+\mathrm{C}\) is denoted by \(QC\). Sarason proved that \(QC=\mathrm{VMO}\cap L^{\infty}\) [15]. Motivated by influential earlier work of Sarason, Chang and Marshall proved a conjecture by Douglas that every closed algebra between \(H^{\infty}\) and \(L^{\infty}\) actually is a Douglas algebra [4]. Later on, applications arose in connection with parabolic and elliptic PDEs with VMO coefficients (see [10, 12, 13, 14]). One of the key purposes of the present paper is to give representations for \(\mathrm{VMO}(\mathbb{S})\) and \(\mathrm{VMO}(\mathbb{R}^{n})\), respectively. To drop the assumption that \(\varphi\) has compact support in Theorem 1.2 and to avoid using additional hypotheses at infinity, we first consider the compact case \(\mathbb{S}\). Let \(\mu\) be a finite signed measure on \(\Delta\). The _balayage_ or _sweep_ of \(\mu\) is the function \[S\mu(\zeta)=\int_{\Delta}P_{z}(\zeta)d\mu(z),\ \ \zeta=e^{i\theta},\] where \[P_{z}(\zeta)=\frac{1}{2\pi}\frac{1-|z|^{2}}{|1-\bar{z}\zeta|^{2}}=\frac{1}{2\pi}\frac{1-r^{2}}{1-2r\cos(\theta-\varphi)+r^{2}}:=P_{r}(\theta-\varphi),\ z=re^{i\varphi}.\] Fubini's theorem asserts that \(S\mu(\zeta)\) exists almost everywhere and \[\int_{\mathbb{S}}|S\mu(\zeta)||d\zeta|\leq\int_{\Delta}d|\mu|(z):=\left\|\mu\right\|.\] With Carleson's construction, we demonstrate a representation theorem for \(\mathrm{VMO}(\mathbb{S})\) as follows. **Theorem 1.3**.: _Let \(f\in\mathrm{VMO}(\mathbb{S})\). 
Then there exists a vanishing Carleson measure \(\mu\in CM_{0}(\Delta)\) and a continuous function \(g\in C(\mathbb{S})\) such that_ \[f(\zeta)=g(\zeta)+S\mu(\zeta),\ \zeta=e^{i\theta}\in\mathbb{S}, \tag{1.1}\] _where \(C(\mathbb{S})\) is the function space of all continuous functions defined on \(\mathbb{S}\), \(CM_{0}(\Delta)\) denotes vanishing Carleson measures on \(\Delta\)\((\)see Section 2 for precise definition\()\)._ It is natural to ask whether a representation theorem of VMO exists in a higher dimension. Comparing to Fefferman-Stein's decomposition in 1-dimension and applying the Riesz transforms, we obtain a representation theorem for \(\mathrm{VMO}(\mathbb{R}^{n})\). Let \(\mathrm{UC}(\mathbb{R}^{n})\) be the function space of uniformly continuous functions on \(\mathbb{R}^{n}\), and write \(\mathrm{BUC}(\mathbb{R}^{n})\) for \(L^{\infty}(\mathbb{R}^{n})\cap\mathrm{UC}(\mathbb{R}^{n}).\) We have **Theorem 1.4**.: _Assume \(\varphi\in\mathrm{BMO}\left(\mathbb{R}^{n}\right)\), the following conditions are equivalent._ 1. \(\varphi\in\mathrm{VMO}\left(\mathbb{R}^{n}\right)\)_._ 2. _there exists_ \(\varphi_{j}\in\mathrm{BUC}\left(\mathbb{R}^{n}\right)\)_,_ \(j=0,\cdots,n\)_, such that_ \[\varphi(x)=\varphi_{0}(x)+\sum_{j=1}^{n}R_{j}(\varphi_{j})(x).\] _where_ \(R_{j}\) _is the_ \(j\)_-th Riesz transform_ \((\)_see Definition_ 2.2\()\)_._ The paper is structured as follows: in Section 2, we provide some basic definitions and facts. In Section 3, We utilize an iteration argument to show a representation theorem for \(\mathrm{VMO}(\mathbb{S})\). Conversely, we also show that functions with such a representation must have vanishing mean oscillation. Section 4 is devoted to some applications of Theorem 1.3. With various equivalent descriptions of \(\mathrm{VMO}(\mathbb{R}^{n})\), we illustrate a representation theorem for \(\mathrm{VMO}(\mathbb{R}^{n})\) in the final section. ## 2. Preliminaries and lemmas Let \(Q\) be a cube in \(\mathbb{R}^{n}\) with sides parallel to the axes. A locally integrable function \(f\in L^{1}_{loc}\left(\mathbb{R}^{n}\right)\) is said to have _bounded mean oscillation_ (abbreviated to \(\mathrm{BMO}\)) if \[\left\|f\right\|_{*}:=\sup_{|Q|}\frac{1}{|Q|}\int_{Q}|f(x)-f_{Q}|\,dx<\infty,\] where the supremum is taken over all possible cubes of \(\mathbb{R}^{n}\), \(|\cdot|\) denotes the Lebesgue measure. And \(f_{Q}\) is the average of \(f\) over \(Q\), namely, \[f_{Q}=\frac{1}{|Q|}\int_{Q}f(x)dx.\] We write the \(\mathrm{BMO}\)_-norm_ of \(f\) by \(\left\|f\right\|_{*}\). The set of all \(\mathrm{BMO}\) functions on \(\mathbb{R}^{n}\) is denoted by \(\mathrm{BMO}(\mathbb{R}^{n})\). However, \(\|\cdot\|_{*}\) is not a true norm since it is trivial that the \(\mathrm{BMO}\)-norm of any constant function is \(0\). In fact, this is regarded as a Banach space with norm \(\|\cdot\|_{*}\) modulo constants. It is worthwhile to notice that \(f_{Q}\) can be substituted by any constant in the definition of \(\mathrm{BMO}\) (see [6, 7]). Moreover, if \(f\) also satisfies the condition \[\lim_{|Q|\to 0}\frac{1}{|Q|}\int_{Q}|f(x)-f_{Q}|\,dx=0,\] we say \(f\) has _vanishing mean oscillation_ (abbreviated to \(\mathrm{VMO}\)). Similarly, the set of all \(\mathrm{VMO}\) functions on \(\mathbb{R}^{n}\) is denoted by \(\mathrm{VMO}(\mathbb{R}^{n})\). Functions of bounded mean oscillation were first introduced by John and Nirenberg in [11]. 
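To make the definitions above concrete, here is a small numerical sketch (an illustration only; the test functions and the grid are arbitrary choices): it approximates the mean oscillation \(\frac{1}{|I|}\int_{I}|f-f_{I}|\,dx\) over subintervals \(I\) of a fixed length and reports the largest value found, so one can watch the quantity stay of order one for the classical BMO example \(\log x\) while it shrinks with \(|I|\) for a uniformly continuous function.

```python
import numpy as np

def max_mean_oscillation(f_vals, window):
    """Largest discretized value of (1/|I|) * integral_I |f - f_I| dx over
    subintervals I made of `window` consecutive grid points."""
    worst = 0.0
    step = max(window // 4, 1)
    for start in range(0, len(f_vals) - window + 1, step):
        chunk = f_vals[start:start + window]
        worst = max(worst, float(np.mean(np.abs(chunk - chunk.mean()))))
    return worst

x = np.linspace(1e-4, 1.0, 200_000)
f_bmo = np.log(x)    # log x: the classical unbounded BMO function
f_vmo = np.sqrt(x)   # uniformly continuous on [0, 1], hence in VMO

for frac in (0.1, 0.01, 0.001):
    w = max(int(frac * len(x)), 2)
    print(f"|I| ~ {frac}:  osc(log x) = {max_mean_oscillation(f_bmo, w):.3f},"
          f"  osc(sqrt x) = {max_mean_oscillation(f_vmo, w):.4f}")
# The oscillation of sqrt(x) decreases as |I| shrinks (VMO behaviour), while the
# oscillation of log(x) stays of order one near the origin on this grid.
```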
If \(f\in L^{1}(\mathbb{S})\), replacing the cubes with intervals on the unit circle, we can define \(\mathrm{BMO}(\mathbb{S})\) and \(\mathrm{VMO}(\mathbb{S})\) in the same way as before. As recalled in the introduction, \(\mathrm{VMO}\) is a closed subspace of \(\mathrm{BMO}\). Let us recall that a measure \(\mu\) on \(\mathbb{R}^{n+1}_{+}\) is said to be a _Carleson measure_ (we denote by \(\mu\in CM(\mathbb{R}^{n+1}_{+})\)) or to satisfy the Carleson measure condition if there exists a constant \(C>0\) such that \[\sup_{Q}\frac{|\mu|(\widetilde{Q})}{|Q|}<C,\] where \(Q\) is as before, \(|\mu|\) is the total variation of \(\mu\), and \[\widetilde{Q}=\{(x,y)\in\mathbb{R}^{n+1}_{+}:x\in Q,0<y<\ell(Q)\},\] where \(\ell(Q)\) is the side length of \(Q\) (see [1, 2, 17]). A measure \(\mu\) on \(\Delta\) is said to be a _Carleson measure_ (we denote by \(\mu\in CM(\Delta)\)) if \(\widetilde{Q}\) is replaced by the Carleson square \[\widetilde{I}=\{re^{i\theta}:\theta\in I\subset[0,2\pi],\ 1-|I|/2\pi<r<1\},\] where \(I\) is any subinterval of \([0,2\pi]\). Roughly speaking, a Carleson measure on a domain is a measure that does not vanish at the boundary when compared to the surface measure on the boundary. The infimum of the set of constants \(C>0\) for which the Carleson condition holds is known as the _Carleson norm_ of the measure \(\mu\), denoted by \(\left\|\cdot\right\|_{\mathcal{C}}\). Furthermore, if a Carleson measure also satisfies \[\lim_{\left|Q\right|\to 0}\frac{\left|\mu\right|(\widetilde{Q})}{\left|Q\right|}=0\ \ \text{or}\ \ \lim_{\left|I\right|\to 0}\frac{\left|\mu\right|(\widetilde{I})}{\left|I\right|}=0,\] we say \(\mu\) is a _vanishing Carleson measure_, denoted by \(CM_{0}(\mathbb{R}_{+}^{n+1})\) or \(CM_{0}(\Delta)\). Note that nothing essential changes if we replace the cube and the Carleson square with \(\Omega\cap B(x_{0},r)\) in the definition of a Carleson measure, where \(x_{0}\in\partial\Omega\) and \(\Omega=\Delta\) or \(\mathbb{R}_{+}^{n+1}\). Recall that the _balayage_ or _sweep_ of a finite signed measure \(\mu\) is \[S\mu(\zeta)=\int_{\Delta}P_{z}(\zeta)d\mu(z),\ \ \zeta=e^{i\theta}.\] When \(\mu\) is a Carleson measure, more is true: in this case \(S\mu\in\text{BMO}\). **Theorem 2.1** (Garnett, [6]).: _If \(\mu\) is a Carleson measure on \(\Delta\), then \(S\mu\in\text{BMO}(\mathbb{S})\) and_ \[\left\|S\mu\right\|_{*}\leq C\left\|\mu\right\|_{\mathcal{C}}\] _for some universal constant \(C\)._ More remarkably, Theorem 2.1 has a converse. That is to say, any \(f\in\text{BMO}(\mathbb{S})\) has a representation as follows. **Theorem 2.2** ([3, 18]).: _Let \(f\in\text{BMO}(\mathbb{S})\). Then there exists a Carleson measure \(\mu\in CM(\Delta)\) and a function \(g\in L^{\infty}(\mathbb{S})\) such that_ \[f(\zeta)=g(\zeta)+S\mu(\zeta),\ \zeta=e^{i\theta}\in\mathbb{S}.\] **Remark 2.1**.: The theorem above is an immediate consequence of the duality between the real Banach space \(H^{1}\) and \(\text{BMO}\). It should be noted that, conversely, one can easily obtain the duality theorem between \(\text{BMO}\) and \(H^{1}\) from this representation of \(\text{BMO}\). A constructive proof of such a representation was investigated by Carleson in [3] and Fefferman (unpublished, see [6]). 
Precisely, for any given ascending sequence \(\{r_{n}\}\to 1\), there exists \(h_{n}\in L^{\infty}(r_{n}\mathbb{S})\), \(n=1,2,\cdots\) such that \[\mu(z)=\sum_{n=1}^{\infty}\delta_{r_{n}}(r)h_{n}(r_{n}e^{i\varphi}),\ z=re^{i \varphi},\] where \(\delta_{r_{n}}\) is the Dirac function supported on the point \(r_{n}\). Furthermore, we have \(\left\|g\right\|_{\infty}+\sum_{n=1}^{\infty}\left\|h_{n}\right\|_{\infty}\leq C \left\|f\right\|_{*}\) (see page 152 in [5]), and \(C\) is independent on the function \(f\). For \(\varphi\in L^{p}(\mathbb{R}^{n})\), \((1\leq p<\infty)\), the Riesz transforms of \(\varphi\) are defined as follows. **Definition 2.1**.: Assume \(1\leq p<\infty\), \(\varphi\in L^{p}(\mathbb{R}^{n})\), \(j=1,\cdots,n\). We define \(n\) transforms: \[R_{j}(\varphi)(x)=P.V.\int_{\mathbb{R}^{n}}K(x-y)\varphi(y)dy,\] where \[K(x)=\frac{\Gamma\left(\frac{n+1}{2}\right)}{\pi^{\frac{n+1}{2}}}\frac{x_{j}}{ |x|^{n+1}}=c_{n}\frac{x_{j}}{|x|^{n+1}}.\] \(\{R_{j}\}\) is said to be the _the Riesz transform(family)_, \(j=1,\cdots,n\). In the context of harmonic analysis, the Riesz transform family is a natural generalization of the Hilbert transform to Euclidean spaces of dimension \(n\). Given by the convolution of one function with another function that has a singularity at the origin, it is an easy matter to verify that the Riesz transform \(R_{j}\) is a particular type of singular integral operator. **Remark 2.2**.: Let \(B_{r}\) denote the open ball in \(\mathbb{R}^{n}\) with radius \(r\) and centered at the origin. To well define the Riesz transform of \(\varphi\in L^{\infty}\), we have \[R_{j}(\varphi)(x)=P.V.\int_{\mathbb{R}^{n}}\left(K(x-y)-K(-y)\chi_{\mathbb{R}^ {n}\setminus\overline{B}_{1}}(y)\right)\varphi(y)dy.\] Stein and Fefferman established the following theorem, whose \((ii)\) is the well-known Stein-Fefferman's decomposition. **Theorem 2.3** (Theorem 3, [5]).: _The following three conditions on \(\varphi\) are equivalent:_ 1. \(\varphi\in\mathrm{BMO}(\mathbb{R}^{n})\)_._ 2. _there exist_ \(\varphi_{j}\in L^{\infty}(\mathbb{R}^{n})\)_,_ \(j=0,\cdots,n\)_, such that_ (2.1) \[\varphi(x)=\varphi_{0}(x)+\sum_{j=1}^{n}R_{j}(\varphi_{j})(x).\] 3. \(\varphi\) _satisfies_ (2.2) \[\int_{\mathbb{R}^{n}}\frac{|\varphi(x)|}{1+|x|^{n+1}}dx<\infty,\] _and_ \[\sup_{x_{0}\in\mathbb{R}^{n}}\int_{T(x_{0},h)}t|\nabla\varphi(x,t)|^{2}dxdt \leq Ah^{n},\ 0<h<\infty.\] _where_ \[|\nabla\varphi(x,t)|^{2}=\left|\frac{\partial\varphi}{\partial t}\right|^{2}+ \sum_{j=1}^{n}\left|\frac{\partial\varphi}{\partial x_{j}}\right|^{2},\] \[T\left(x_{0},h\right)=\left\{(x,t):0<t<h,|x-x_{0}|<h\right\}.\] **Remark 2.3**.: The function \(\varphi(x,t)\) is the Poisson integral of \(\varphi\). Similar to the \(1\)-dimension case, condition (2.2) suggests the existence of the Poisson integral of \(\varphi\). **Remark 2.4**.: By the way, the original proof by Stein and Fefferman in [5] also indicates that \((ii)\) in Theorem 2.3 satisfies \[\sum_{j=0}^{n}\left\|\varphi_{j}\right\|_{\infty}\leq C\left\|\varphi\right\|_ {*}, \tag{2.3}\] where the constant \(C\) is independent on \(\varphi\). **Theorem 2.4**.: \(R_{j}\) _is a bounded linear operator from \(L^{\infty}(\mathbb{R}^{n})\) to \(\mathrm{BMO}(\mathbb{R}^{n})\)._ Proof.: For the sake of simplicity, the author will not specifically point out the integrals in the sense of principal value. Readers shall identify them. 
Fix \(x_{0}\), set \(Q\) to be a cube in \(\mathbb{R}^{n}\) with center \(x_{0}\), \(Q^{\prime}\) be the concentric cube with triple side length. Denote \(\varphi=\varphi_{1}+\varphi_{2}\), where \(\varphi_{1}=\varphi\chi_{Q^{\prime}},\ \varphi_{2}=\varphi-\varphi\chi_{Q^{\prime}}\). Thus, \[R_{j}(\varphi)=R_{j}(\varphi_{1})+R_{j}(\varphi_{2}).\] Accordingly, we shall estimate \(R_{j}(\varphi_{1})\) and \(R_{j}(\varphi_{2})\) separately. For \(R_{j}(\varphi_{1})\), the integral \(\int_{\mathbb{R}^{n}}K(-y)\chi_{\mathbb{R}^{n}\setminus\overline{B}_{1}}(y) \varphi_{1}(y)dy\) is a constant bounded by \(c\left\|\varphi\right\|_{\infty}\) for some universal constant \(c\), we denote by \(C(\varphi)\). Then \[R_{j}(\varphi_{1})= \int_{\mathbb{R}^{n}}K(x-y)\varphi_{1}(y)dy+C(\varphi).\] and hence \[\frac{1}{|Q|}\int_{Q}\left|R_{j}(\varphi_{1})-\left(R_{j}(\varphi _{1})\right)_{Q}\right|dx\] \[= \frac{1}{|Q|}\int_{Q}\left|\int_{\mathbb{R}^{n}}K(x-y)\varphi_{1 }(y)dy-\left(\int_{\mathbb{R}^{n}}K(x-y)\varphi_{1}(y)dy\right)_{Q}\right|dx \tag{2.4}\] \[\leq \frac{2}{|Q|}\int_{Q}\left|\int_{\mathbb{R}^{n}}K(x-y)\varphi_{1 }(y)dy\right|dx.\] By Holder's inequality, we obtain then \[\frac{1}{|Q|}\int_{Q}\left|\int_{\mathbb{R}^{n}}K(x-y)\varphi_{1 }(y)dy\right|dx \leq\left(\frac{1}{|Q|}\int_{Q}\left|\int_{\mathbb{R}^{n}}K(x-y) \varphi_{1}(y)dy\right|^{2}dx\right)^{\frac{1}{2}}\] \[=\left(\frac{1}{|Q|}\int_{Q}\left|R_{j}(\varphi_{1})-C(\varphi) \right|^{2}dx\right)^{\frac{1}{2}} \tag{2.5}\] \[\leq\left(\frac{2}{|Q|}\int_{Q}\left|R_{j}(\varphi_{1})\right|^{2 }dx\right)^{\frac{1}{2}}+2c\left\|\varphi\right\|_{\infty}.\] Note that \(\varphi_{1}\in L^{2}\), the right-hand side of the above inequality is estimated as follows. \[\left(\int_{Q}\left|R_{j}(\varphi_{1})\right|^{2}dx\right)^{\frac{1}{2}}\leq \left\|R_{j}(\varphi_{1})\right\|_{2}\leq\left\|\varphi_{1}\right\|_{2}\leq \left|Q^{\prime}\right|^{\frac{1}{2}}\left\|\varphi\right\|_{\infty}. \tag{2.6}\] The following fact explains why the second inequality persists, if \(\varphi\in L^{2}\), for the corresponding Riesz transforms, we have \[\sum_{j=1}^{n}\left\|R_{j}(\varphi)\right\|_{2}^{2}=\left\|\varphi\right\|_{2 }^{2}.\] Combined with (2.4), (2.5) and (2.6), we conclude that \[\frac{1}{|Q|}\int_{Q}\left|R_{j}(\varphi_{1})-\left(R_{j}(\varphi_{1})\right)_ {Q}\right|dx\leq 4(c+3^{\frac{n}{2}})\left\|\varphi\right\|_{\infty}. \tag{2.7}\] On the other hand, it should be noticed that \[|R_{j}(\varphi_{2})(x)-R_{j}(\varphi_{2})(x_{0})|\leq c_{n}\int_{ \mathbb{R}^{n}\setminus\overline{Q^{\prime}}}\left|\frac{x_{j}-y_{j}}{|x-y|^{ n+1}}-\frac{x_{0,j}-y_{j}}{|x_{0}-y|^{n+1}}\right||\varphi(y)|dy. \tag{2.8}\] For all \(x\in Q\), \(y\in\mathbb{R}^{n}\setminus\overline{Q^{\prime}}\), it is verified that \(|x-y|>\ell(Q)\), \(|x_{0}-y|>\ell(Q)\). 
Therefore, \[|x_{0}-y| \leq|x-x_{0}|+|x-y|\leq\frac{\sqrt{n}}{2}\ell(Q)+|x-y|\leq\left(1 +\frac{\sqrt{n}}{2}\right)|x-y|,\] \[|x-y| \leq|x-x_{0}|+|x_{0}-y|\leq\frac{\sqrt{n}}{2}\ell(Q)+|x_{0}-y| \leq\left(1+\frac{\sqrt{n}}{2}\right)|x_{0}-y|.\] This of course yields that \(|x-y|\) and \(|x_{0}-y|\) are comparable, thus \[\left|\left|x-y\right|^{n+1}-|x_{0}-y|^{n+1}\right| \leq||x-y|-|x_{0}-y||\sum_{k=1}^{n}|x-y|^{k}|x_{0}-y|^{n-k} \tag{2.9}\] \[\leq C(n)|x-x_{0}||x_{0}-y|^{n}.\] From (2.9) and triangle inequality, we have \[\left|\frac{x_{j}-y_{j}}{|x-y|^{n+1}}-\frac{x_{0,j}-y_{j}}{|x_{0}- y|^{n+1}}\right|\] \[\leq \frac{|x_{j}-x_{0,j}||x_{0}-y|^{n+1}+|x_{0,j}-y_{j}|}{|x_{0}-y|^{ n+1}|x-y|^{n+1}} \tag{2.10}\] \[\leq \frac{C(n)\ell(Q)}{|x_{0}-y|^{n+1}}.\] We shall continue our estimate on (2.8), according to (2.10), we then have \[\left|R_{j}(\varphi_{2})(x)-R_{j}(\varphi_{2})(x_{0})\right| \leq C(n)\ell(Q)\int_{\mathbb{R}^{n}\setminus B_{\ell(Q)}}\frac{ |\varphi(y)|}{|x_{0}-y|^{n+1}}dy\] \[\leq C(n)\ell(Q)\left\|\varphi\right\|_{\infty}\int_{\mathbb{R}^ {n}\setminus B_{\ell(Q)}}^{\infty}\frac{1}{|x_{0}-y|^{n+1}}dy \tag{2.11}\] \[=C(n)\ell(Q)\left\|\varphi\right\|_{\infty}\int_{\ell(Q)}^{ \infty}\frac{1}{r^{2}}dr=C(n)\left\|\varphi\right\|_{\infty}.\] It is easy to see that \(R_{j}(\varphi_{2})(x_{0})\) is a constant depending only on \(\varphi\) and \(Q\). By (2.11), we know that for any cube \(Q\subset\mathbb{R}^{n}\), \[\frac{1}{|Q|}\int_{Q}\left|R_{j}(\varphi_{2})(x)-\left(R_{j}( \varphi_{2})\right)_{Q}\right|dx\] \[= \frac{1}{|Q|}\int_{Q}\left|\left(R_{j}(\varphi_{2})(x)-R_{j}( \varphi_{2})(x_{0})\right)-\left(R_{j}(\varphi_{2})-R_{j}(\varphi_{2})(x_{0}) \right)_{Q}\right|dx \tag{2.12}\] \[\leq \frac{2}{|Q|}\int_{Q}\left|R_{j}(\varphi_{2})(x)-R_{j}(\varphi_{2 })(x_{0})\right|dx\leq C(n)\left\|\varphi\right\|_{\infty},\] which together with (2.7) implies that \[\left\|R_{j}(\varphi)\right\|_{*}\leq\left\|R_{j}(\varphi_{1})\right\|_{*}+ \left\|R_{j}(\varphi_{2})\right\|_{*}\leq C(n)\left\|\varphi\right\|_{\infty}\] and completes the proof of the theorem, that is to say, \(R_{j}\) is a bounded linear operator from \(L^{\infty}(\mathbb{R}^{n})\) to \(\mathrm{BMO}(\mathbb{R}^{n})\). ## 3. Proof of Theorem 1.3 We give the proof by an iteration argument. Suppose \(f\in\mathrm{VMO}(\mathbb{S})\), since \(\mathrm{VMO}\) is the closed subspace of \(\mathrm{BMO}\), Remark 2.1 indicates that there exists \(g_{1}(\zeta)\in L^{\infty}(\mathbb{S})\), \(\mu_{1}(z)\in CM(\Delta)\) and a constant \(C>0\) (irrelevant to \(f\)) such that \[f(\zeta)=g_{1}(\zeta)+S\mu_{1}(\zeta),\] \[\left\|g_{1}\right\|_{\infty}+\sum_{n=1}^{\infty}\left\|h_{n}^{(1)}\right\|_{ \infty}\leq C\left\|f\right\|_{*},\] where \[\mu_{1}(z)=\sum_{n=1}^{\infty}\delta_{r_{n}}(r)h_{n}^{(1)}(r_{n}e^{i\varphi}),\ h_{n}^{(1)} \in L^{\infty}(r_{n}\mathbb{S}),\ z=re^{i\varphi}.\] From the Theorem 5.1 in [6] and [15], if \(f\in\operatorname{VMO}(\mathbb{S})\), then there exists \(r^{(1)}>0\) such that \[\left\|f-P_{r^{(1)}}\ast f\right\|_{\ast}<\left\|f\right\|_{\ast}/2.\] Applying Fubini's theorem to the following equality, it is not hard to find that \[P_{r^{(1)}}\ast f=P_{r^{(1)}}\ast g_{1}+P_{r^{(1)}}\ast S\mu_{1}:=\tilde{g}_{1 }+S\tilde{\mu}_{1},\] where \(\tilde{\mu}_{1}(z)=\sum_{n=1}^{\infty}\delta_{r_{n}}(r)\tilde{h}_{n}^{(1)}(r_ {n}e^{i\varphi})\), \(\tilde{h}_{n}^{(1)}=P_{r^{(1)}}\ast h_{n}^{(1)}\). 
Since \(g_{1}\in L^{\infty}(\mathbb{S})\), \(h_{n}^{(1)}\in L^{\infty}(r_{n}\mathbb{S})\), it is easy to see from the properties of convolution that \(\tilde{g}_{1}\in C(\mathbb{S})\) and \(P_{r^{(1)}}\ast h_{n}^{(1)}\in C(r_{n}\mathbb{S})\). Moreover, we have \[\left\|\tilde{g}_{1}\right\|_{\infty}+\sum_{n=1}^{\infty}\left\|\tilde{h}_{n} ^{(1)}\right\|_{\infty}\leq\left\|g_{1}\right\|_{\infty}+\sum_{n=1}^{\infty} \left\|h_{n}^{(1)}\right\|_{\infty}\leq C\left\|f\right\|_{\ast}.\] It must be made clear that the interchange of the order of integration without any explanation in the following proof can be verified by dedicated readers for the uniformity of the convergence. Set \(f^{(1)}=f-P_{r^{(1)}}\ast f\), \(\left\|f^{(1)}\right\|_{\ast}\leq\left\|f\right\|_{\ast}/2\). By a slight calculation, we claim that \(f^{(1)}\in\operatorname{VMO}(\mathbb{S})\). First, we can represent the mean integral of \(f^{(1)}\) over \(I\subset\mathbb{S}\) as follows, \[f_{I}^{(1)}=\frac{1}{\left|I\right|}\int_{I}f^{(1)}(e^{i\theta} )d\theta =\frac{1}{\left|I\right|}\int_{I}\left(\int_{0}^{2\pi}\left[f(e^{i \theta})-f(e^{i(\theta-\varphi)})\right]P_{r^{(1)}}(\varphi)d\varphi\right)d\theta\] \[=\int_{0}^{2\pi}P_{r^{(1)}}(\varphi)(f-f_{\varphi})_{I}d\varphi,\] where \(f_{\varphi}=f(e^{i(\theta-\varphi)})\) is the translation of \(f(e^{i\theta})\). Then, for any \(I\) with small enough arclength \(\delta\), we obtain that \[\frac{1}{\left|I\right|}\int_{I}\left|f^{(1)}(e^{i\theta})-f_{I}^ {(1)}\right|d\theta\] \[= \frac{1}{\left|I\right|}\int_{I}\left|\int_{0}^{2\pi}P_{r^{(1)}} (\varphi)\left(f(e^{i\theta})-f(e^{i(\theta-\varphi)})\right)d\varphi-\int_{0 }^{2\pi}P_{r^{(1)}}(\varphi)\left(f-f_{\varphi}\right)_{I}d\varphi\right|d\theta\] \[= \frac{1}{\left|I\right|}\int_{I}\left|\int_{0}^{2\pi}P_{r^{(1)}} (\varphi)\left[\left(f(e^{i\theta})-f_{I}\right)-\left(f_{\varphi}-(f_{ \varphi})_{I}\right)\right]d\varphi\right|d\theta\] Again by Fubini's theorem, we shall get \[\frac{1}{\left|I\right|}\int_{I}\left|f^{(1)}(e^{i\theta})-f_{I}^ {(1)}\right|d\theta\] \[\leq \int_{0}^{2\pi}P_{r^{(1)}}(\varphi)\left(\frac{1}{\left|I\right| }\int_{I}\left|f(e^{i\theta})-f_{I}\right|d\theta+\frac{1}{\left|I\right|}\int _{I}\left|f_{\varphi}-(f_{\varphi})_{I}\right|d\theta\right)d\varphi\] \[\leq 2\sup_{\left|I\right|=\delta}\frac{1}{\left|I\right|}\int_{I} \left|f-f_{I}\right|d\theta.\] Let \(\delta\to 0,f\in\operatorname{VMO}(\mathbb{S})\) implies that \(f^{(1)}\in\operatorname{VMO}(\mathbb{S})\). Now, we shall claim that \(\tilde{\mu}_{1}\) is a (vanishing) Carleson measure. It holds that \[\frac{1}{|I|}\int_{\widetilde{I}}d|\tilde{\mu}_{1}|(z) \leq\frac{1}{|I|}\int_{I}\sum_{\{n:r_{n}\geq 1-|I|/2\pi\}}^{\infty}| \tilde{h}_{n}^{(1)}(r_{n}e^{i\varphi})|d\varphi\] \[\leq\sum_{\{n:r_{n}\geq 1-|I|/2\pi\}}^{\infty}\left\|\tilde{h}_{n}^ {(1)}\right\|_{\infty}\leq\sum_{\{n:r_{n}\geq 1-|I|/2\pi\}}^{\infty}\left\|h_{n}^ {(1)}\right\|_{\infty}, \tag{3.1}\] for any Carleson square \[\widetilde{I}=\{z=re^{i\varphi}:\varphi\in I,\ r\in(1-|I|/2\pi,1)\},\ I\subset[ 0,2\pi].\] It follows then \(\tilde{\mu}_{1}\) is a Carleson measure. 
Then repeating the above argument on \(f^{(1)}\), there exists \(r^{(2)}>0\) such that \[\left\|f^{(1)}-P_{r^{(2)}}*f^{(1)}\right\|_{*}\leq\frac{\left\|f^{(1)}\right\| _{*}}{2}\leq\frac{\left\|f\right\|_{*}}{4},\] and there exists \(g_{2}\in L^{\infty}(\mathbb{S}),\ \mu_{2}(z)=\sum_{n=1}^{\infty}\delta_{r_{n}}(r)h_{n }^{(2)}(r_{n}e^{i\varphi}),\)\(h_{n}^{(2)}\in L^{\infty}(r_{n}\mathbb{S})\) such that \[f^{(1)}=g_{2}+S\mu_{2},\] and \[\left\|g_{2}\right\|_{\infty}+\sum_{n=1}^{\infty}\left\|h_{n}^{(2)}\right\|_{ \infty}\leq C\left\|f^{(1)}\right\|_{*}\leq\frac{C}{2}\left\|f\right\|_{*}.\] We get \(\tilde{g}_{2}\) and \(\tilde{\mu}_{2}\) as before, and we have \[\left\|\tilde{g}_{2}\right\|_{\infty}+\sum_{n=1}^{\infty}\left\|\tilde{h}_{n }^{(2)}\right\|_{\infty}\leq\left\|g_{2}\right\|_{\infty}+\sum_{n=1}^{\infty} \left\|h_{n}^{(2)}\right\|_{\infty}\leq C\|f\|_{*}.\] One does this infinitely often by iterating, obtaining the limit function \(g\) and the limit measure \(\mu\). Therefore, \[f=\sum_{k=1}^{\infty}\tilde{g}_{k}+\int_{\Delta}P_{z}(\zeta)\left(\sum_{k=1}^{ \infty}d\tilde{\mu}_{k}(z)\right):=g+S\mu.\] Since \(\tilde{g}_{k}\) is continuous on the unit circle \(\mathbb{S}\), and \[\left\|\sum_{k=1}^{\infty}\tilde{g}_{k}\right\|_{\infty}\leq\sum_{k=1}^{\infty }\left\|\tilde{g}_{k}\right\|_{\infty}\leq\sum_{k=1}^{\infty}\frac{C\left\|f \right\|_{*}}{2^{k-1}}=2C\left\|f\right\|_{*}.\] The convergence of \(\sum_{k=1}^{\infty}\tilde{g}_{k}\) is uniform, which indicates \(g\in C(\mathbb{S})\). Now it remains to prove that \(\mu\) is a vanishing Carleson measure. From the estimate (3.1), it follows therefore that \[\lim_{|I|\to 0}\frac{1}{|I|}\int_{\widetilde{I}}d|\mu|(z) \leq\lim_{|I|\to 0}\frac{1}{|I|}\int_{\widetilde{I}}\sum_{k=1}^{ \infty}d|\tilde{\mu}_{k}|(z)\] \[\leq\lim_{|I|\to 0}\sum_{k=1}^{\infty}\sum_{\{n:r_{n}\geq 1-|I|/2\pi\}} \left\|\tilde{h}_{n}^{(k)}\right\|_{\infty}.\] On the other hand, \[\sum_{k=1}^{\infty}\sum_{\{n:r_{n}\geq 1-|I|/2\pi\}}\left\|\tilde{h}_{n}^{(k)} \right\|_{\infty}\leq\sum_{k=1}^{\infty}\sum_{n=1}^{\infty}\left\|\tilde{h}_{n} ^{(k)}\right\|_{\infty}\leq\sum_{k=1}^{\infty}\frac{C\left\|f\right\|_{*}}{2^{ k-1}}=2C\left\|f\right\|_{*}.\] It readily follows from Tannery's theorem that \[0\leq\lim_{|I|\to 0}\frac{1}{|I|}\int_{\widetilde{I}}d|\mu|(z)\leq\sum_{k=1}^{ \infty}\lim_{|I|\to 0}\sum_{\{n:r_{n}\geq 1-|I|/2\pi\}}\left\|\tilde{h}_{n}^{(k)} \right\|_{\infty}=0.\] That is to say, for any given \(\varepsilon>0\), there exists \(\delta>0\) such that for \(|I|<\delta\), \[\frac{1}{|I|}\int_{\widetilde{I}}d|\mu|(z)\leq\varepsilon,\] which gives the required result that \(\mu\) is a vanishing Carleson measure. Here we finished the proof of Theorem 1.3. Conversely, any function \(f\) which has a representation of the form (1.1) has vanishing mean oscillation. Since \(\mathrm{VMO}(\mathbb{S})\) is the closure of \(C(\mathbb{S})\) in \(\mathrm{BMO}(\mathbb{S})\), it suffices to show that **Proposition 3.1**.: _If \(\mu\) is a vanishing Carleson measure on \(\Delta\), then \(S\mu\in\mathrm{VMO}(\mathbb{S})\)._ Proof.: A slight modification of the proof of Theorem 2.1 can be used to show this proposition. A proof will now be given for the convenience of the reader. Writing \(\mu=\mu^{+}-\mu^{-}\), where \(|\mu|=\mu^{+}+\mu^{-}\), \(\mu^{+}\geq 0\), \(\mu^{-}\geq 0\). Without loss of generality, we can suppose \(\mu\geq 0\). 
Let \(I_{0}\) be any fixed interval on \([0,2\pi]\) with \(|I_{0}|=h\), let \(I_{k}\) be the concentric interval with length \(2^{k}h\), and let \(|I_{N_{0}+1}|=2\pi\), where \(N_{0}\) is the natural number such that \(2^{N_{0}}h\leq 2\pi<2^{N_{0}+1}h\). Set \[c_{h}=\sup_{|I|\leq h}\frac{\mu(\widetilde{I})}{|I|};\] since \(\mu\) is a vanishing Carleson measure, we have \(\lim_{h\to 0}c_{h}=0\). For any \(e^{i\theta},e^{i\theta_{0}}\in e^{iI_{0}}\), \(z\in\widetilde{I}_{k}\backslash\widetilde{I}_{k-1}\), \(2\leq k\leq N_{0}+1\), a simple geometric argument (for details, see [6, 7]) can be made to show that there exists a positive constant \(C\) (independent of \(I_{0}\)) such that \[|P_{z}(e^{i\theta})-P_{z}(e^{i\theta_{0}})|=\left|\frac{1-|z|^{2}}{|e^{i\theta}-z|^{2}}-\frac{1-|z|^{2}}{|e^{i\theta_{0}}-z|^{2}}\right|\leq\frac{C}{2^{2k}|I_{0}|}. \tag{3.2}\] By dividing the unit disk into the union of different Carleson squares, we obtain \[S\mu(e^{i\theta})=\int_{\Delta}P_{z}(e^{i\theta})d\mu(z)=\left(\int_{\widetilde{I}_{1}}+\sum_{k=2}^{N_{0}+1}\int_{\widetilde{I}_{k}\backslash\widetilde{I}_{k-1}}\right)P_{z}(e^{i\theta})d\mu(z). \tag{3.3}\] Replacing \((S\mu)_{I_{0}}\) with the constant \(\sum_{k=2}^{N_{0}+1}\int_{\widetilde{I}_{k}\backslash\widetilde{I}_{k-1}}P_{z}(e^{i\theta_{0}})d\mu(z)\) and combining with (3.3), we deduce that \[\frac{1}{|I_{0}|}\int_{I_{0}}\left|S\mu(e^{i\theta})-\sum_{k=2}^{N_{0}+1}\int_{\widetilde{I}_{k}\backslash\widetilde{I}_{k-1}}P_{z}(e^{i\theta_{0}})d\mu(z)\right|d\theta\] \[\leq \frac{1}{|I_{0}|}\int_{I_{0}}\left(\int_{\widetilde{I}_{1}}P_{z}(e^{i\theta})d\mu(z)\right)d\theta\]
In combination with (3.4), (3.5) and (3.6), there exists \(\delta>0\) such that for any \(|I_{0}|=h<\delta\), we have then clearly: \[\frac{1}{|I_{0}|}\int_{I_{0}}\left|S\mu(e^{i\theta})-\sum_{k=2}^{ N_{0}+1}\int_{\widetilde{I}_{k}\setminus\widetilde{I}_{k-1}}P_{z}(e^{i \theta_{0}})d\mu(z)\right|d\theta\] \[\leq 2c_{2h}+\sum_{k=2}^{N-1}\frac{c_{2^{k}h}}{2^{k}}+\sum_{k=N}^{N_{0 }+1}\frac{c_{2^{k}h}}{2^{k}}\] \[\leq 2\varepsilon+\sum_{k=2}^{\infty}\frac{\varepsilon}{2^{k}}+\sum_ {k=N}^{\infty}\frac{\|\mu\|_{\mathcal{C}}}{2^{k}}<4\varepsilon,\] which yields that \(S\mu\in\mathrm{VMO}(\mathbb{S})\). Combined with Theorem 1.3 and Proposition 3.1, a representation theorem of \(\mathrm{VMO}(\mathbb{S})\) will be given. **Corollary 3.2**.: _Let \(f\in L^{1}(\mathbb{S})\). Then \(f\in\mathrm{VMO}(\mathbb{S})\) if and only if there exists a vanishing Carleson measure \(\mu\in CM_{0}(\Delta)\) and a continuous function \(g\in C(\mathbb{S})\) such that_ \[f(\zeta)=g(\zeta)+S\mu(\zeta),\ \zeta=e^{i\theta}\in\mathbb{S}.\] ## 4. Applications of Theorem 1.3 The proof of the duality theorem would have been considerably easier had it been the case that \(|\nabla\varphi|dxdy\) were a Carleson measure whenever \(\varphi\in\mathrm{BMO}(\mathbb{R})\). However, this is not the case even when \(\varphi\) is a Blaschke product. The Littlewood-Paley expression \(|\nabla\varphi|^{2}ydxdy\) can overcome this difficulty. Another approach was proposed by Varopoulos [19, 20] when he attempted to study the \(\bar{\partial}\)-equation associated with the Corona problem and the relation of that equation with BMO functions on the boundary. Varopoulos considered a smooth function with \(|\nabla F|dxdy\in CM(\mathbb{R}_{+}^{n+1})\) such that the boundary value of \(F\) deviated from \(\varphi\) by a bounded function \(g\). Inspired by his results, we derive the following proposition. **Proposition 4.1**.: _Let \(f\in\operatorname{VMO}(\mathbb{S})\). Then there exists a smooth function \(F\in C^{\infty}(\Delta)\) that satisfies the following conditions._ 1. \(\lim_{r\to 1}F(re^{i\theta})-f(e^{i\theta})\in C(\mathbb{S})\)_;_ 2. _The measure_ \(|\nabla F|dxdy\) _is a vanishing Carleson measure on_ \(\Delta\)_._ Before we prove this proposition, we now define an auxiliary function as follows: \[\widetilde{P}_{z}(u)=P_{z}(\zeta)\chi_{(r,1)}(\rho),\] where \[z=re^{i\varphi}\in\Delta,\ u=\rho\zeta\in\Delta,\ 0<r,\rho<1,\ \zeta=e^{i\theta} \in\mathbb{S},\] and \(\chi_{(r,1)}\) denotes the characteristic function of the interval \((r,1)\). Let \(z\in\Delta\) be fixed, and denote by \[\nu_{z}(u)=\frac{\partial\widetilde{P}_{z}}{\partial\rho}(u),\ \lambda_{z}(u)=\frac{ \partial\widetilde{P}_{z}}{\partial\theta}(u),\ u=\rho e^{i\theta},\ 0<\rho<1,\] It is perfectly clear that \(\nu_{z}\) is the Lebesgue linear measure on the circle \(\{u:|u|=|z|=r\}\) multiplied by the Poisson kernel \(P_{z}(\zeta)\) and \[\lambda_{z}(u)=\frac{dP_{z}(\zeta)}{d\theta}\chi_{(r,1)}(\rho).\] It follows from Lemma 1.3.2 and Lemma 1.3.3 in [19] that \(\|\nu_{z}\|\leq 1\) and \(\|\lambda_{z}\|\leq C\) for some numerical constant, here \(\|\nu_{z}\|=\int_{z\in\Delta}d|\nu_{z}|\) and \(\|\lambda_{z}\|\) is defined similarly. Let us now suppose that \(f\in\operatorname{VMO}(\mathbb{S})\) be some VMO function on the unit circle and let \(\mu\) be some vanishing Carleson measure that satisfies (1.1). 
We set \[F(u) =\int_{\Delta}\widetilde{P}_{z}(u)d\mu(z),\quad u\in\Delta,\] \[g(\zeta) =\int_{\Delta}P_{z}(\zeta)d|\mu|(z),\quad\zeta\in\mathbb{S}.\] Theorem 2.1 implies that \(g(\zeta)<+\infty\) almost everywhere. For any \(\zeta\in\mathbb{S}\) such that \(g(\zeta)<+\infty\), by Lebesgue's dominated convergence theorem we easily see that \[\lim_{\rho\to 1}F(\rho\zeta)=\int_{\Delta}P_{z}(\zeta)d\mu(z).\] It follows immediately from the definition of \(F\) that \[\frac{\partial F}{\partial\rho}(u)=\int_{\Delta}\nu_{z}(u)d\mu(z),\ \frac{\partial F}{\partial\theta}(u)=\int_{\Delta}\lambda_{z}(u)d\mu(z). \tag{4.1}\] We now continue with the proof. Proof of Proposition 4.1.: We shall first show that \(\partial F/\partial\rho\), \(\partial F/\partial\theta\) are vanishing Carleson measures. It follows from Lemma 1.3.1 in [19] that \(\partial F/\partial\rho\), \(\partial F/\partial\theta\) are Carleson measures. Let \(h\) be small enough, and \(I_{1}\) be a fixed interval on \([0,2\pi]\) with \(|I_{1}|=h\). As before, let \(I_{m}\) be the concentric interval with length \(mh\), \(I_{N_{1}+1}=2\pi\), where \(N_{1}\) is the natural number such that \(N_{1}h<2\pi\leq(N_{1}+1)h\). Let \(\zeta\) be any point on the arc \(e^{iI_{1}}\). Define \[\widehat{I}_{m}=\{z=re^{i\varphi}:\varphi\in I_{m},\ r\in(1-h/2\pi,1)\},\ I_{m}\subset[0,2\pi].\] For any \(z=re^{i\varphi}\in\widehat{I}_{m}\backslash\widehat{I}_{m-1}\) we observe that \[|z-\zeta|\asymp|e^{i(\theta-\varphi)}-1|\asymp|\theta-\varphi|\asymp mh,\ 3\leq m\leq N_{1}+1,\] where \(A\asymp B\) means that there exists a universal constant \(C\geq 1\) such that \(B/C\leq A\leq CB.\) Here we notice that \(\widehat{I}_{m}\backslash\widehat{I}_{m-1}\) consists of two Carleson squares with length \(h\) for each \(m\). It is easily verified that there exists a universal constant \(C>0\) such that \[|\nu_{z}|(\widehat{I}_{1}) \leq Ch\frac{1-r^{2}}{|z-\zeta|^{2}}\leq\frac{C}{m^{2}}, \forall\ z=re^{i\varphi}\in\widehat{I}_{m}\backslash\widehat{I}_{m-1},\ 1-r\leq h;\] \[|\nu_{z}|(\widehat{I}_{1}) =0, \forall\ z=re^{i\varphi},\ 1-r>h.\] From this fact, we conclude that \[\left|\frac{\partial F}{\partial\rho}\right|(\widehat{I}_{1}) \leq\int_{\Delta}|\nu_{z}|(\widehat{I}_{1})d|\mu|(z)\] \[=\int_{\{z:|z|>1-h\}}|\nu_{z}|(\widehat{I}_{1})d|\mu|(z)\] \[\leq\int_{\widehat{I}_{2}}\|\nu_{z}\|\,d|\mu|(z)+\sum_{m=3}^{N_{1}+1}\int_{\widehat{I}_{m}\backslash\widehat{I}_{m-1}}\frac{C}{m^{2}}d|\mu|(z) \tag{4.2}\] \[\leq 2|\mu|(\widetilde{I}_{h})+2C\sum_{m=1}^{\infty}\frac{|\mu|(\widetilde{I}_{h})}{m^{2}}\leq Cc_{h}h.\] It should be pointed out that, for simplicity of notation, all the Carleson squares making up \(\widehat{I}_{m}\backslash\widehat{I}_{m-1}\) are denoted by \(\widetilde{I}_{h}\). Since \(\mu\) is a vanishing Carleson measure, it now follows that \(\partial F/\partial\rho\) is also a vanishing Carleson measure. Let us now deal with \(\partial F/\partial\theta\). We see that \[\left|\frac{\partial F}{\partial\theta}(u)\right|\leq\int_{\Delta}\left|\frac{\partial\widetilde{P}_{z}(u)}{\partial\theta}\right|d|\mu|(z)=\int_{\Delta}\left|\frac{dP_{z}(e^{i\theta})}{d\theta}\right|\chi_{(\rho,1)}d|\mu|(z),\ u=\rho e^{i\theta}. \tag{4.3}\] We have the estimate \[\left|\frac{dP_{z}(e^{i\theta})}{d\theta}\right|\leq\frac{C}{|z-e^{i\theta}|^{2}} \tag{4.4}\] valid for all \(z=re^{i\varphi}\) and all \(\theta\in[0,2\pi]\) (see [19]). 
From estimates (4.1), (4.3) and (4.4), we obtain \[\left|\frac{\partial F}{\partial\theta}\right|(\widehat{I}_{1}) \leq\int_{\Delta}\int_{\widehat{I}_{1}}\left|\frac{dP_{z}(e^{i\theta})}{d\theta}\right|\chi_{(\rho,1)}dud|\mu|(z) \tag{4.5}\] \[=\int_{\{z:|z|>1-h\}}\int_{\widehat{I}_{1}}\left|\frac{dP_{z}(e^{i\theta})}{d\theta}\right|\chi_{(\rho,1)}dud|\mu|(z)\] \[\leq\int_{\widehat{I}_{2}}\|\lambda_{z}\|\,d|\mu|(z)+\sum_{m=3}^{N_{1}+1}\int_{\widehat{I}_{m}\backslash\widehat{I}_{m-1}}\int_{\widehat{I}_{1}}\frac{C}{|z-e^{i\theta}|^{2}}dud|\mu|(z)\] Using again the fact that \(|z-\zeta|\) is comparable to \(mh\) for any \(z\in\widehat{I}_{m}\backslash\widehat{I}_{m-1}\), \(3\leq m\leq N_{1}+1\), \(\zeta=e^{i\theta}\in e^{iI_{1}}\), we see that \[\int_{\widehat{I}_{1}}\frac{C}{|z-e^{i\theta}|^{2}}du\leq\frac{C}{m^{2}}.\] Therefore, continuing the estimate (4.5), we get \[\left|\frac{\partial F}{\partial\theta}\right|(\widehat{I}_{1})\leq C|\mu|(\widetilde{I}_{h})+\sum_{m=3}^{N_{1}+1}\frac{|\mu|(\widetilde{I}_{h})}{m^{2}}\leq Cc_{h}h, \tag{4.6}\] which implies that \(\partial F/\partial\theta\) is a vanishing Carleson measure. From what we have shown above, the fact that \(|\nabla F|dxdy\) is a vanishing Carleson measure is an immediate consequence of (4.2) and (4.6). We shall explain this in detail. Computing the gradient in polar coordinates and using the chain rule, we have \[|\nabla F|=\sqrt{\left(\frac{\partial F}{\partial\rho}\right)^{2}+\left(\frac{1}{\rho}\frac{\partial F}{\partial\theta}\right)^{2}}, \tag{4.7}\] and for small enough \(h\), \(\frac{1}{\rho}\) is bounded above by \(2\). Thus \[|\nabla F|\leq 2\sqrt{\left(\frac{\partial F}{\partial\rho}\right)^{2}+\left(\frac{\partial F}{\partial\theta}\right)^{2}}\leq 2\left(\left|\frac{\partial F}{\partial\rho}\right|+\left|\frac{\partial F}{\partial\theta}\right|\right).\] Finally, we smooth the constructed function \(F\) by truncating \(P_{z}\) with a smooth function rather than the characteristic function: \[\widetilde{P}_{z}(u)=P_{z}(\zeta)\varphi\left(\frac{1-|u|}{1-|z|}\right)\ \forall\ u\in\Delta,\] where \(\varphi(t)\) is some nonnegative \(C^{\infty}\) function chosen such that \[\varphi(t)=\begin{cases}0,&t\geq 2;\\ 1,&0\leq t\leq 1.\end{cases}\] A direct check shows that \(F(u)\) satisfies both conditions of Proposition 4.1, which gives the required result. It is natural to ask whether the converse of Proposition 4.1 holds. Indeed it does. **Theorem 4.2**.: _Let \(F(x,y)=F(re^{i\theta})\in C^{1}(\Delta)\) be a once continuously differentiable function such that \(|\nabla F|dxdy\) is a vanishing Carleson measure on \(\Delta\) and such that the limit_ \[\lim_{r\to 1}F(re^{i\theta})=f(e^{i\theta})\] _exists for almost all \(e^{i\theta}\in\mathbb{S}\). 
Then \(f\in\mathrm{VMO}(\mathbb{S})\)._ Proof.: According to Theorem 1.1.2 in [19], \(f\in\mathrm{BMO}(\mathbb{S})\), it is sufficient to show that \[\lim_{|I|\to 0}\int_{I}|f(\zeta)-f_{I}|\left|dz\right|=0.\] Consider any Carleson square \(\widetilde{I}_{h}\) defined as before (\(0<h<1/10\)), \[\widetilde{I}_{h}=\{z=re^{i\varphi}:\varphi\in I_{h},\ r\in(1-h/2\pi,1)\},\ I _{h}=[\theta_{0},\theta_{0}+h]\subset[0,2\pi].\] By the formula (4.7), \(|\nabla F|(x,y)\) are comparable to \[\sqrt{|\partial F/\partial r|^{2}+|\partial F/\partial\theta|^{2}}=|\nabla_{r,\theta}F|(re^{i\theta}),\] thus \[\int_{\widetilde{I}_{h}}|\nabla F|(x,y)dxdy\asymp\int_{\widetilde{I}_{h}}| \nabla_{r,\theta}F|(re^{i\theta})drd\theta\leq c_{h}h,\] Then from Fubini's theorem, there exists some \(r_{0}\in(1-h/2\pi,1)\) such that \[\int_{I_{h}}|\nabla_{r,\theta}F|(r_{0}e^{i\theta})d\theta\leq c_{h}. \tag{4.8}\] Otherwise, for all \(1-h/2\pi<r<1\), we have \[\int_{I_{h}}|\nabla_{r,\theta}F|(re^{i\theta})d\theta>c_{h}.\] Hence, \[\int_{\bar{I}_{h}}|\nabla_{r,\theta}F|(re^{i\theta})drd\theta>c_{h}h.\] This is a contradiction by the fact that \(|\nabla_{r,\theta}F|(re^{i\theta})drd\theta\) is a Carleson measure. From (4.7) it follows that \[|F(r_{0}e^{i\theta})-F(r_{0}e^{i\theta_{0}})|\leq c_{h},\ \forall\ \theta\in I_{h}. \tag{4.9}\] It is an easy matter to see that we always have \[\left|\lim_{r\to 1}F(re^{i\theta})-F(r_{0}e^{i\theta})\right|\leq\int_{1-r_{0}} ^{1}|\nabla_{r,\theta}F|(re^{i\theta})dr \tag{4.10}\] Combing with (4.9) and (4.10), we conclude that \[\left|f(e^{i\theta})-F(r_{0}e^{i\theta_{0}})\right|\leq c_{h}+\int_{1-\frac{h }{2\pi}}^{1}|\nabla_{r,\theta}F|(re^{i\theta})dr \tag{4.11}\] Integrating (4.11) over \(I_{h}\), then \[\int_{I_{h}}\left|f(e^{i\theta})-F(r_{0}e^{i\theta_{0}})\right|d\theta\leq c _{h}h+\int_{\bar{I}_{h}}|\nabla_{r,\theta}F|(re^{i\theta})drd\theta\leq 2c_{h}h,\] which proves that \(f\in\operatorname{VMO}(\mathbb{S})\), and this completes the proof. ## 5. A representation theorem of \(\operatorname{VMO}(\mathbb{R}^{n})\) ### VMO of several variables To prove Theorem 1.4, we first give the following equivalent description of \(\varphi\in\operatorname{VMO}\left(\mathbb{R}^{n}\right)\). **Theorem 5.1**.: _Assume \(\varphi\in\operatorname{BMO}\left(\mathbb{R}^{n}\right)\), the following conditions are equivalent._ 1. \(\varphi\in\operatorname{VMO}\left(\mathbb{R}^{n}\right)\)_._ 2. \(\lim_{|y|\to 0}\left\|\varphi_{y}-\varphi\right\|_{{}_{*}}=0\)_, where_ \(\varphi_{y}(x)=\varphi(x-y)\) _is the translation of_ \(\varphi\) _by_ \(y\)_._ 3. \(\lim_{t\to 0}\left\|\varphi(x)-\varphi(x,t)\right\|_{{}_{*}}=0\)_, where_ \[\varphi(x,t)=(P_{t}*\varphi)(x)=\int_{\mathbb{R}^{n}}P_{t}(x-y)\varphi(y)dy,\] \[P_{t}(x)=\frac{c_{n}t}{(t^{2}+|x|^{2})^{\frac{n+1}{2}}},\ t>0.\] 4. \(\varphi\) _is in the_ \(\operatorname{BMO}\)_-closure of_ \(\operatorname{UC}\left(\mathbb{R}^{n}\right)\cap\operatorname{BMO}\left( \mathbb{R}^{n}\right)\)_._ **Remark 5.1**.: It is noticeable that if \(\varphi\in\operatorname{BMO}(\mathbb{R}^{n}),\varphi(x,t)\) is well-defined, and \[\int_{\mathbb{R}^{n}}\frac{|\varphi(x)|}{1+|x|^{n+1}}dx<\infty.\] Proof of Theorem 5.1.: We verify the circle of implications \[(1)\Rightarrow(2)\Rightarrow(3)\Rightarrow(4)\Rightarrow(1).\] First, we show \((1)\Rightarrow(2)\). 
Assume \(\varphi\in\operatorname{VMO}\left(\mathbb{R}^{n}\right)\), fix \(\delta>0\), set \[M_{\delta}(\varphi)=\sup_{\ell(Q)<\delta}\frac{1}{|Q|}\int_{Q}|\varphi(x)- \varphi_{Q}|dx.\] let \(\mathcal{L}\) be the natural subdivision of \(\mathbb{R}^{n}\) into half closed cubes of side length \(\delta\) such that the set of vertices of the cubes of \(\mathcal{L}\) is \(\mathbb{Z}^{n}\delta\). Set \(\mathcal{L}=\bigcup_{j\in\mathbb{Z}}Q_{j}\), where \(Q_{j}\) is half closed cube of side length \(\delta\). Define \[h(x)=\sum_{j\in\mathbb{Z}}\varphi_{Q_{j}}\chi_{Q_{j}}(x),\] where \(\varphi_{Q_{j}}=\frac{1}{|Q_{j}|}\int_{Q_{j}}\varphi(x)dx\). Ideally, we require the existence of a constant \(C(n)\) that depends only on \(n\) to provide us with the following estimate: \[\left\|\varphi-h\right\|_{*}\leq C(n)M_{2\delta}(\varphi).\] For the case that \(\ell(Q)\leq\delta\), it follows \(Q\) is contained in a cube \(Q^{\prime}\) of length \(2\delta\) which is the union of \(2^{n}Q_{i_{k}}s,\ k=1,\cdots,2^{n}\). Note that \[\left|\varphi_{Q_{j_{k}}}-\varphi_{Q^{\prime}}\right| \leq\frac{1}{|Q_{j_{k}}|}\int_{Q_{j_{k}}}\left|\varphi(x)-\varphi_ {Q^{\prime}}\right|dx\] \[\leq\frac{2^{n}}{|Q^{\prime}|}\int_{Q^{\prime}}\left|\varphi(x)- \varphi_{Q^{\prime}}\right|dx\leq 2^{n}M_{2\delta}(\varphi).\] Therefore, \[\left|\varphi_{Q_{j_{k_{1}}}}-\varphi_{Q_{j_{k_{2}}}}\right|\leq\left|\varphi _{Q_{j_{k_{1}}}}-\varphi_{Q^{\prime}}\right|+\left|\varphi_{Q_{j_{k_{2}}}}- \varphi_{Q^{\prime}}\right|\leq 2^{n+1}M_{2\delta}(\varphi). \tag{5.1}\] Compute the mean value integral \(h_{Q}\) directly, we have \[h_{Q}=\frac{1}{|Q|}\int_{Q}h(x)dx=\frac{1}{|Q|}\sum_{k=1}^{2^{n}}\int_{Q_{j_{ k}}\cap Q}\varphi_{Q_{j_{k}}}dx=\sum_{k=1}^{2^{n}}\frac{|Q_{j_{k}}\cap Q|}{|Q|} \varphi_{Q_{j_{k}}}. \tag{5.2}\] Using (5.2) we deduce the mean oscillation of \(h\) as follows. \[\frac{1}{|Q|}\int_{Q}\left|h(x)-h_{Q}\right|dx= \frac{1}{|Q|}\sum_{k=1}^{2^{n}}\int_{Q_{j_{k}}\cap Q}\left|\varphi _{Q_{j_{k}}}-\sum_{m=1}^{2^{n}}\frac{|Q_{j_{m}}\cap Q|}{|Q|}\varphi_{Q_{j_{m} }}\right|dx\] \[= \sum_{k=1}^{2^{n}}\frac{|Q_{j_{k}}\cap Q|}{|Q|}\left|\varphi_{Q_{ j_{k}}}-\sum_{m=1}^{2^{n}}\frac{|Q_{j_{m}}\cap Q|}{|Q|}\varphi_{Q_{j_{m}}} \right|.\] From the identity \[\sum_{m=1}^{2^{n}}\frac{|Q_{j_{m}}\cap Q|}{|Q|}=1\] and inequality (5.1), it then follows that \[\frac{1}{|Q|}\int_{Q}\left|h(x)-h_{Q}\right|dx\leq\sum_{k=1}^{2^{n}}\frac{|Q_ {j_{k}}\cap Q|}{|Q|}\sum_{m=1}^{2^{n}}\left|\varphi_{Q_{j_{k}}}-\varphi_{Q_{j_{ m}}}\right|\frac{|Q_{j_{m}}\cap Q|}{|Q|}\] \[\leq 2^{n+1}M_{2\delta}(\varphi)\quad\sum_{k=1}^{2^{n}}\frac{|Q_{j_{k} }\cap Q|}{|Q|}\sum_{m=1}^{2^{n}}\frac{|Q_{j_{m}}\cap Q|}{|Q|} \tag{5.3}\] \[=2^{n+1}M_{2\delta}(\varphi).\] It is clear from (5.3) we obtain the mean oscillation of \(\varphi-h\) as follows. \[\frac{1}{|Q|}\int_{Q}|(\varphi(x)-h(x))-(\varphi-h)_{Q}|\,dx\] \[\leq \frac{1}{|Q|}\int_{Q}|\varphi(x)-\varphi_{Q}|\,dx+\frac{1}{|Q|} \int_{Q}|h(x)-h_{Q}|\,dx \tag{5.4}\] \[\leq M_{2\delta}(\varphi)+2^{n+1}M_{2\delta}(\varphi)\leq 2^{n+2}M_{ 2\delta}(\varphi).\] For the second case \(\ell(Q)>\delta\), set \(Q^{\prime\prime}=\{\bigcup Q_{j}:Q_{j}\cap Q\neq\phi\}\), it is perfectly clear that \(Q^{\prime\prime}\) is a cube such that \[\ell\left(Q^{\prime\prime}\right)\leq\ell(Q)+2\delta<3\ell(Q).\] It indicates that \[\frac{1}{\ell(Q)}\leq\frac{3}{\ell\left(Q^{\prime\prime}\right)} \tag{5.5}\] Write \(Q^{\prime\prime}=\bigcup_{k=1}^{2^{N}}Q_{j_{k}}\), note that \(|Q^{\prime\prime}|=2^{N}|Q_{j_{k}}|\) for any \(k=1,\cdots,2^{N}\). 
To estimate the mean oscillation of \(\varphi-h\), we have then \[\frac{1}{|Q|}\int_{Q}|(\varphi(x)-h(x))-(\varphi-h)_{Q}|\,dx\leq\frac{2}{|Q|} \int_{Q}|\varphi(x)-h(x)|dx\] Inequality (5.5) gives that \[\frac{2}{|Q|}\int_{Q}|\varphi(x)-h(x)|dx \leq\frac{2\cdot 3^{n}}{|Q^{\prime\prime}|}\int_{Q^{\prime\prime}}| \varphi(x)-h(x)|dx \tag{5.7}\] \[\leq\frac{2\cdot 3^{n}}{|Q^{\prime\prime}|}\sum_{k=1}^{2^{N}}\int_{ Q_{j_{k}}}\Big{|}\varphi(x)-\varphi_{Q_{j_{k}}}\Big{|}\,dx\] (5.8) \[=\frac{2\cdot 3^{n}}{2^{N}}\sum_{k=1}^{2^{N}}\frac{1}{|Q_{j_{k}}|} \int_{Q_{j_{k}}}\Big{|}\varphi(x)-\varphi_{Q_{j_{k}}}\Big{|}\,dx, \tag{5.6}\] from which it follows \[\frac{1}{|Q|}\int_{Q}|(\varphi(x)-h(x))-(\varphi-h)_{Q}|\,dx\leq 2\cdot 3^{n}M_ {\delta}(\varphi)\leq 2\cdot 3^{n}M_{2\delta}(\varphi).\] which together with (5.4), we know that \[\left\|\varphi-h\right\|_{*}\leq\left(2\cdot 3^{n}+2^{n+2}\right)M_{2\delta}( \varphi).\] If \(|y|<\delta\), for each \(x\) in some \(Q_{j}\), we have \(|h(x)-h(x-y)|=|\varphi_{Qj}-\varphi_{Q_{m}}|\leq 2^{n+1}M_{2\delta}(\varphi)\) for some \(m\), where \(Q_{j}\) and \(Q_{m}\) are adjacent (or coincide). Therefore, \[\left\|h(x)-h(x-y)\right\|_{\infty}\leq 2^{n+1}M_{2\delta}(\varphi).\] By the inequalities above, it then follows that \[\left\|\varphi_{y}-\varphi\right\|_{*} \leq\left\|\varphi-h\right\|_{*}+\left\|h_{y}-h\right\|_{*}+\left\| \varphi_{y}-h_{y}\right\|_{*}\] \[\leq 2\left\|\varphi-h\right\|_{*}+\left\|h_{y}-h\right\|_{\infty}\] \[\leq\left(2^{n+1}+2\left(2\cdot 3^{n}+2^{n+2}\right)\right)M_{2\delta}(\varphi)\] \[=C(n)M_{2\delta}(\varphi).\] Send \(\delta\) to \(0\), we obtain (2) holds. Next, we shall show that (2) \(\Rightarrow\) (3). By a similar discussion as the case \(\mathbb{S}\), we have \[\frac{1}{|Q|}\int_{Q}\left|(\varphi(x)-\varphi(x,t))-(\varphi(x)- \varphi(x,t))_{Q}\right|dx\] \[\leq \int_{\mathbb{R}^{n}}P_{t}(y)\left(\frac{1}{|Q|}\int_{Q}\left| \varphi(x)-\varphi_{y}(x)-(\varphi-\varphi_{y})_{Q}\right|dx\right)dy.\] For any given \(\varepsilon>0\), there exists \(\delta>0\) such that \(\left\|\varphi-\varphi_{y}\right\|_{{}_{*}}<\varepsilon\) for any \(|y|<\delta\). Then fix this \(\delta\), the above inequality is bounded above as follows \[\int_{|y|<\delta}P_{t}(y)\left(\frac{1}{|Q|}\int_{Q}\left|\varphi (x)-\varphi_{y}(x)-\left(\varphi-\varphi_{y}\right)_{Q}\right|dx\right)dy\] \[+\int_{|y|>\delta}P_{t}(y)\left(\frac{1}{|Q|}\int_{Q}\left|( \varphi(x)-\varphi_{Q})-\left(\varphi_{y}-\left(\varphi_{y}\right)_{Q}\right) \right|dx\right)dy\] \[\leq \int_{|y|<\delta}\left\|\varphi-\varphi_{y}\right\|_{{}_{*}}P_{t }(y)dy+2\left\|\varphi\right\|_{{}_{*}}\int_{|y|>\delta}P_{t}(y)dy.\] The first term is clearly less than \(\varepsilon\). For the second term, a simple calculation yields that \[\int_{|y|>\delta}P_{t}(y)dy =\int_{|y|>\delta}\frac{c_{n}t}{(t^{2}+|y|^{2})^{\frac{n+1}{2}}} dy\leq 2^{\frac{n+1}{2}}\int_{|y|>\delta}\frac{c_{n}t}{(t+|y|)^{n+1}}dy\] \[=2^{\frac{n+1}{2}}c_{n}t\int_{\delta}^{\infty}\frac{r^{n-1}}{(t+r )^{n+1}}dr=2^{\frac{n+1}{2}}c_{n}\left(1-\delta^{n}(\delta+t)^{-n}\right)/n.\] There exists \(t_{0}>0\) such that for this fixed \(\delta\), \(2^{\frac{n+1}{2}}c_{n}\left(1-\delta^{n}(\delta+t)^{-n}\right)/n<\varepsilon\) for all \(0<t<t_{0}\). Therefore, for any \(\varepsilon>0\), there exists \(t_{0}>0\) such that for any \(0<t<t_{0}\), \[\left\|\varphi(x)-\varphi(x,t)\right\|_{{}_{*}}<\varepsilon+2\left\|\varphi \right\|_{{}_{*}}\varepsilon.\] Now, we prove (3) \(\Rightarrow\) (4). 
Do some elementary computation, we get the identity \[\left|\nabla_{x}P_{t}(x-y)\right|=\frac{(n+1)P_{t}(x-y)|x-y|}{t^{2}+|x-y|^{2}}\] Thanks to the mean value inequality, we have \[t\left|\nabla_{x}P_{t}(x-y)\right|\leq\frac{n+1}{2}P_{t}(x-y). \tag{5.9}\] It is not hard to see that \[\nabla_{x}\varphi(x,t)=\int_{\mathbb{R}^{n}}(\varphi(y)-\varphi(x,t))\nabla_ {x}P_{t}(x-y)dy.\] By this and inequality (5.9), we obtain \[t\left|\nabla_{x}\varphi(x,t)\right| \leq\int_{\mathbb{R}^{n}}t\left|\varphi(y)-\varphi(x,t)\right| \left|\nabla_{x}P_{t}(x-y)\right|dy \tag{5.10}\] \[\leq\frac{n+1}{2}\int_{\mathbb{R}^{n}}\left|\varphi(y)-\varphi(x, t)\right|P_{t}(x-y)dy\] In order that (4) hold, we need the following estimate. \[\int_{\mathbb{R}^{n}}\left|\varphi(y)-\varphi(x,t)\right|P_{t}(x-y)dy\leq C(n) \left\|\varphi\right\|_{*}.\] Indeed, set \(B_{k}=\left\{y:|x-y|<2^{k}t\right\}\), \(k=0,1,\cdots\), then \[P_{t}(x-y) \leq\frac{c_{n}}{t^{n}},\quad y\in B_{0}\] \[P_{t}(x-y) \leq\frac{c_{n}}{2^{k(n+1)}t^{n}},\quad y\in B_{k+1}\backslash B_ {k},\ k=0,1,\cdots.\] We handle the integral into two parts and get \[\int_{\mathbb{R}^{n}}\left|\varphi(y)-\varphi(x,t)\right|P_{t}(x- y)dy\] \[\leq \int_{\mathbb{R}^{n}}\left|\varphi(y)-\varphi_{B_{0}}\right|P_{t }(x-y)dy+\left|\varphi(x,t)-\varphi_{B_{0}}\right|\] \[\leq 2\int_{\mathbb{R}^{n}}\left|\varphi(y)-\varphi_{B_{0}}\right|P_ {t}(x-y)dy.\] Here \(\varphi_{B_{k}}=\frac{1}{|B_{k}|}\int_{B_{k}}\varphi(y)dy\). Divide \(\mathbb{R}^{n}\) into the union of \(B_{k+1}\backslash B_{k}\), the integrand is nonnegative, it thus follows \[\int_{\mathbb{R}^{n}}\left|\varphi(y)-\varphi_{B_{0}}\right|P_{t} (x-y)dy\] \[\leq \frac{c_{n}}{t^{n}}\int_{B_{0}}\left|\varphi(y)-\varphi_{B_{0}} \right|dy+\sum_{k=0}^{\infty}\frac{c_{n}}{2^{k(n+1)}t^{n}}\int_{B_{k+1} \backslash B_{k}}\left|\varphi(y)-\varphi_{B_{0}}\right|dy\] \[\leq Cc_{n}\|\varphi\|_{*}+Cc_{n}\sum_{k=0}^{\infty}\frac{2^{(k+1)n} }{2^{k(n+1)}}\frac{1}{|B_{k+1}|}\int_{B_{k+1}}\left|\varphi(y)-\varphi_{B_{k+1 }}\right|dy \tag{5.11}\] \[+c_{n}\sum_{k=0}^{\infty}\frac{2^{(k+1)n}}{2^{k(n+1)}}\left| \varphi_{B_{k+1}}-\varphi_{B_{0}}\right|,\] where \(C\) is a universal constant. Note that \[\left|\varphi_{B_{k+1}}-\varphi_{B_{k}}\right| =\frac{1}{|B_{k}|}\int_{B_{k}}\left|\varphi(y)-\varphi_{B_{k+1}} \right|dy\] \[\leq\frac{2^{n}}{|B_{k+1}|}\int_{B_{k+1}}\left|\varphi(y)-\varphi _{B_{k+1}}\right|dy\] \[\leq 2^{n}\left\|\varphi\right\|_{*},\] which implies that \[\left|\varphi_{B_{k+1}}-\varphi_{B_{0}}\right|\leq\sum_{j=0}^{k}\left|\varphi _{B_{j+1}}-\varphi_{B_{j}}\right|\leq(k+1)2^{n}\left\|\varphi\right\|_{*}. \tag{5.12}\] From inequalities (5.11) and (5.12), we obtain \[\int_{\mathbb{R}^{n}}\left|\varphi(y)-\varphi_{B_{0}}\right|P_{t} (x-y)dy\] \[\leq Cc_{n}\left\|\varphi\right\|_{*}+Cc_{n}\sum_{k=0}^{\infty}\frac{ 2^{n}}{2^{k}}\left\|\varphi\right\|_{*}+c_{n}\sum_{k=0}^{\infty}\frac{(k+1)2^ {2n}}{2^{k}}\left\|\varphi\right\|_{*}\leq C(n)\left\|\varphi\right\|_{*},\] which gives us the desired estimate. Combine this and inequality (5.10), we know that \[t\left|\nabla_{x}\varphi(x,t)\right|\leq C(n)\left\|\varphi\right\|_{*}.\] Therefore, for any fixed \(t>0\), \(\varphi(x,t)\) is uniformly continuous with respect to \(x\). It is obvious from the proof of Theorem 1.3 that if \(\varphi\in\mathrm{BMO}(\mathbb{R}^{n})\), \(\varphi-\varphi(x,t)\in\mathrm{BMO}(\mathbb{R}^{n})\), and hence \(\varphi(x,t)\in\mathrm{BMO}(\mathbb{R}^{n})\). 
Accordingly, for any given fixed \(t\), \(\varphi(x,t)\in\mathrm{BMO}(\mathbb{R}^{n})\cap\mathrm{UC}(\mathbb{R}^{n})\). Since \(\lim\limits_{t\to 0}\left\|\varphi(x)-\varphi(x,t)\right\|_{*}=0\), then \(\varphi(x)\in\overline{\mathrm{BMO}(\mathbb{R}^{n})\cap\mathrm{UC}(\mathbb{R} ^{n})}\). So we have proved \((3)\Rightarrow(4)\). Finally, we shall show that \((4)\Rightarrow(1)\). Assume \(\varphi\in\mathrm{BMO}(\mathbb{R}^{n})\cap\mathrm{UC}(\mathbb{R}^{n})\), that is to say, for any given \(\varepsilon>0\), there exists \(\delta>0\) such that for all \(y\in Q=\left\{y:\left|x-y\right|<\delta\right\}\), we have \(\left|\varphi\left(y\right)-\varphi\left(x\right)\right|<\varepsilon\). It clearly follows that \[\left|\varphi_{Q}-\varphi(x)\right|=\left|\frac{1}{\left|Q\right|}\int_{Q} \varphi(y)dy-\varphi\left(x\right)\right|\leq\frac{1}{\left|Q\right|}\int_{Q} \left|\varphi(y)-\varphi(x)\right|dy<\varepsilon.\] Thus \[\frac{1}{\left|Q\right|}\int_{Q}\left|\varphi(x)-\varphi_{Q}\right|dx<\varepsilon,\] which implies \(\varphi\in\mathrm{VMO}(\mathbb{R}^{n})\) and hence \(\mathrm{BMO}(\mathbb{R}^{n})\cap\mathrm{UC}(\mathbb{R}^{n})\subset\mathrm{ VMO}(\mathbb{R}^{n})\). Since \(\mathrm{VMO}(\mathbb{R}^{n})\) is a closed subspace of \(\mathrm{BMO}(\mathbb{R}^{n})\) as we mentioned before, we finished the proof here. ### Proof of Theorem 1.4 This section mainly focuses on the proof of Theorem 1.4. We assume first that \(\varphi\in\mathrm{VMO}(\mathbb{R}^{n})\). By Theorem 2.3, there exist \(\varphi_{j}^{(1)}\in L^{\infty}(\mathbb{R}^{n})\) satisfying \[\varphi(x):=\varphi^{(1)}(x)=\varphi_{0}^{(1)}(x)+\sum_{j=1}^{n}R_{j}\left( \varphi_{j}^{(1)}\right)(x),\ j=0,\cdots,n.\] According to Remark 2.4, there also exists a constant \(C\) independent on \(\varphi\) such that \[\left\|\varphi_{0}^{(1)}\right\|_{\infty}+\sum_{j=1}^{n}\left\|\varphi_{j}^{( 1)}\right\|_{\infty}\leq C\left\|\varphi^{(1)}\right\|_{*}=C\left\|\varphi \right\|_{*}.\] With the help of Theorem 5.1, there exists \(t_{1}>0\) such that \[\left\|\varphi-P_{t_{1}}*\varphi\right\|_{*}<\frac{\left\|\varphi\right\|_{*}} {2}.\] Set \(\tilde{\varphi}^{(1)}(x)=P_{t_{1}}*\varphi(x)\), \(\tilde{\varphi}_{j}^{(1)}(x)=P_{t_{1}}*\varphi_{j}^{(1)}(x)\), we have \(\tilde{\varphi}_{j}^{(1)}\in\mathrm{BUC}(\mathbb{R}^{n})\), \(j=0,\cdots,n\). Since the Riesz transform is invariant under translation, it commutes with convolution operators. Hence \[\tilde{\varphi}^{(1)}(x)=\tilde{\varphi}_{0}^{(1)}(x)+\sum_{j=1}^{n}R_{j} \left(\tilde{\varphi}_{j}^{(1)}\right)(x).\] Similarly, let \(\varphi^{(2)}(x)=\varphi(x)-P_{t_{1}}*\varphi(x)=\varphi(x)-\tilde{\varphi}^{ (1)}(x)\), we have \(\left\|\varphi^{(2)}(x)\right\|_{*}<\frac{\left\|\varphi\right\|_{*}}{2}\). Due to Theorem 1.3, we have every reason to believe that \(\varphi^{(2)}\in\mathrm{VMO}(\mathbb{R}^{n})\). 
Thus there exist \(\varphi_{j}^{(2)}\in L^{\infty}(\mathbb{R}^{n})\) satisfying \[\varphi^{(2)}(x)=\varphi_{0}^{(2)}(x)+\sum_{j=1}^{n}R_{j}\left(\varphi_{j}^{(2) }\right)(x),\ j=0,\cdots,n.\] Similarly, we can obtain \[\left\|\varphi_{0}^{(2)}\right\|_{\infty}+\sum_{j=1}^{n}\left\|\varphi_{j}^{(2 )}\right\|_{\infty}\leq C\left\|\varphi^{(2)}\right\|_{*}<\frac{\left\|\varphi \right\|_{*}}{2}.\] and there exists \(t_{2}>0\) such that \[\left\|\varphi^{(2)}-P_{t_{2}}*\varphi^{(2)}\right\|_{*}<\frac{\left\|\varphi ^{(2)}\right\|_{*}}{2}<\frac{\left\|\varphi\right\|_{*}}{2^{2}}.\] Define \(\tilde{\varphi}^{(2)}(x)=P_{t_{2}}*\varphi^{(2)}(x)\), it follows \(\tilde{\varphi}_{j}^{(2)}(x)=P_{t_{2}}*\varphi_{j}^{(2)}(x)\in\mathrm{BUC}( \mathbb{R}^{n})\), \(j=0,\cdots,n\). From the above argument, we know that \[\tilde{\varphi}^{(2)}(x)=\tilde{\varphi}_{0}^{(2)}(x)+\sum_{j=1}^{n}R_{j} \left(\tilde{\varphi}_{j}^{(2)}\right)(x).\] Repeating this process, it is perfectly clear that \(\tilde{\varphi}_{j}^{(k)}\in\mathrm{BUC}(\mathbb{R}^{n})\), \(j=0,\cdots,n\), \(k=1,2,\cdots\), and \[\left\|\tilde{\varphi}_{0}^{(k)}\right\|_{\infty}+\sum_{j=1}^{n}\left\|\tilde {\varphi}_{j}^{(k)}\right\|_{\infty}\leq\left\|\varphi_{0}^{(k)}\right\|_{ \infty}+\sum_{j=1}^{n}\left\|\varphi_{j}^{(k)}\right\|_{\infty}\leq\frac{C}{2^ {k-1}}\left\|\varphi\right\|_{*}. \tag{5.13}\] Therefore, \[\varphi(x) =\sum_{k=1}^{\infty}\tilde{\varphi}_{0}^{(k)}(x)+\sum_{j=1}^{n}R _{j}\left(\sum_{k=1}^{\infty}\tilde{\varphi}_{j}^{(k)}\right)(x)\] \[: =\varphi_{0}(x)+\sum_{j=1}^{n}R_{j}\left(\varphi_{j}\right)(x).\] Combined with Remark 2.4 and the fact (5.13), we easily see that \[\left\|\sum_{k=1}^{\infty}\tilde{\varphi}_{j}^{(k)}\right\|_{\infty}\leq\sum _{k=1}^{\infty}\left\|\tilde{\varphi}_{j}^{(k)}\right\|_{\infty}\leq C\sum_{ k=1}^{\infty}\frac{\left\|\varphi\right\|_{*}}{2^{k-1}},\ j=0,1,\cdots,n. \tag{5.14}\] By inequality (5.14), it is clear to us that the convergence is uniform and \(\varphi_{j}(x)\) is bounded. So we get \[\varphi_{j}(x)=\sum_{k=1}^{\infty}\tilde{\varphi}_{j}^{(k)}(x)\in\mathrm{BUC }(\mathbb{R}^{n}),\ j=0,1,\cdots,n.\] This concludes the statement (2). Next, we are going to show \((2)\Rightarrow(1)\). Assume that there exists \(\varphi_{j}\in\mathrm{BUC}(\mathbb{R}^{n})\) such that \[\varphi(x)=\varphi_{0}(x)+\sum_{j=1}^{n}R_{j}(\varphi_{j})(x),\ j=0,\cdots,n.\] From Theorem 5.1, it is easy to see \(\mathrm{BUC}(\mathbb{R}^{n})\subset\mathrm{VMO}(\mathbb{R}^{n})\). So what remains to show is that for any \(\varphi\in\mathrm{BUC}(\mathbb{R}^{n})\), we have \(R_{j}(\varphi)\in\mathrm{VMO}(\mathbb{R}^{n})\). For any \(\varepsilon>0\), there exists \(\delta>0\) such that for any \(|y|<\delta\), we then have \[\left\|\varphi-\varphi_{y}\right\|_{\infty}<\varepsilon.\] By Theorem 2.4, \(R^{j}\) is a bounded linear operator from \(L^{\infty}\) to BMO, thus \[\left\|R_{j}(\varphi)-R_{j}(\varphi_{y})\right\|_{*}=\left\|R_{j}(\varphi- \varphi_{y})\right\|_{*}\leq C\left\|\varphi-\varphi_{y}\right\|_{\infty}<C\varepsilon.\] According to Theorem 5.1 again, we obtain that \(R_{j}(\varphi)\in\mathrm{VMO}(\mathbb{R}^{n})\), which completes the proof.
2307.00088
Redeeming Data Science by Decision Modelling
With the explosion of applications of Data Science, the field has come loose from its foundations. This article argues for a new program of applied research in areas familiar to researchers in Bayesian methods in AI that are needed to ground the practice of Data Science by borrowing from AI techniques for model formulation that we term ``Decision Modelling.'' This article briefly reviews the formulation process as building a causal graphical model, then discusses the process in terms of six principles that comprise \emph{Decision Quality}, a framework from the popular business literature. We claim that any successful applied ML modelling effort must include these six principles. We explain how Decision Modelling combines a conventional machine learning model with an explicit value model. To give a specific example we show how this is done by integrating a model's ROC curve with a utility model.
John Mark Agosta, Robert Horton
2023-06-30T19:00:04Z
http://arxiv.org/abs/2307.00088v1
# Redeeming Data Science by Decision Modelling ###### Abstract With the explosion of applications of Data Science, the field has come loose from its foundations. This article argues for a new program of applied research in areas familiar to researchers in Bayesian methods in AI that are needed to ground the practice of Data Science by borrowing from AI techniques for model formulation that we term "Decision Modelling." This article briefly reviews the formulation process as building a causal graphical model, then discusses the process in terms of six principles that comprise _Decision Quality_, a framework from the popular business literature. We claim that any successful applied ML modelling effort must include these six principles. We explain how Decision Modelling combines a conventional machine learning model with an explicit value model. To give a specific example we show how this is done by integrating a model's ROC curve with a utility model. ## 1 Introduction Data Science suffers from its own success, having seen such rapid adoption across so many fields, in so many different ways, that it has lost its principled theoretical foundation. Rational choice as studied in Decision Theory forms a foundation for all analytic fields, which applies no less to Data Science. Many of the ways that Data Science practice falls short should be apparent to anyone versed in this Theory, especially to anyone in our field of Bayesian AI. The field of Data Science is fluid and evolving rapidly, and defies a concise definition. In contrast, mathematical modeling techniques, especially as they have been adopted in Artificial Intelligence, are mature, and can put Data Science on a firm footing. The imperative to better merge Data Science with well understood concepts from Decision Theory should expand the power and scope of its methods. At the same time, the disruptive ubiquity of software and the scale of data generation, in combination with networked hardware platforms--"The Cloud"--create a new opportunity for AI. This fits into larger social concerns such as _the future of work_ (Acemoglu [2002]) that are drawn into stark relief by the transformation organizations are undergoing due to these software innovations, popularly referred to as "Digital Transformation." This article argues for a new program of applied research in areas familiar to researchers in Bayesian methods in AI that are needed to ground the practice of Data Science. The article organizes theoretical principles using a framework from the business literature around the list of concepts that comprise _Decision Quality_. This framework has proved useful in Decision Analysis practice to connect the practice to the principles that support it (Spetzler et al. [2016]). We build on these six concepts to clarify the process of building a decision model. This article briefly reviews the formulation process as one of building a causal model in Section 2, then discusses the process in terms of Decision Quality in Section 3, with an example that shows how conventional ROC analysis fits within this framework. For the causal model we use a directed acyclic graph (DAG)--a Bayes network with added decision and value nodes--that goes by various names: "Influence Diagrams" (Koller and Friedman [2009]) or "Decision Graphs" (Jensen [2001]), among others. We claim that any applied ML modelling effort must include these six principles. In some cases this is obvious, but often it reveals insights into flaws in the model. 
Notably, those building Data Science models often pay homage to the need to "understand the business context" but can rarely explain how to go about it. As a specific example of integrating a value model with a predictive model, we show how this can be applied to a predictive model's ROC curve. ## 2 Formulating a decision model In simple terms, a machine learning (ML) model predicts an event (Did the customer churn?) or a quantity (What will be product demand next month?) conditioned on a set of observed features. Designating the outcome--the predictive model's target variable--as \(S\), and the vector of features as \(\boldsymbol{F}\), the learned model can be written \(\mathsf{P}(S\,|\,\boldsymbol{F})\). The model provides a distribution over the outcome that "informs" a decision made when knowing the features. The way this is applied is to look up the features \(\boldsymbol{f}\) for one case, say the data collected for a customer's purchase history, and compute the probability of the outcome, say their next purchase, as \(\mathsf{P}(S\,|\,\boldsymbol{f})\), conditional on \(\boldsymbol{f}\). Although referred to as a "prediction", an ML model may just as well infer a current unobserved state, such as the root-cause of a failure. In all cases the prediction of the model is an uncertain variable. _One needs to be careful to note that the model does not predict the value of the outcome; value and probability need to be distinguished._ A value model expresses preferences over predicted outcomes, and hence it needs to be combined with the prediction to arrive at a value. ### Decision modelling Once a set of alternatives that comprise a decision has been created, the combination of a value model with a predictive ML model creates the _decision model_. A necessary and often overlooked step that precedes the data engineering and model training tasks in Data Science is to properly formulate the model as derived from the decision it is intended to support. We argue for the primacy of starting by identifying the decision in terms of how alternatives interact with values, as opposed to the conventional approach of starting with available data. A relevant model implies there is an identified action from a set of choices that are predicted to have a desired effect. If this is not the case then, from an applied point of view, what is the point? A _decision_ refers to making a choice from a set of alternatives, evident as a tangible change at a point in time, in anticipation of the outcomes it precedes. Colloquially one may speak of "deciding on one's values", or of thinking of a personal resolution as a "decision" to reform one's behavior. That's not the sense with which we use the word. However, incidentally, to resolve one's behavior in such a sense, one may well engage in decision modelling. We are most interested in the case where the outcomes are uncertain, and where the outcomes of interest, by which the best choice will be determined, are linked by a chain of cause and effect from the decision to the eventual outcome. ### The decision-maker Having abstracted the modeling task as one of modeling a decision, there is another abstraction--the question of the _decision-maker_. We apply this term to anchor the model to an individual's choice. "Individual" may refer to the person for whom the model is built, to a class of users for a decision automated by software, or even to a choice made by an organization. 
It is with the _decision-maker_ in mind that one identifies the alternatives to be modelled and how the uncertain dynamics play out (the model's predictions), and determines the values of a relevant set of outcomes. Once we consider automation, it is no longer a solitary decision; rather, we are making changes to a decision-making process. The decision could be the response to a recommendation made by the model, as is typical of e-commerce applications. Or it could be the automation of an existing business process. This uncovers a third dimension--of organizational improvements that follow necessarily from the model. ### Causal models as the canonical formulation tool Of course decision variables are not the only ones that make up a model. The definition of the set of variables that make up the model determines its scope. The modeling process begins by setting the scope to ascertain which variables--quantities susceptible to measurement--to include. We partition these into three types: _choice_ variables that make up a decision, _uncertainties_ that describe the world, and _values_ that quantify outcomes. There is a first-class distinction between variables that represent 1.) uncertainties as probabilities, 2.) decisions as sets of alternatives, and 3.) outcomes as valued by a utility measure. This partition is both necessary and sufficient to formulate a model. A glaring lack of most "plug and play" ML approaches is that they only deal with the probabilistic aspect, and sometimes not even that. By formulating an influence diagram one creates a structural, causal prior for the model, and defines the inputs and outputs for both the ML and value models. The decision model can be formulated as an influence diagram from these nodes: * The causal network describing the unobserved state \(\boldsymbol{S}\). These are variables that describe uncertainties relevant to the outcome. * Variables \(\boldsymbol{F}\) that condition other variables in the model. We partition them into: * Those that convey information, \(\boldsymbol{I}\), meaning they are known when the decision is made, and * Decision variables \(\boldsymbol{d}\); those controlled by the decision-maker. Since \(\mathbf{F}=\mathbf{I}\cup\mathbf{d}\), both can be inputs to the prediction model. * The value \(v(\mathbf{S},\mathbf{d})\) is a function of the outcome \(\mathbf{S}\) and the decision \(\mathbf{d}\). As an example of a decision model, Figure 1 shows an influence diagram for the case where the information used to make the prediction is known when the decision is made. A typical example is making a purchase recommendation: knowing a description of the recipient, one makes an informed decision using the prediction \(\mathsf{P}(S\,|\,i)\). As shown here for a discriminative ML model, the conditioning arc goes from \(i\) to \(s\). A Bayesian may prefer to learn a generative model with the arc reversed, then apply Bayes rule to solve the diagram. In practice the state \(\mathbf{s}\) may be a network of possibly hundreds of uncertain nodes, connected with a sequence of decision and value nodes (the decision nodes required to be totally ordered) to form a DAG. Any influence diagram can be unrolled into a tree, by assuming a total ordering of the DAG, but this soon becomes unwieldy, and the causal claims in the diagram are lost. 
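To make this formulation concrete, here is a minimal sketch, in Python, of a toy decision model of this form: a discrete state \(S\), one information variable \(I\), a binary decision \(d\), a stand-in for the learned model \(\mathsf{P}(S\,|\,I)\), and a value function \(v(S,d)\). Solving the diagram (formalized in Section 2.3.2 below) amounts to choosing, for each observed \(i\), the alternative with the highest expected value. All probabilities and payoffs here are assumed for illustration only.

```python
# Toy decision model: binary state S (0 = bad, 1 = good), binary decision d
# (0 = reject, 1 = accept), and one observed information variable I with
# three possible readings. P_S_given_I stands in for a trained ML model.
P_S_given_I = {            # observed i -> P(S = 1 | i); assumed values
    "low":    0.10,
    "medium": 0.55,
    "high":   0.90,
}

def value(s: int, d: int) -> float:
    """Value v(S, d): illustrative payoffs for accept/reject vs. good/bad."""
    if d == 0:                         # reject: no gain, no loss
        return 0.0
    return 5.0 if s == 1 else -20.0    # accept: gain on good, larger loss on bad

def optimal_policy():
    """For each observation i, pick the decision with the highest expected value."""
    policy = {}
    for i, p_good in P_S_given_I.items():
        expected = {d: p_good * value(1, d) + (1 - p_good) * value(0, d)
                    for d in (0, 1)}
        policy[i] = max(expected, key=expected.get)
    return policy

print(optimal_policy())   # e.g. {'low': 0, 'medium': 0, 'high': 1}
```

Note that the policy is a lookup table over the observed information, which is exactly the form of \(d(\mathbf{I})\) discussed below.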
For an application of this influence diagram to an operational example, consider the binary-valued acceptance test of a widget on a production line.1 The decision is to accept widgets inferred to be good and reject those bad. One or more test measurements are made of the widget; this information is used to predict a value by which to classify the state of the widget. The decision is made by minimizing the cost of the relative errors of rejecting a good widget (a false negative) and passing a bad one (a false positive), by thresholding the output of the classifier. The decision rule is simply to reject if the predicted value is less than the threshold and accept otherwise. The rule is determined by ROC ("receiver operating characteristic") analysis. Footnote 1: This example is borrowed from Kruchten (2016). #### 2.3.1 Causal Models Discovery One of the powerful tools in the Bayesian toolbox are network structure learning tools, originating with early work by Heckerman et al. (1999) and Spirtes et al. (1991). Current advances such as "no tears" makes it possible to extend this to continuous variables with non-linear effects (Geffner et al. (2022)). However the placement of the decision and value nodes in the graph are up to the formulation of the decision and not discoverable by learning network structure. #### 2.3.2 Optimizing a Decision model The influence diagram implies the sequence of computational steps to determine the decision policy \(d(\mathbf{I})\). "Solving" the decision model of the form shown in Figure 1 reduces to an optimization (here written as maximization), \[V(\mathbf{I})^{*}=\max_{d(\mathbf{I})}E_{\mathsf{P}(\mathbf{S}\,|\,\mathbf{I})}[V(d(\mathbf{I}), \mathbf{S})] \tag{1}\] where each variable may represent a set of nodes in the causal model. Since the decision is made with knowledge of \(\mathbf{I}\) the decision policy becomes \(d(\mathbf{I})\) and the value function becomes \(V(d(\mathbf{I}),\mathbf{S})\), which becomes a function of the observed features (think a lookup table). Note how machine learning model \(\mathsf{P}(\mathbf{S}\,|\,\mathbf{I})\) is embedded in the decision model. Written out, the equation says to take expectation over the predictive distribution of the ML model, conditional on the observed features, then for each combination of features make the choice with the highest expected value. 'Solving" the decision model to determine the best choice means finding the policy \(d(\mathbf{I})\) that maximizes this expression, given the prediction and value models. This equation applies generally for any data science application. ## 3 Decision Quality Principles We discuss each of these principles as they apply to decision models in data science. * **Create Alternatives** Distinguish decision variables under one's control. * **Appropriate Frame** Formulate the right problem. * **Relevant and Reliable Information** Determine the information structure. * **Clear Values and Tradeoffs** Quantify the utility of outcomes that determine decisions. * **Sound Reasoning** Apply a valid calculus to solve the model. * **Commitment to Action** Give ownership to the decision-maker. Figure 1: Influence diagram with a predictive model for making an informed decision. In the following sections we define each principle, relate it to the theory that supports it, explain why it is needed, and relate it to an example of the decision to invest in an ML model. 
### Identify Decisions In machine learning, as in statistics, some features are under the decision maker's control--called _treatments_--while others are uncertain characteristics of the environment, sometimes confusingly called "controls". Both make up the features that are inputs to the predictive model. Confusion of these two kinds of inputs in machine learning models can lead to perverse policies. The naive use of predictive models tends to confuse the decision recommendation with the model prediction. An obvious example arises in sales "propensity scoring" applications; those that attempt to predict the success of completing a sale based on the product, customer, and economic features. Viewing the prediction as a recommendation confuses the probability generated by the model with the salesperson's decision. As mentioned, in a decision model the salesperson's decision is a variable, in this case an action the salesperson takes to influence the outcome. Consequently, in use, salespeople were confused: does a high "propensity to close" mean the sale can be left to its own devices since its success is inevitable, or does it mean that more effort needs to be applied to it? The confusion arises because a propensity model does not include the decision explicitly. The example we present demonstrates how the choice of which ML model to apply--one often relegated to irrelevant measures of model performance--can be framed as an investment decision using an influence diagram. We consider the off-line analysis from which two candidate models are built, each described by its ROC curve. These, together with the default option to not use an ML model, make up the three options shown in the summary tree in Figure 2. ### Formulation Solving the right problem means not confusing the model and the data with the actual phenomenon. Aside from the obvious question of data quality, by virtue of the causal model one can check whether the causal structure, decisions, and value model correspond to the real world. The data often have a physical origin, but the other aspects are derived from subjective factors, often elicited from "domain experts" or other problem stakeholders, using techniques borrowed from Human-centered design. In our example, we extend the influence diagram in Figure 1 by pre-pending the model choice investment node \(m\) shown in Figure 2, to create the model investment choice diagram in Figure 3. The model choice changes the predictive model \(\mathsf{P}(\boldsymbol{S}\,|\,\boldsymbol{I},m)\) and, by design, introduces an investment cost term in the value function. The model choice is an input and hence is known at the time of the operating decision. These three influences are shown by the three arcs that emanate from the investment node. Essentially there are three replicates of the previous influence diagram in Figure 1, each returning a value \(V(\boldsymbol{I},m)^{*}\). The optimal model choice is simply the maximum over these values: \[V^{*}=\max_{m}E_{\mathsf{P}(\boldsymbol{I})}[V(\boldsymbol{I},m)^{*}] \tag{2}\] As a Bayesian representation, the model choice influence diagram also requires that we impute a distribution \(\mathsf{P}(\boldsymbol{I})\) over the information to be observed. In common practice this distribution is simply given by the empirical distribution of the model test set, but a complete Bayesian approach allows one to adjust this if its distribution is different in the domain where it is applied. 
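The sketch below is one rough way Equation (2), with Equation (1) as its inner step, could be evaluated in Python: for each candidate model \(m\) (including a "do nothing" default), solve the inner optimization for every case drawn from an empirical stand-in for \(\mathsf{P}(\boldsymbol{I})\), average the resulting values, subtract an assumed investment cost, and take the maximum over \(m\). The candidate models, noise levels, and costs are hypothetical placeholders, not the options behind Figure 2.

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = rng.uniform(0.05, 0.95, size=1000)     # stand-in for the empirical P(I)

def value(s, d):                                 # same toy value model as before
    return 0.0 if d == 0 else (5.0 if s == 1 else -20.0)

# Candidate "models": each maps a case to a predicted P(S=1 | I), with a cost.
# Hypothetical: option 1 = no model, option 2 = noisy, option 3 = sharper.
candidates = {
    "option_1_no_model": (lambda p: np.full_like(p, p.mean()), 0.0),
    "option_2_simple":   (lambda p: np.clip(p + rng.normal(0, 0.25, p.size), 0, 1), 1.0),
    "option_3_advanced": (lambda p: np.clip(p + rng.normal(0, 0.10, p.size), 0, 1), 3.0),
}

def expected_value(predict, invest_cost):
    preds = predict(p_true)
    best_per_case = []
    for p_hat, p in zip(preds, p_true):
        # inner max over d of E[V(d, S)] under the *predicted* distribution (Eq. 1)
        evs = [p_hat * value(1, d) + (1 - p_hat) * value(0, d) for d in (0, 1)]
        d_star = int(np.argmax(evs))
        # realized expected value of that choice under the true outcome distribution
        best_per_case.append(p * value(1, d_star) + (1 - p) * value(0, d_star))
    return float(np.mean(best_per_case)) - invest_cost

scores = {m: expected_value(f, c) for m, (f, c) in candidates.items()}
print(max(scores, key=scores.get), scores)       # best option per Eq. (2)
```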
Figure 2: The initial investment node with a choice of two alternative ML models and the default to do nothing. Option 2 is a simple, low investment choice. Option 3 is more advanced and correspondingly higher investment. Figure 3: Influence diagram with the added "offline" model choice decision. ### Relevant Information The structure of the decision model can be modified to answer Value of Information questions. Information value is simply the difference in expected value between the model with and without an arc that conditions a decision variable. If in the ML model the variable's feature importance is negligible, then that conditioning arc will create no information value; however, information value brings in an additional consideration: is the value function sensitive to the state affected by the feature? We may consider the model choice variable as analogous to a Value of Information choice where the "information" is the quality of the ML model employed. Hence we have a method that chooses a model directly on its expected value instead of on an indirect measure of accuracy. We show below how this is done in practice in this example of ROC curve analysis. ### Outcome Values There is an inescapable duality between probability and value. Every predicted outcome has two aspects: a predicted probability of occurrence and its value. A value model maps outcomes into quantifiable values. Utility theory shows us that any consistent set of preferences can be expressed by a utility function. Utility theory, and its application to value modelling in different domains, is a field complementary to probability modelling. In the basic case, outcomes can be valued in monetary units to which risk and time preference can be applied. But what if there are multiple outcome variables, each valued differently? For example, I can keep sick patients in the hospital longer to assure their cure, but at the risk of running out of hospital beds if hospital admissions increase. Perhaps the hardest part of building a value model is the necessity of modelling trade-offs between competing outcomes, and coming up with a weighting that reduces multiple values to a common scale. The key point is that it is more important to include all factors that determine value, including "intangibles" that require judgment and cannot be measured with high accuracy. Better a model that is inclusive than one that avoids important factors on the presumption that they are too hard or subjective to measure. Our example illustrates how trade-offs are made in the context of the "second order" choice of deciding which predictive model to incorporate in a decision model. The decision maker is the data scientist formulating a decision model; we are applying Decision Quality to the modelling process itself. A flawed practice is to choose just the model with the best test accuracy. This only makes sense in the limit of a perfect model. Otherwise the choice of the model in our example is carried out by ROC analysis. It should depend on the model error rates, of which a binary classifier has two: a _false positive rate_ (FPR) and a _false negative rate_ (FNR). In addition, the model choice cannot be made without considering the utility tradeoff between the two error rates. This in turn determines a threshold: the operating point with the optimal tradeoff between error rates. Once one estimates the value gained by employing a model (or not), it can be compared with the investment cost for that option, to determine if the option makes sense in total. 
The model choice is made by deriving each model's ROC curve. In a few words, an ROC curve plots the true positive rate (TPR) against the false positive rate (FPR) of a binary classifier as parameterized by a threshold. The ROC curve is built by running the trained classifier on supervised test data. To determine the optimal operating point one needs a value function expressed as a unit cost for both false positive and false negative errors (and possibly also the costs for correct classification, if not zero). The optimal point--the point with highest utility--occurs where the iso-utility line is tangent to the ROC curve. Assuming the ROC curve is convex upward, this point is unique. Figure 4: ROC curves that express the value of using each of the two models. The two panels in Figure 4 show the ROC curves for the two different predictive models. The colored background shows the utility for each point in the plot. The point on the ROC curve where the utility is highest is indicated by a green spot, and the payoff at that point is shown above the plot, as are the model score threshold, the TPR, and the FPR. The images in panels (a) and (b) are screenshots from our R Shiny app2, made using a particular set of inputs. Footnote 2: [https://ml4managers.shinyapps.io/ML_utility/](https://ml4managers.shinyapps.io/ML_utility/) Under these conditions, the highest value on the diagonal is zero, at the origin. This means that without a way to select widgets with above average probability of being good, the expected value of selling widgets is negative, and your best bet is to not sell any (giving an expected value of zero for the 'status quo', Option 1). The line of indifference for the value at the origin is indicated by the dashed green and black line on the left edge of the figure. Only ML models whose ROC curves cross this line have value greater than not using an ML model at all; note that in this example our ROC curves do cross the line, but just barely. Panel (a) shows the ROC curve for a simple binary rule based on a single data feature (Option 2); cases meeting this rule have a high proportion of positives, but only a small fraction of the good widgets are detected this way. Such rules may have low implementation costs, however. Panel (b) shows the ROC curve for a more sophisticated ML model (Option 3, in this case a Random Forest model). The value is slightly higher than that of the simple rule of Option 2, but be aware that because this model depends on multiple data features, its cost of implementation and operation are expected to be higher as well. The choice of model is simply the model--possibly among several--with the highest utility at its optimal threshold as determined by the ROC analysis. This is equivalent to computing the expected value of a model by Equation (1). Then the choice of the model follows by Equation (2). This is the expected utility of a decision using that model in the investment decision model. **The important thing to note is that the evaluation and thus choice of the predictive model depends strongly on the utility function that applies to its errors as well as on any intrinsic property of the model.** For anything short of a perfect model there is no one best model; one model may be better when false positives are less costly, and vice versa. Furthermore, even a model with a higher expected value must be compared based on its development and operational costs. 
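To make the ROC-based selection concrete, the sketch below shows one way the optimal operating point and the expected per-widget value of a classifier could be computed from test data in Python, using sklearn's roc_curve to supply the swept (FPR, TPR, threshold) triples. The prevalence, the payoffs for the four outcomes, and the stand-in classifier scores are assumed numbers, not the values behind Figure 4.

```python
import numpy as np
from sklearn.metrics import roc_curve

def best_operating_point(y_true, scores, v_tp, v_fp, v_fn, v_tn):
    """Sweep ROC thresholds and return (threshold, expected value per case)."""
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    prevalence = y_true.mean()                      # P(widget is good)
    # Expected value per widget at each candidate threshold.
    ev = (prevalence * (tpr * v_tp + (1 - tpr) * v_fn)
          + (1 - prevalence) * (fpr * v_fp + (1 - fpr) * v_tn))
    k = int(np.argmax(ev))
    return thresholds[k], float(ev[k])

# Hypothetical payoffs: accepting a good widget earns 5, accepting a bad one
# loses 20, and rejecting anything is worth 0 (the "do nothing" baseline).
rng = np.random.default_rng(1)
y = rng.binomial(1, 0.3, size=5000)                              # assumed prevalence
score = np.clip(y * 0.2 + rng.normal(0.4, 0.2, y.size), 0, 1)    # stand-in classifier

thr, value_per_widget = best_operating_point(y, score, v_tp=5.0, v_fp=-20.0,
                                             v_fn=0.0, v_tn=0.0)
do_nothing = 0.0                      # Option 1 baseline: reject everything
net = value_per_widget - do_nothing   # compare against investment cost before choosing
print(f"threshold={thr:.2f}  value per widget={value_per_widget:.2f}  net gain={net:.2f}")
```

Comparing the resulting per-case value against the zero value of the "do nothing" option, and then against each candidate's implementation cost, mirrors the comparison expressed by Equations (1) and (2).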
ROC model analysis is part of the off-line predictive model development task, preceding the implementation of the operational decision model, however the formulation of the operational model, specifically its value function must be known for the off-line analysis. Since the best model choice depends on factors that are not intrinsic to the model, i.e. the utilities, and also the base rate "prevalence" of the condition to be classified, one could imagine automating the predictive model selection as conditions change in the primary model. This model selection framework can be extended to classification and regression problems in general. An ROC value model has a natural generalization to multi-valued outcomes as presented in Landgrebe and Duin (2008). The optimization and inference steps will change as the problems change, but their combination as shown by the influence diagram 3 applies uniformly--the influence diagram that expresses the model investment decision problem does not change. ### Sound Reasoning As Bayesians we tend to have a good handle on sound solution methods. Much of the Bayesian literature among statisticians argues for coherent probabilistic reasoning. By the nature of the Bayesian program we are well equipped to assure that a model's claims are valid. One point of contention may be the idea that "truth" in conventional ML practice derives from the testing on a holdout set of data from the set trained on--implicitly this assumes that the model prior distributions are given by the test set empirical distribution. The Bayesian approach offers a way to adjust priors should these domain distributions shift over time. ### commitment Behavioral psychology (Kahneman (2011)) explains why people make irrational choices when outcomes are uncertain and far off. The harder question is often how to create commitment despite people's natural tendencies. Behavioral aspects that determine whether a model is put into practice and its recommendations accepted are of course necessary for its value to be realized. How such commitment is assured, or equivalently what does it take for a model to be accepted brings in a host of concerns outside the Bayesian program. Making a decision has a human, emotional side. Often what is lacking is the decision-maker's understanding of the model; its interpretability. Having a causal basis, as expressed by the influence diagram structure of the model not only makes the model more interpretable, but extends the interpretation to explanations of what in the real world is modelled, not just an interpretation of how the model functions. ## 4 Conclusion The field of decision modelling as an outgrowth of data science and decision theory suggests a program of research that is in its early days. We have proposed an approach that extends conventional ML practice by using influence diagrams to create integrated predictive and value models. We gave an example of making a model selection choice for a binary classifier with a linear utility function. Future work will extend this to other predictive models, and their integration with more general utility models. Data science is now beset by a host of thorny ethical questions about "responsible AI". Perhaps the field would be advanced by a "reverse" modeling approach, with models that recover the decision-maker's true preferences, as proposed in Stuart Russell's book _Human Compatible_.Russell (2019)
2307.16832
Metric@CustomerN: Evaluating Metrics at a Customer Level in E-Commerce
Accuracy measures such as Recall, Precision, and Hit Rate have been a standard way of evaluating Recommendation Systems. The assumption is to use a fixed Top-N to represent them. We propose that median impressions viewed from historical sessions per diner be used as a personalized value for N. We present preliminary exploratory results and list future steps to improve upon and evaluate the efficacy of these personalized metrics.
Mayank Singh, Emily Ray, Marc Ferradou, Andrea Barraza-Urbina
2023-07-31T16:56:08Z
http://arxiv.org/abs/2307.16832v1
# Metric\(\mathbf{\alpha}\)CustomerN: Evaluating Metrics at a Customer Level in E-Commerce ###### Abstract Accuracy measures such as Recall, Precision, and Hit Rate have been a standard way of evaluating Recommendation Systems. The assumption is to use a fixed Top-N to represent them. We propose that median impressions viewed from historical sessions per diner be used as a personalized value for N. We present preliminary exploratory results and list future steps to improve upon and evaluate the efficacy of these personalized metrics. Recommender Systems, Personalization, Fair Evaluation + Footnote †: 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 40 International (CC BY 4.0). Discussion Paper ## 1 Introduction Recommender Systems (RS) are ubiquitous in e-commerce, from serving relevant ads to customers to helping them pick their favorite food. We have been evaluating these RS in the same manner for more than a decade using Metric@N[1, 2]; e.g. _Recall@N_ and N takes a numeric value such as: 1, 5, 100. Evaluating the performance of the system using a static N for all customers misses important nuances in their behavior on the platform[3]. Customer A might only look at the first 5 results on average but Customer B's average is 25. The prevailing industry assumption is that displaying "best" results on top is the optimal solution for an online RS, but this may not hold universally[3]. Some customers might not click on the first result even if it is the most relevant, because they want to "explore" additional results before making a decision. In line with the goal of EvalRS2023[4]; we propose calculating a personalized evaluation metric at _CustomerN_ instead of a static \(N\) termed: _Metric@CustomerN_. One way of calculating _CustomerN_ is to take the median of maximum impression ranks scrolled to in past sessions on the platform. ## 2 Related Work To the best of our knowledge, there are no other texts that discuss the use of a dynamic \(N\) value while calculating accuracy metrics to evaluate a RS. Giobergia[5] introduces "variance agreement" to account for different user interests on a music streaming platform. Chia et al.[6] introduced _RecList_, to standardize behavioral metrics testing, and also introduce data slice-based evaluation. Similarly, Ekstrand et al.[7] break down users by demographic groups to understand if users from different groups obtain the same utility from RS. Kaminskas et al.[8] expand beyond accuracy measures and study the non-accuracy measures such as Diversity, Serendipity, Novelty, and Coverage and discuss their calculation. Sun[9] and Verachtert et al.[10] detail the importance of observing a global timeline while evaluating recommender models. Using impression data in RS improved the relevance of recommended results in [11, 12], we propose incorporating impression data in RS evaluation as well. ## 3 Metric@CustomerN The methodology to calculate _Recall@CustomerN1_ is detailed in the steps below: Footnote 1: Recall is used as an example metric for representation. The same steps can be followed to calculate other similar metrics: Precision, Accuracy, Hit Rate, NDCG, etc. 1. For a customer \(C_{i}\) in a set of customers \(S\) we capture the max impression rank, \(R_{ij}\), scrolled-to in each session \(j\). 2. 
We calculate the median impression position for a customer for sessions browsed in the last \(X\) days: \[N_{i}=\text{median}_{j}(R_{ij}),\quad i\in\{1,\ldots,|S|\},\quad j\in\{1,\ldots,p_{i}\}\] (1) where \(p_{i}\) denotes the number of sessions browsed by customer \(C_{i}\) and \(X\) is decided based on platform and analysis goals. 3. Now we can calculate the recall value for each customer, denoted by _Recall@\(N_{i}\)_. 4. For a summarized view of how the recommendation algorithm performs, we use the average \(Recall@N_{i}\) over all customers on the platform: \[\frac{1}{|S|}\sum_{i=1}^{|S|}Recall@N_{i}\] (2) ## 4 Preliminary Analysis **Figure 1(a)** shows significant variation in CustomerN, supporting the need for diner-specific N. In **Figure 1(b)** and **Figure 1(c)** we observe that as the median impressions go up for a customer, so does the variance of their impressions viewed across sessions. Additionally, the coefficient of variation (CV) is higher for smaller _CustomerN_ and stabilizes for diners with higher median impression views. ## 5 Discussion Based on the findings from [7, 13, 8] and other research on improving the evaluation of RS, it is clear that we are trying to understand how to better explain variability in customer behavior on e-commerce platforms. As a future undertaking we would first compare the performance of popular RS algorithms on public [14, 15, 16] and proprietary datasets using _Metric@CustomerN_. Secondly, using median impressions viewed across all sessions as _CustomerN_ has its limitations because it cannot account for additional variability within the same customer's sessions, as seen in Figure 1. So we would like to segment customer sessions based on their mindset per session using same-session variables, historical activity, demographics, and geographical variables as detailed in [17, 18] and subsequently calculate _CustomerN_ as the median impressions viewed at the Customer-Segment level. Lastly, we will monitor long-term KPIs to validate whether improved _Metric@CustomerN_ correlates with customer satisfaction and lifetime value. ## 6 Conclusion Recent research [11, 12] has shown us that it is extremely valuable to incorporate customer impression data into an RS. Similarly, we propose using impression data to enhance the effectiveness of accuracy-based metrics. In our opinion, this approach has merit and warrants additional work to understand the implications of developing personalized calculations like _Metric@CustomerN_ for RS evaluation. Our preliminary analysis points to the existing variability in customer behavior and to a need for a customer-centric evaluation of accuracy metrics. The methodology described in this paper is just the first step toward building a more personalized evaluation outlook for RS; we look forward to testing it out at EvalRS2023 [4]. ## Acknowledgments We would like to thank Ruonan Ding for laying the foundation for this work and Fan Gong for their advice and feedback on the paper. Figure 1: CustomerN variability across customers with 3+ sessions in the last 90 days on Grubhub. All axes have been normalized by max CustomerN. Bars represent the inter-quartile range of y-axis values in (b) and (c).
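As an illustration of Section 3, the following sketch computes _CustomerN_ and the averaged _Recall@CustomerN_ from per-session impression logs. It is a minimal example rather than code from the paper: the pandas-based layout, the column names (`customer_id`, `max_impression_rank`), and the `relevant`/`recommended` inputs are assumptions made only for illustration.

```python
import pandas as pd

def customer_n(sessions: pd.DataFrame) -> pd.Series:
    """CustomerN = median of the max impression rank scrolled to per session (Eq. 1).

    `sessions` is assumed to have one row per (customer_id, session) with the
    column `max_impression_rank` (R_ij in the paper).
    """
    return (
        sessions.groupby("customer_id")["max_impression_rank"]
        .median()
        .rename("customer_n")
    )

def recall_at_customer_n(relevant: dict, recommended: dict, n_per_customer: pd.Series) -> float:
    """Average Recall@CustomerN over all customers (Eq. 2).

    `relevant[c]` is the set of ground-truth items for customer c and
    `recommended[c]` is a ranked list of item ids for customer c.
    """
    recalls = []
    for c, n in n_per_customer.items():
        top_n = set(recommended.get(c, [])[: int(n)])
        rel = relevant.get(c, set())
        if rel:  # skip customers without ground truth
            recalls.append(len(top_n & rel) / len(rel))
    return sum(recalls) / len(recalls) if recalls else 0.0
```

In practice one would restrict `sessions` to the last \(X\) days before taking the median, exactly as in Eq. (1).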
2309.17058
Imagery Dataset for Condition Monitoring of Synthetic Fibre Ropes
Automatic visual inspection of synthetic fibre ropes (SFRs) is a challenging task in the field of offshore, wind turbine industries, etc. The presence of any defect in SFRs can compromise their structural integrity and pose significant safety risks. Due to the large size and weight of these ropes, it is often impractical to detach and inspect them frequently. Therefore, there is a critical need to develop efficient defect detection methods to assess their remaining useful life (RUL). To address this challenge, a comprehensive dataset has been generated, comprising a total of 6,942 raw images representing both normal and defective SFRs. The dataset encompasses a wide array of defect scenarios which may occur throughout their operational lifespan, including but not limited to placking defects, cut strands, chafings, compressions, core outs and normal. This dataset serves as a resource to support computer vision applications, including object detection, classification, and segmentation, aimed at detecting and analyzing defects in SFRs. The availability of this dataset will facilitate the development and evaluation of robust defect detection algorithms. The aim of generating this dataset is to assist in the development of automated defect detection systems that outperform traditional visual inspection methods, thereby paving the way for safer and more efficient utilization of SFRs across a wide range of applications.
Anju Rani, Daniel O. Arroyo, Petar Durdevic
2023-09-29T08:42:44Z
http://arxiv.org/abs/2309.17058v1
# Imagery Dataset for Condition Monitoring of Synthetic Fibre Ropes ###### Abstract Automatic visual inspection of synthetic fibre ropes (SFRs) is a challenging task in the field of offshore, wind turbine industries, etc. The presence of any defect in SFRs can compromise their structural integrity and pose significant safety risks. Due to the large size and weight of these ropes, it is often impractical to detach and inspect them frequently. Therefore, there is a critical need to develop efficient defect detection methods to assess their remaining useful life (RUL). To address this challenge, a comprehensive dataset has been generated, comprising a total of 6,942 raw images representing both normal and defective SFRs. The dataset encompasses a wide array of defect scenarios which may occur throughout their operational lifespan, including but not limited to placking defects, cut strands, chafings, compressions, core outs and normal. This dataset serves as a resource to support computer vision applications, including object detection, classification, and segmentation, aimed at detecting and analyzing defects in SFRs. The availability of this dataset will facilitate the development and evaluation of robust defect detection algorithms. The aim of generating this dataset is to assist in the development of automated defect detection systems that outperform traditional visual inspection methods, thereby paving the way for safer and more efficient utilization of SFRs across a wide range of applications. Synthetic fibre ropes Condition monitoring Defect detection Remaining useful life Damage diagnostics ## Specifications Table \begin{table} \begin{tabular}{l l} \hline \hline Subject & Computer Vision and Pattern Recognition \\ Specific subject area & Computer vision methods (CVM) for detection of anomalies in synthetic fibre \\ & ropes (SFRs) \\ Type of data & Image (png) \\ How data were acquired & Data was collected using a Basler acA2000 camera with a Basler C11-5020-12M-P Premium 12-megapixel lens. The experimental setup consists of a motor, three sheaves (one sheave for holding weight, two rotation pulley blocks, two wire guide wheels), four Aputure AL-MC RGBWW LED lights, NVIDIA Jetson Nano P3450, and ten SFRs each of length 8 m subjected to a weight of 50 kg. To ensure a comprehensive dataset, images were captured at different motor speeds, enabling the capture of a maximum number of defects. This variation in motor speed helps simulate different operating conditions that the SFRs may experience. The dataset images were captured at a frame rate of 165 frames per second (FPS) and had a resolution of 2000 x 1080 pixels. \\ Data format & Raw \\ Description of data collection & Ten different SFRs each having a length of 8 m have been used for collecting datasets. Nine ropes used for the experiment contain multiple defects of each class ranging between high, medium and low while one rope is a non-defective rope. The dataset includes various classes such as normal, compression, core out, placking (high, low, medium), chafing (high, low, medium), and cut strands (high, low, medium). These classes encompass a wide range of defects commonly found in SFRs. The primary objective of collecting the dataset is to predict and analyze the defects in SFRs to provide enhanced efficiency and reliability in various industries. 
\\ Data source location & Institution: Department of Energy, Aalborg University \\ & City/Region: Esbjerg \\ & Country: Denmark \\ & Repository name: Mendeley Data \\ Data accessibility & Data identification number: 10.17632/by9wy6fxsr (Version 1, Version 2) \\ & Direct URL to data: [https://data.mendeley.com/datasets/by9wy6fxsr](https://data.mendeley.com/datasets/by9wy6fxsr) \\ \hline \hline \end{tabular} \end{table} Table 1: Description of defects and defect class in SFRs dataset ### Value of the Data * SFRs are extensively used in ocean engineering, offshore, wind turbine industries, etc., where safety is of paramount importance. The dataset enables the development of more efficient and reliable defect detection methods, leading to improved safety and risk mitigation. Timely identification of defects can help to prevent potential failures and ensure the ropes are in optimal condition. * Researchers and practitioners can utilize the dataset to develop and evaluate algorithms that can accurately identify and classify various types of defects, such as plackings, cut strands, chafing, compression, core out, and normal. The dataset contains all possible defect scenarios to the best of our knowledge. Also, this is one of the few publicly accessible datasets on SFRs. * This dataset aims to serve as a valuable resource for computer vision applications such as object detection, classification, and segmentation. Researchers can utilize the dataset to train and test algorithms for tasks related to defect detection and condition monitoring (CM) applications. This promotes the development of more accurate and efficient computer vision techniques tailored to the unique challenges of inspecting large, heavy ropes. * This dataset may also encourage the adoption of standardized testing protocols and best practices for the inspection of SFRs across industries. ## 1 Objective Synthetic fibre ropes, such as HMPE (high modulus polyethylene) and UHMWPE (ultra-high-molecular-weight polyethylene), offer advantageous properties as alternatives to steel wire ropes (SWRs) in underwater equipment and heavy-load handling applications Rani et al. (2023); McKenna et al. (2004), lif (2023). The high resistance to frictional wear, high tensile strength, lightweight nature, and flexibility make them a desirable choice. Effective CM of these SFRs is crucial for diagnosing and preventing system malfunctions, as well as forecasting reliability and determining the RUL of the ropes. To support the development of robust defect detection algorithms and methodologies, this imagery dataset has been generated, encompassing various possible defect scenarios in SFRs. The dataset serves as a benchmark for evaluating the performance of different defect detection techniques. This dataset aims to foster collaboration and knowledge sharing among industry experts, researchers, and stakeholders involved in SFR applications. Also, the availability of this dataset can encourage the adoption of standardized testing protocols and best practices for SFR inspection across industries. Figure 1: Experimental setup for collecting SFRs dataset. By utilizing the dataset's annotations, researchers can explore a wide range of deep learning methodologies, including classification and defect detection tasks. The dataset promotes collaboration, standardization, and innovation in the inspection and condition monitoring of SFRs, facilitating the adoption of advanced techniques and best practices across industries. 
## 2 Data Description The dataset has a total of 6,942 labeled images having a dimension of 2000 x 1080 pixels in PNG format indicating the defective and normal condition of the SFRs. The defective ropes are compiled based on the defect types and their severity (high, medium and low). In the repository, data has been compiled into six separate folders; one folder is for non-defective ropes in zip format while the other five folders are in zip format for plackings, cut strands, chafing, compression, and core out defects respectively. Figure 2: Different defect scenarios considered in the SFRs dataset. Each defective folder is further divided based on defect type or, where applicable, severity level, named high, medium and low respectively. The **Placking** folder contains all images of the defect classes named _placking high, placking medium and placking low_. The same structure has been followed for the **cut strands** and **chafing** defects. The **Compression** folder contains images of the ropes with irregular rope diameters with three separate classes named _compression, compression with chafing and compression with cut strands_. The **core out** folder contains images of the SFRs with the centre core on the rope surface with two classes _core out and core out with cut strands_. Lastly, the **Normal** folder contains images of the class _normal_. A detailed description of defects and defect classes in the SFRs dataset has been mentioned in Table 1. ## 3 Experimental Design, Materials and Methods The experimental setup involved a motor, four LED lights, three sheaves (two rotation pulley blocks, two wire guide wheels and one sheave for holding weight), and ten SFRs each of length 8 m subjected to a weight of 50 kg while rotating through a pulley system. To ensure proper lighting for image capture, four Aputure AL-MC RGBWW LED lights were used, each providing an illumination of 1000 lux. These lights were strategically positioned at an angle of approximately 45\({}^{\circ}\) to ensure uniform illumination during image capture. During the data collection process, the SFRs were rotated on the sheaves, which were supported by rotation pulleys and wheels designed to guide the rope used for lifting. This rotational movement aimed to simulate real-world scenarios where ropes are subjected to rotational forces and movement during lifting operations. By replicating these conditions, the data collection process aimed to capture a more accurate representation of the rope's behaviour and the potential defects that may arise during lifting and operational activities. The experimental setup used for collecting the dataset has been illustrated in Figure 1. ### Rope Description Dyneema is a multi-filament fibre made through a gel-spinning process, derived from materials like HMPE or UHMWPE. It boasts a range of remarkable qualities including impressive strength, lightweight properties, minimal elongation at break, and resilience against various chemicals and harsh environmental conditions. These outstanding mechanical properties, coupled with its low density, give Dyneema an exceptional performance-to-weight ratio. This makes it a valuable resource for both researchers and industry professionals. It allows them to closely monitor and assess the conditions of fibre ropes, allowing the possibility of substituting traditional SWRs with Dyneema-based alternatives. 
The SFRs used in the experiment possess the following characteristics: * Fibers: Dyneema SK 75/78 * Nominal Diameter: 8 mm * Construction: 12 strands / 12 braided rope * Torsionally neutral: The rope is designed to resist twisting or torsional forces. * Pitch/stitch length: Approximately 11 mm * Braiding period: Approximately 66 mm Figure 3: Sample of the annotated SFRs dataset. ### Defects Description The experiment has been performed on ten SFRs where artificial defects have been introduced by an expert roper from Dynamica ApS, Denmark dyn (2023) to create a diverse range of defect scenarios to facilitate defect detection and evaluation algorithms. Out of the ten ropes, nine ropes are considered defective, while one rope served as a non-defective or normal reference. The defects were distributed among the nine defective ropes, following a pattern. These defects included placking defects with varying severity levels (high to low) in three ropes, cut strands in another three ropes, and chafing defects in the remaining three ropes. Additionally, each of the nine defective ropes also featured compression and core-out defects. By introducing these specific defects in the experiment, the goal was to provide a realistic representation of the types and severity levels of defects that can occur in SFRs during their operational lifespan. This approach allowed for the development and evaluation of defect detection algorithms in real-world scenarios. Furthermore, the ISO standard 9554:2019, titled "Fibre rope - General specifications," was referred to for information about potential damage to SFRs (2019). This ISO standard served as a valuable reference, providing insights into the types of defects that can manifest in these ropes, aligning with the experiment's objectives. Figure 2 depicts the artificially introduced defects in the SFRs, offering a clear illustration of the various defect types encountered during the experiment. ### Data Collection The image dataset has been collected using a Basler acA2000 camera with a Basler C11-5020-12M-P Premium 12-megapixel lens. To read the images from the camera, an NVIDIA Jetson Nano P3450 was used as the processing platform (for reading and analyzing the captured data). A total of 6,942 images having a resolution of 2000 x 1080 pixels have been collected to apply the defect detection algorithms. The collected images were then organized and renamed according to their respective classes before being uploaded to an open-access repository. The dataset includes various classes such as normal, compression, core out, placking (high, medium, low), cut strands (high, medium, low), and chafing (high, medium, low) respectively. The labeling of each image was performed using the VGG image annotator (VIA) Dutta et al. (2016, 2019), an open-source tool. Figure 3 depicts the annotated images of the SFRs obtained from the VIA. The final dataset has been deposited in the Mendeley repository, ensuring its availability for further research and development. The dataset provides high-quality images of SFRs suitable for object detection and segmentation tasks Ridge et al. (2001). Researchers can utilize this dataset to develop and enhance algorithms and methodologies in the field of SFR analysis, enabling more accurate and efficient detection and segmentation of defects Debeleac et al. (2020). ## Ethics Statement This research does not involve experiments, observations, or data collection related to human or animal subjects. 
## Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## Data Availability Imagery Dataset for Condition Monitoring of Synthetic Fibre Ropes (Mendeley Data). ## Acknowledgement This research was supported by Aalborg University, Liftra ApS (Liftra), and Dynamica Ropes ApS (Dynamica) in Denmark under the EUDP program through project grant number 64021-2048.
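As a usage illustration for the dataset described above, the sketch below enumerates images together with their defect class and severity, assuming the archives have been unzipped into one directory per class with severity sub-folders (e.g. `placking/high`). The directory names and layout are assumptions for illustration only and are not part of the official dataset documentation.

```python
from pathlib import Path

CLASSES = ["normal", "placking", "cut_strands", "chafing", "compression", "core_out"]
SEVERITIES = ["high", "medium", "low"]

def index_sfr_dataset(root: str):
    """Walk an (assumed) unzipped dataset root and return (path, class, severity) records."""
    root = Path(root)
    records = []
    for cls in CLASSES:
        cls_dir = root / cls
        if not cls_dir.is_dir():
            continue
        for img in cls_dir.rglob("*.png"):  # dataset images are provided as PNG
            # Severity is taken from a sub-folder name when one is present.
            parent_names = {p.name.lower() for p in img.parents}
            severity = next((s for s in SEVERITIES if s in parent_names), None)
            records.append((img, cls, severity))
    return records

if __name__ == "__main__":
    for path, cls, severity in index_sfr_dataset("sfr_dataset")[:5]:
        print(path.name, cls, severity)
```

Such an index can then be fed to any standard classification or detection training loop.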
2309.04742
Affine Invariant Ensemble Transform Methods to Improve Predictive Uncertainty in Neural Networks
We consider the problem of performing Bayesian inference for logistic regression using appropriate extensions of the ensemble Kalman filter. We propose two interacting particle systems that sample from an approximate posterior and prove quantitative convergence rates of these systems to their mean-field limit as the number of particles tends to infinity. Furthermore, we apply these techniques and examine their effectiveness as methods of Bayesian approximation for quantifying predictive uncertainty in neural networks.
Diksha Bhandari, Jakiw Pidstrigach, Sebastian Reich
2023-09-09T10:01:51Z
http://arxiv.org/abs/2309.04742v2
# Affine Invariant Ensemble Transform Methods to Improve Predictive Uncertainty in ReLU Networks Diksha Bhandari, Jakiw Pidstrigach, Sebastian Reich **Abstract** We consider the problem of performing Bayesian inference for logistic regression using appropriate extensions of the ensemble Kalman filter. We propose two interacting particle systems that sample from an approximate posterior and prove quantitative convergence rates of these systems to their mean-field limit as the number of particles tends to infinity. Furthermore, we apply these techniques and examine their effectiveness as methods of Bayesian approximation for quantifying predictive uncertainty in ReLU networks. ## 1 Introduction The task in inverse problems is the inference of an unknown parameter \(\theta\in\mathbb{R}^{D}\) from noisy observations \(d\in\mathbb{R}^{N}\), which are generated through \[d=G(\theta)+\eta, \tag{1}\] where \(G\) denotes a forward map from model parameters to observable output data \(d\) and \(\eta\) denotes observational noise, which is commonly assumed to be Gaussian, that is, \(\eta\sim\mathcal{N}(0,P_{\eta})\). A Bayesian inverse problem can be formulated as producing samples from a random variable \(\theta\) conditioned on \(d\). Given the prior \(\pi_{\text{prior}}(\theta)\) on \(\theta\), the inverse problem is formulated as finding the posterior \(\pi_{\text{post}}(\theta)\) on \(\theta\) given \(d\). By Bayes' theorem, the posterior distribution can be written in terms of the prior density \(\pi_{\text{prior}}\) and the negative log-likelihood or loss function \(\Psi:\mathbb{R}^{D}\to\mathbb{R}\) as \[\mathrm{d}\pi_{\text{post}}(\theta)\propto\exp(-\Psi(\theta))\mathrm{d}\pi_{\text{prior}}(\theta). \tag{2}\] The problem of sampling from a target distribution is fundamental to Bayesian statistics, machine learning, and data assimilation. Considerable research has been done on sampling methods for Bayesian inverse problems, mainly in the case of \(l_{2}\)-loss functions [8, 21, 23, 24]. In this work, we will focus on \(\Psi\) being the cross-entropy loss instead, which is used in logistic regression problems. We will introduce two methods, closely related to the algorithms introduced in [34], to approximate the posterior distribution for Bayesian logistic regression. We will prove that the methods are well-defined and provide quantitative bounds on their convergence in the large-ensemble (mean-field) limit. We will then apply these methods to Bayesian neural networks. This gives us the possibility to quantify uncertainty in the model weights and the network outputs. Our numerical experiments show that our methods outperform the state-of-the-art methods for this problem. ### Classification and logistic regression In this work, we focus on classification problems, that is, the problems arising from classifying objects into classes \(C_{1},\ldots,C_{k}\). For notational convenience, we will focus on the case of binary classification, i.e., we only have classes \(C_{1}\) and \(C_{2}\). However, all methods can be generalized to \(k\) classes, see [34, Section 7.2]. Consider the data set \[\mathcal{D}=\{(\phi^{n},d^{n})\}_{n=1}^{N},\] where \(d^{n}\in\{0,1\}\) are targets for the binary classification case with input features \(\phi^{n}\in\mathbb{R}^{D}\) for \(n=1,...,N\). 
The \(\phi^{n}\)'s can either be the representation of the data in any numerical form, or stem from a feature map. In Section 5, we will train a neural network for the use as feature map. We also introduce the shorthand notation \[\Phi=(\phi^{1},...,\phi^{N})\in\mathbb{R}^{D\times N} \tag{3}\] and define a parametric family of models as follows. Given a parameter \(\theta\), the probability of an example belonging to class \(C_{1}\) will be given by \[\mathbb{P}_{\theta}[\phi\in C_{1}]=\sigma(\langle\theta,\phi\rangle),\] where \(\sigma\) is the sigmoid function \[\sigma(z)=\frac{1}{1+\exp{(-z)}}. \tag{4}\] The negative log-likelihood of the given dataset \(\mathcal{D}\) under our parametric model is given by the cross-entropy loss function \[\Psi(\theta)=-\sum_{n=1}^{N}\{d^{n}\log(y_{n}(\theta))+(1-d^{n})\log(1-y_{n}( \theta))\}, \tag{5}\] where \[y_{n}(\theta)=\sigma(\langle\theta,\phi^{n}\rangle)=\mathbb{P}_{\theta}[\phi^ {n}\in C_{1}]. \tag{6}\] For \(n=1,\ldots,N\), we introduce the vector \(y(\theta)\in\mathbb{R}^{N}\) as \[y(\theta)=(y_{1}(\theta),\ldots,y_{n}(\theta))^{\mathrm{T}}, \tag{7}\] and the vector of target data labels \(d\in\mathbb{R}^{N}\) as \[d=(d^{1},\ldots,d^{n})^{\mathrm{T}}. \tag{8}\] We investigate sampling methods based on the ensemble Kalman filter (EnKF) [11] and its extension to Bayesian logistic regression as already considered in [34]. As discussed in the previous section, the Bayesian inverse problem now consists of sampling from \(\pi_{\mathrm{post}}\) given by (2). Note that the likelihood in this section stems from linear logistic regression, since \(y_{n}=\sigma(\langle\theta,\phi^{n}\rangle)\). However, the methods can be generalized to nonlinear logistic regression, i.e., \(y_{n}=\sigma(f^{n}(\theta))\), with \(f^{n}(\theta)\) being nonlinear, see [34, Section 7.1]. Furthermore, the method proposed in Section 2.2 can be implemented in a derivative-free manner, as discussed in [34, Section 7.1]. ### Literature review Since its introduction in [10], the EnKF has been a popular choice for performing data assimilation tasks due to its robustness and wide applicability. In particular, the EnKF has shown promise as a derivative-free Bayesian inference technique [11, 34, 23]. More recently, the EnKF has been combined with sampling methods for Bayesian inference [8, 9] in order to transform random samples at \(s=0\) into samples from the posterior as \(s\to\infty\). Studying the theoretical aspects of homotopy methods based on the EnKF has also been an active area of research [38, 39, 34, 35, 4]. Furthermore, it should also be noted that both the EnKF and ensemble Kalman inversion (EKI) [24] can be cast within an interactive particle setting. However, most of the work is done in the case of a quadratic likelihood \(\Psi(\theta)\) or a perturbation of a quadratic likelihood. The work [24, 34] introduces multiple EnKF-type methods to study Bayesian logistic regression, i.e., \(\Psi\) being the negative log-likelihood of a logistic regression model. In this paper, we further develop two of the methods proposed in [34] by taking inspiration from [17, 23, 15, 37] and deploy them for uncertainty quantification in Bayesian logistic regression. As neural networks have become widely used in critical applications, the need for accurate uncertainty quantification has grown [13]. 
To address this, a popular technique is using Bayesian inference [36], which allows machine learning algorithms to assign low confidence to test points that deviate significantly from the training data or prior information. Bayesian neural networks are commonly used for this purpose [43, 1, 30, 3, 44, 14, 33, 29]. Popular approximate Bayesian approaches include traditional variational inference [3, 19], Laplace approximation [28], Sampling-based approaches like Hamiltonian Monte Carlo (HMC) [32] and stochastic gradient Langevin dynamics [42]. More recently, in an attempt to make these Bayesian approaches more computationally feasible, many methods of partial stochastic networks have been considered [40, 25, 6, 41]. Moreover, many non-Bayesian methods [26, 22, 27] for uncertainty quantification have also been proposed. These methods aim to provide usable estimates of predictive uncertainty even in the presence of limited data. ### Outline and our contribution The fundamental idea underlying this study is to develop efficient Bayesian inference methods for logistic regression based on the EnKF. Furthermore, we apply these methods to uncertainty quantification in ReLU networks and compare them to the state-of-the-art. To that end we will derive two interacting particle systems (IPSs) which sample from an approximate posterior. Moreover, we prove quantitative convergence rates of the IPSs to their mean-field-limit as the number of particles tends to infinity. We demonstrate the efficacy of the proposed methods for estimating uncertainty in ReLU networks through numerical experiments for binary classification in Section 6. The remainder of this paper has been organized as follows. The dynamical system formulations for ensemble transform methods based on the homotopy approach and the infinite time second-order dynamical sampler for Bayesian inference in logistic regression are described in Section 2. Section 3 analyzes the proposed ensemble transform methods and derives their mean-field limits. Therein, we quantitatively bound the distance of the \(J\)-particle system to its infinite-particle limit in Theorem 1. Efficient methods of time-stepping of interacting particle systems for robust implementations and pseudocode describing the algorithms that we employ in this paper are discussed in Section 4. Section 5 introduces the framework for application of proposed methods to Bayesian logistic regression for uncertainty quantification in ReLU networks. Experimental results on predictive uncertainty of ReLU networks using proposed Bayesian methods for inference are shown in Section 6. These experiments demonstrate how reliable the uncertainty estimates are for different methods considered. ## 2 Dynamical system formulations We denote by \(\theta_{s}^{1},\ldots\theta_{s}^{J}\) an ensemble of \(J\) particles, indexed by a time parameter \(s\geq 0\). We denote by \(m_{\theta_{s}}\) the empirical mean of the ensemble, \[m_{\theta_{s}}=\frac{1}{J}\sum_{j=1}^{J}\theta_{s}^{j}, \tag{9}\] and by \(P_{\theta_{s}}\) the empirical covariance matrix, \[P_{\theta_{s}}=\frac{1}{J}\sum_{j=1}^{J}(\theta_{s}^{j}-m_{s})(\theta_{s}^{j}- m_{s})^{\mathrm{T}}. \tag{10}\] We introduce the matrix of ensemble deviations \(\Theta_{s}\in\mathbb{R}^{D\times J}\) as \[\Theta_{s}=\left(\theta_{s}^{1}-m_{\theta_{s}},\theta_{s}^{2}-m_{\theta_{s}}, \ldots,\theta_{s}^{J}-m_{\theta_{s}}\right). 
\tag{11}\] We will adapt the convention that upper indices stand for the ensemble index, while lower indices stand for the time index or the \(i\)th entry of an \(N\)-dimensional vector. Associated to the particle system is the empirical measure \[\mu_{\theta_{s}}=\frac{1}{J}\sum_{j=1}^{J}\delta_{\theta_{s}^{j}}, \tag{12}\] where \(\delta_{\theta}\) stands for the Dirac delta at \(\theta\). We denote the expectation with respect to this measure by \(\mu_{\theta_{s}}[f]\), i.e. for a function \(f\), \[\mu_{\theta_{s}}[f]=\frac{1}{J}\sum_{j=1}^{J}f(\theta_{s}^{j}).\] Finally, we assume that the prior measure \(\pi_{\mathrm{prior}}\) is a Gaussian and given as \[\pi_{\mathrm{prior}}=\mathcal{N}(m_{\mathrm{prior}},P_{\mathrm{prior}}).\] In each of the following two subsections, we introduce an IPS to sample from the posterior in a logistic regression problem. ### Homotopy using moment matching In homotopy approaches to Bayesian inference, we assume that the initial ensemble \(\{\theta_{0}^{j}\}\) is distributed according to the prior \(\pi_{\mathrm{prior}}\). One then evolves the ensemble such that at some fixed terminal time \(s\), often \(s=1\), the ensemble is distributed according to the posterior \(\pi_{\mathrm{post}}\). To derive such an evolution, one starts by defining a path \(\pi_{s}\) in the space of measures, which starts at a prior distribution \(\pi_{0}=\pi_{\mathrm{prior}}\) and ends at \(\pi_{1}=\pi_{\mathrm{post}}\)[5, 35]. We will study a homotopy approach introduced in [34]. In this section, we will shortly summarize the resulting differential equations. See [34, Section 4.2] for more details and the derivation of the equations. We will need the gradient and Hessian of the negative log-likelihood \(\Psi\), defined in (5). The gradient is given by \[\nabla_{\theta}\Psi(\theta)=\sum_{n=1}^{N}(y_{n}(\theta)-d^{n})\phi^{n}=\Phi(y (\theta)-d), \tag{13}\] while the Hessian is \[D_{\theta}^{2}\Psi(\theta)=\Phi R(\theta)\Phi^{\mathrm{T}}. \tag{14}\] Here \(R(\theta)\in\mathbb{R}^{N\times N}\) is a diagonal matrix with diagonal entries \[r_{nn}=y_{n}(\theta)(1-y_{n}(\theta)). \tag{15}\] For \(s\in[0,1]\), the dynamical system to transform the prior is given by \[\begin{split}\frac{\mathrm{d}}{\mathrm{d}s}\theta_{s}^{j}& =-\frac{1}{2}P_{\theta_{s}}\left(\mu_{\theta_{s}}[D_{\theta}^{2} \Psi(\theta)](\theta_{s}^{j}-m_{\theta_{s}})+2\mu_{\theta_{s}}[\nabla_{\theta }\Psi(\theta)]\right)\\ &=-\frac{1}{2}P_{\theta_{s}}\Phi\Big{(}\mu_{\theta_{s}}[R]\Phi^{ \mathrm{T}}(\theta_{s}^{j}-m_{\theta_{s}})+2(\mu_{\theta_{s}}[y]-d)\Big{)} \end{split} \tag{16}\] with random initial conditions \(\theta_{0}^{j}\sim\mathcal{N}(m_{\mathrm{prior}},P_{\mathrm{prior}})\) i.i.d. for \(j=1,\ldots,J\). One can also evolve the mean \(m_{\theta_{s}}\) and ensemble deviations \(\Theta_{s}\) instead of the ensemble \(\{\theta_{s}^{j}\}_{j=1}^{J}\). Their evolution equations are given by \[\frac{\mathrm{d}}{\mathrm{d}s}m_{\theta_{s}}=-P_{\theta_{s}}\Phi\Big{(}\mu_{ \theta_{s}}[y]-d\Big{)}, \tag{17}\] and \[\frac{\mathrm{d}}{\mathrm{d}s}\Theta_{s}=-\frac{1}{2}P_{\theta_{s}}\Phi\mu_{ \theta_{s}}[R]\Phi^{\mathrm{T}}\Theta_{s} \tag{18}\] respectively, where \(P_{\theta_{s}}\) is the empirical covariance matrix given in (10). However, the full ensemble is still required to compute the empirical expectation value of \(R(\theta)\) and \(y(\theta)\). 
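To make the quantities entering (13)-(18) concrete, the following sketch evaluates \(\mu_{\theta_{s}}[y]\) and \(\mu_{\theta_{s}}[R]\) for an ensemble and performs one explicit Euler step of the moment-matching system (17)-(18). It is an illustrative sketch in the paper's notation (features \(\Phi\) of size \(D\times N\), ensemble of \(J\) particles) and uses a plain Euler step; the tamed discretization advocated in Section 4 is deliberately not included here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def homotopy_euler_step(theta, Phi, d, ds):
    """One explicit Euler step of the moment-matching homotopy (17)-(18).

    theta : (D, J) ensemble of parameters
    Phi   : (D, N) feature matrix, columns are phi^n
    d     : (N,)   binary labels
    ds    : step size
    """
    D, J = theta.shape
    m = theta.mean(axis=1, keepdims=True)                 # ensemble mean (9)
    Theta = theta - m                                     # ensemble deviations (11)
    P = Theta @ Theta.T / J                               # empirical covariance (10)

    Y = sigmoid(Phi.T @ theta)                            # (N, J), entries y_n(theta^j) as in (6)
    mu_y = Y.mean(axis=1)                                 # mu[y]
    mu_R = np.diag((Y * (1.0 - Y)).mean(axis=1))          # mu[R], diagonal as in (15)

    m_new = m - ds * P @ Phi @ (mu_y - d)[:, None]        # mean update, cf. (17)
    Theta_new = Theta - 0.5 * ds * P @ Phi @ mu_R @ Phi.T @ Theta  # deviation update, cf. (18)
    return m_new + Theta_new                              # reassemble the ensemble
```

Iterating this step with \(\Delta s\,K=1\) transports a prior ensemble towards an approximation of the posterior; Algorithm 1 in Section 4 replaces the plain Euler step by the tamed update (37)-(40) for improved stability.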
### Deterministic second-order dynamical sampler Alternatively to transporting from the prior to the posterior in a fixed time interval, one can also construct systems that sample the target distribution as \(s\to\infty\). Markov Chain Monte Carlo algorithms are the most famous family of algorithms with this property. However, they normally work on a single sample trajectory \(\theta_{s}\) instead of an ensemble. The algorithm introduced in [34, Section 5] combines the homotopy approaches with overdamped Langevin dynamics to motivate an IPS that approximates the posterior as \(s\to\infty\). The system of equations is given by \[\begin{split}\frac{\mathrm{d}\theta_{s}^{j}}{\mathrm{d}s}& =-\frac{1}{2}P_{\theta_{s}}\Bigg{(}\Phi\Big{(}\mu_{\theta_{s}}[R] \Phi^{\mathrm{T}}(\theta_{s}^{j}-m_{\theta_{s}})+2(\mu_{\theta_{s}}[y]-d) \Big{)}\\ &\qquad+P_{\mathrm{prior}}^{-1}(\theta_{s}^{j}+m_{\theta_{s}}-2m _{\mathrm{prior}})\Bigg{)}+P_{\theta_{s}}^{1/2}\mathrm{d}W_{s}^{j},\end{split} \tag{19}\] where \(W_{s}^{j}\) denotes \(D\)-dimensional standard Brownian motion. For details on the derivation see [34, Section 5]. Note, that similar systems are also introduced in [17, 23, 18]. We now modify (19) by replacing the stochastic driving term \(P_{\theta_{s}}^{1/2}\mathrm{d}W_{s}^{j}\) by \(\frac{1}{2}(\theta_{s}^{j}-m_{\theta_{s}})\mathrm{d}s\), rendering the system deterministic except for the choice of the initial ensemble \(\{\theta_{0}^{j}\}_{j=1}^{J}\). The advantage of making it deterministic is that we can perfectly assess convergence of the algorithm, since the particles will stop moving when they reach equilibrium. The replacement of \(P_{\theta_{s}}^{1/2}\) by \(\frac{1}{2}(\theta_{s}-m_{s})\) is motivated by the fact, that in the mean-field case, for Gaussian densities, they have the same distributional effect, which we prove in Proposition 6 (for more details, see Section D). Furthermore, we will prove that the Gaussian assumption is well-founded in Section 3. Therefore, the IPS ODE we will study is given by \[\begin{split}\frac{\mathrm{d}\theta_{s}^{j}}{\mathrm{d}s}& =-\frac{1}{2}P_{\theta_{s}}\Bigg{(}\Phi\Big{(}\mu_{\theta_{s}}[R] \Phi^{\mathrm{T}}(\theta_{s}^{j}-m_{\theta_{s}})+2(\mu_{\theta_{s}}[y]-d) \Big{)}\\ &\qquad+P_{\mathrm{prior}}^{-1}(\theta_{s}^{j}+m_{\theta_{s}}-2m _{\mathrm{prior}})\Bigg{)}+\frac{1}{2}(\theta_{s}^{j}-m_{\theta_{s}})\end{split} \tag{20}\] with random initial conditions \(\theta_{0}^{j}\sim\mathcal{N}(m_{0},P_{0})\) i.i.d. for \(j=1,\ldots,J\). As already utilized in Section 2.1, to propagate the ensemble \(\{\theta_{s}^{j}\}_{j=1}^{J}\) in the interacting particle system (20), we can equivalently evolve the mean \(m_{\theta_{s}}\) and ensemble deviations \(\Theta_{s}\) by \[\frac{\mathrm{d}}{\mathrm{d}s}m_{\theta_{s}}=-P_{\theta_{s}}\left(\Phi(\mu_{ \theta_{s}}[y]-d)+P_{\mathrm{prior}}^{-1}(m_{\theta_{s}}-m_{\mathrm{prior}}) \right), \tag{21}\] and \[\frac{\mathrm{d}}{\mathrm{d}s}\Theta_{s}=-\frac{1}{2}P_{\theta_{s}}\left(\Phi \mu_{\theta_{s}}[R]\Phi^{\mathrm{T}}\Theta_{s}+P_{\mathrm{prior}}^{-1}\Theta_ {s}\right)+\frac{1}{2}\Theta_{s} \tag{22}\] respectively, where \(P_{\theta_{s}}\) is the empirical covariance. The method reaches an equilibrium solution as \(s\to\infty\), which gives the deterministic dynamical sampler a robustness that the homotopy methods might not possess. ## 3 Theoretical results on mean-field limits In this section, we study what happens to the equations (16) and (20) when we let the number of particles go to infinity. 
As we will see, the interaction between the particles decreases, and in the limit \(J\to\infty\) all particles follow a deterministic mean-field ODE, independently of the other particles. To that end, we introduce some notation. Both of the IPS in Section 2 only depend on the other particles through their empirical measure. Therefore, we can rewrite (16) and (20) as \[\mathrm{d}\theta_{s}^{j}=b(\mu_{\theta_{s}})(\theta^{j}), \tag{23}\] where \(b(\mu_{\theta})\) is defined implicitly by (16) and (20). By the continuity equation, we know that if the particles are evolved using the drift (23), then their empirical measure is a weak solution to the partial differential equation (PDE) \[\partial_{s}\mu_{\theta_{s}}(\theta) = -\operatorname{div}(\mu_{\theta_{s}}b(\mu_{\theta_{s}})(\theta)), \qquad\mu_{\theta_{0}}=\frac{1}{J}\sum_{j=1}^{J}\delta_{\theta_{i}},\] where \(\delta_{\theta}\) is the Dirac-delta distribution at \(\theta\). Due to the dependence of \(b\) on \(\mu_{\theta_{s}}\), this PDE is typically nonlinear. However, since \(b\) only depends on the other particles through the empirical measure, the above PDE forms a closed system. Therefore, we abstract the PDE \[\partial_{s}\mu_{s}(\theta)=-\operatorname{div}(\mu_{s}b(\mu_{s})(\theta)) \tag{24}\] which, given an initial condition \(\mu_{0}\), can be solved at the level of measures or densities directly. Given such a solution \((\mu_{s})_{s}\) for a fixed initial \(\mu_{0}\), we plug it into (23) and get the differential equation \[\frac{\mathrm{d}}{\mathrm{d}t}\eta_{s}=b(\mu_{s})(\eta_{s}).\] Note that this equation does not constitute an IPS anymore but a mean-field ODE instead; i.e., the particle evolution depends on its own distribution \(\mu_{s}\), which we obtain as the solution to (24). Furthermore, we find from (24) that \(\eta_{0}\sim\mu_{0}\) implies \(\eta_{s}\sim\mu_{s}\) for all \(s>0\). The proofs in the following subsections work now as follows. We fix an initial condition \[\mu_{0}=\mathcal{N}(m_{0},P_{0}),\] for which we obtain the solution \(\mu_{s}\) to (24). Then, we define an intermediate mean-field particle system: \[\frac{\mathrm{d}}{\mathrm{d}t}\eta_{s}^{j} = b(\mu_{s})(\eta_{s}^{j}),\quad\eta_{0}^{j}\sim\mu_{0}. \tag{25}\] Note that the \(\eta_{s}^{j}\), \(j=1,\ldots,J\), are all independent. We then couple the IPS (16) or (20) to (25) by choosing the same initial conditions, i.e., \(\theta_{0}^{j}=\eta_{0}^{j}\) for all \(j=1,\ldots,J\). We then prove that the expected Wasserstein-distance of the empirical measures \[\mathbb{E}[\mathcal{W}(\mu_{\theta_{s}},\mu_{\eta_{s}})]\to 0 \tag{26}\] converges to \(0\) as \(J\to\infty\). Let us briefly discuss the expectation value in (26). Note that although (23) and (25) are deterministic, \(\mu_{\theta_{s}}\) as well as \(\mu_{\eta_{s}}\) are random probability measures. However, the randomness comes solely from the initial conditions. Therefore, the expectation in (26) is with respect to i.i.d. initial conditions \(\eta_{0}^{j}=\theta_{0}^{j}\sim\pi_{0}\). We not only prove (26) but are able to obtain a quantitative convergence rate. Since \(\mu_{\eta_{s}}\) is the \(J\)-fold product of \(\mu_{s}\) with itself, this shows that for large \(J\) the \(\theta_{s}^{j}\) approximate independent samples from \(\mu_{s}\). One can make the last statement precise by using known rates at which \(\mu_{\eta_{s}}\) converges to \(\mu_{s}\), see [12]. 
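The coupling argument used below can also be monitored numerically: for two coupled ensembles of equal size, the \(2\)-Wasserstein distance between their empirical measures is bounded by the root-mean-square particle deviation, cf. (33). The sketch below evaluates both quantities; it is a diagnostic illustration only, and the use of `scipy`'s assignment solver for the exact empirical \(\mathcal{W}_{2}\) is an implementation choice, not part of the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def coupling_bound(theta, eta):
    """RMS deviation Delta_s of two coupled ensembles, each of shape (D, J)."""
    return np.sqrt(np.mean(np.sum((theta - eta) ** 2, axis=0)))

def wasserstein2_empirical(theta, eta):
    """Exact W2 between two empirical measures with J atoms each (optimal matching)."""
    # cost[i, j] = squared distance between particle i of theta and particle j of eta
    diff = theta[:, :, None] - eta[:, None, :]
    cost = np.sum(diff ** 2, axis=0)
    row, col = linear_sum_assignment(cost)
    return np.sqrt(cost[row, col].mean())
```

By construction `wasserstein2_empirical(theta, eta) <= coupling_bound(theta, eta)`, which is exactly the inequality exploited in (33).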
### Analysing the mean-field systems As already discussed, we will prove that the empirical measure \(\mu_{\theta_{s}}\) approximates the solution of the mean-field PDE \(\mu_{s}\) for large ensemble sizes. Therefore, it is instructive to briefly study \(\mu_{s}\). Since \(\mu_{0}\) is Gaussian in our case and the drift terms \(b(\mu_{\theta_{s}})(\theta)\) are linear in \(\theta\), \(\mu_{s}\) will remain Gaussian for all times \(s\geq 0\). We denote its mean and covariance by \(m_{s}\) and \(P_{s}\): \[\mu_{s}=\mathcal{N}(m_{s},P_{s}).\] Therefore, solving the mean-field PDE (24) corresponds to solving an ODE for the mean and covariance of the Gaussian distribution. We will next work out these mean-field ODEs for our two IPS. #### 3.1.1 Homotopy using moment matching The mean-field limit PDE, corresponding to (24), is given by \[\partial_{s}\mu_{s}(\eta)=\frac{1}{2}\operatorname{div}\left(\mu_{s}P_{s}\left(\Phi\mu_{s}[R]\Phi^{\mathrm{T}}(\eta-m_{s})+2\Phi(\mu_{s}[y]-d)\right)\right).\] In the case of Gaussian initial conditions, the above PDE is equivalent to solving the following ODEs for the mean and covariance: \[\begin{array}{rcl}\frac{\mathrm{d}}{\mathrm{d}s}m_{s}&=&-P_{s}\Phi(\mu_{s}[y]-d)=-P_{s}\mu_{s}[\nabla_{\theta}\Psi(\theta)],\\ \frac{\mathrm{d}}{\mathrm{d}s}P_{s}&=&-P_{s}\Phi\mu_{s}[R]\Phi^{\mathrm{T}}P_{s}=-P_{s}\mu_{s}[D_{\theta}^{2}\Psi(\theta)]P_{s},\end{array} \tag{27}\] where \(\nabla_{\theta}\Psi\) and \(D_{\theta}^{2}\Psi\) are the gradient and Hessian of the negative log-likelihood function as defined in (13) and (14), respectively. #### 3.1.2 Deterministic second-order dynamical sampler In this case, the mean-field limit PDE is given by \[\begin{array}{rcl}\partial_{s}\mu_{s}(\eta)&=&\frac{1}{2}\operatorname{div}\Big{(}\mu_{s}\Big{(}P_{s}\big{(}\Phi\mu_{s}[R]\Phi^{\mathrm{T}}(\eta-m_{s})+2\Phi(\mu_{s}[y]-d)\\ &&\quad+P_{\mathrm{prior}}^{-1}(\eta+m_{s}-2m_{\mathrm{prior}})\big{)}-(\eta-m_{s})\Big{)}\Big{)},\end{array} \tag{28}\] where \(\mu_{0}=\mathcal{N}(m_{0},P_{0})\). As in the previous subsection, we again derive the mean-field ODEs for the mean and covariance: \[\begin{array}{rcl}\frac{\mathrm{d}}{\mathrm{d}s}m_{s}&=&-P_{s}\Phi(\mu_{s}[y]-d)-P_{s}P_{\mathrm{prior}}^{-1}(m_{s}-m_{\mathrm{prior}}),\\ \frac{\mathrm{d}}{\mathrm{d}s}P_{s}&=&-P_{s}\Phi\mu_{s}[R]\Phi^{\mathrm{T}}P_{s}-P_{s}P_{\mathrm{prior}}^{-1}P_{s}+P_{s}.\end{array} \tag{29}\] The mean-field ODE (25) becomes \[\begin{array}{rcl}\frac{\mathrm{d}}{\mathrm{d}s}\eta_{s}^{j}&=&-\frac{1}{2}P_{s}\left(\Phi\mu_{s}[R]\Phi^{\mathrm{T}}(\eta_{s}^{j}-m_{s})+2\Phi(\mu_{s}[y]-d)\right)\\ &&-\frac{1}{2}P_{s}P_{\mathrm{prior}}^{-1}(\eta_{s}^{j}+m_{s}-2m_{\mathrm{prior}})+\frac{1}{2}(\eta_{s}^{j}-m_{s}),\end{array} \tag{30}\] where \(\eta_{0}^{j}=\theta_{0}^{j}\sim\mathcal{N}(m_{0},P_{0})\). When the system (29) stops evolving, we have reached the equilibrium distribution \[\mu_{*}=\mathcal{N}(m_{*},P_{*}).\] To derive equations for \(m_{*}\) and \(P_{*}\), we set the right-hand sides of (29) to zero and obtain: \[m_{*}=m_{\mathrm{prior}}-P_{\mathrm{prior}}\Phi(\mu_{*}[y]-d)=m_{\mathrm{prior}}-P_{\mathrm{prior}}\mu_{*}[\nabla\Psi(\theta)], \tag{31}\] \[P_{*}=(\Phi\mu_{*}[R]\Phi^{\mathrm{T}}+P_{\mathrm{prior}}^{-1})^{-1}=(\mu_{*}[D_{\theta}^{2}\Psi(\theta)]+P_{\mathrm{prior}}^{-1})^{-1}. \tag{32}\] These are implicit equations in \(m_{*}\) and \(P_{*}\), and the evolution equations (29) can be seen as a means to find \(m_{*}\) and \(P_{*}\). 
Therefore, in the many-particle and large time limit, we are approximating a Gaussian with mean and covariance given by (31) and (32) respectively. Approximating a distribution by a Gaussian is also an important topic in variational inference [16]. However, in contrast to popular methods for Gaussian variational inference (see the discussion in [16]), which are based on taking gradients with respect to the mean and covariance of the approximating Gaussian, we do not need to invert the state space (\(D\times D\)) covariance matrix for our method to work. **Remark 1**: _The work [16] also proposes a deterministic IPS for Gaussian variational inference. While their evolution equations differ from ours, the equilibrium state agrees with (31)-(32). Furthermore, in contrast to our formulation, the IPS proposed in [16] is not affine-invariant. See [34] for a discussion of affine invariance._ ### Statement of results Since the IPS (16) and (20) are similar, differing only in additional terms for (20), the proofs of the following results are quite similar too. Due to (20) having more terms, in particular also terms that increase the ensemble spread, the proofs are more technical. We concentrate on that case. The analogous results for (16) follow by performing very similar, often nearly identical, calculations, but for fewer terms. First of all, we prove in the following proposition that the objects of interest, \(\theta_{s}^{j}\), \(\mu_{\theta_{s}}\), \(\eta_{s}^{j}\), and \(\mu_{s}\) are well-defined. **Proposition 1**: _The mean-field PDE (28) has a unique global solution \(\mu_{s}\). Furthermore, the IPS (20) and the mean-field IPS (30) also posses unique global solutions._ The proposition is proven in Section C.1, in Proposition 3 and Proposition 4. We are now in a position to state our main theorem. **Theorem 1**: _Let \(\{\theta_{s}^{j}\}_{j=1}^{J}\) be the solution to (20) with associated empirical measure \(\mu_{\theta_{s}}\) and let \(\mu_{s}\) be the solution to (28). For any \(\varepsilon\), there exists a constant \(C\), depending only on \(m_{0},P_{0},\Phi,d,s\) and \(\varepsilon\), such that_ \[\mathbb{E}[\mathcal{W}_{2}(\mu_{\theta_{s}},\mu_{s}^{N})]\leqslant C_{ \varepsilon,s}J^{-\frac{1}{2}+\varepsilon},\] _where \(\mu_{s}^{N}\) is the \(N\)-fold product measure of \(\mu_{s}\) with itself._ **Proof** (Sketch) We introduce an artificial mean-field particle system \(\eta^{j}\) as described in Section 3. The precise mean-field ODEs can be found in (30). We couple the \(\theta_{s}^{j}\) to the \(\eta_{s}^{j}\) by choosing the same initial conditions, i.e., \(\eta_{0}^{j}=\theta_{0}^{j}\). The \(\eta_{s}^{j}\) are samples from \(\mu_{s}^{N}\), and therefore the Wasserstein distance, being the infimum over all couplings, can be upper bounded by this specific coupling between \(\theta_{s}^{j}\) and \(\eta_{s}^{j}\), i.e., \[\mathbb{E}[\mathcal{W}_{2}(\mu_{\theta_{s}},(\mu_{s})^{N})]\leqslant\mathbb{ E}\left[\left(\frac{1}{J}\sum_{j=1}^{J}|\theta_{s}^{j}-\eta_{s}^{j}|^{2}\right)^{1 /2}\right]. \tag{33}\] We fix a \(T\) and assume that \(s\leqslant T\). To bound (33), we define \[\Delta_{s}=\left(\frac{1}{J}\sum_{j=1}^{J}|\theta_{s}^{j}-\eta_{s}^{j}|^{2} \right)^{1/2}\] and upper bound its growth. 
In Proposition 2 we will show that we can bound any moment of \(\mathbb{E}[\Delta_{s}]\) independently of the ensemble size \(J\), i.e., \[\mathbb{E}\left[|\Delta_{s}|^{p}\right]^{1/p}\lesssim C_{p}, \tag{34}\] where \(x\lesssim y\) symbolizes that the inequality \(x\leqslant ay\) holds with a constant \(a\) depending only on \(T,m_{0},P_{0},\Phi\) and \(d\). We then use a bootstrapping technique inspired by [8]. The main idea is to show that, if \[\mathbb{E}[\Delta_{s}]\lesssim J^{-\gamma} \tag{35}\] for some \(\gamma\geq 0\), then one can actually improve that \(\gamma\) value to a better \(\gamma^{\prime}\), i.e., \[\mathbb{E}[\Delta_{s}]\lesssim J^{-\gamma^{\prime}} \tag{36}\] holds for some \(\gamma^{\prime}>\gamma\). Plugging \(p=0\) into (34), we see that (35) holds for \(\gamma=0\). We then iteratively improve \(\gamma=0\) to any \(\gamma=\frac{1}{2}-\epsilon\). We now go into a bit more detail on how to improve the current estimate of \(\gamma\). We denote by \(H_{\alpha}\) a random variable, which can change from occurrence to occurrence, such that \(\mathbb{E}[H_{\alpha}^{p}]^{1/p}\leqslant C_{p}J^{-\alpha}\) for all \(p\geq 0\), where \(C_{p}\) only depends on \(T,m_{0},P_{0},\Phi,d\) and \(p\). Considering the ODE for \(\Delta_{s}\) and bounding all terms of its right-hand side, we end up with \[\frac{\mathrm{d}}{\mathrm{d}s}\mathbb{E}[\Delta_{s}] \leqslant C(\mathbb{E}[\Delta_{s}]+\mathbb{E}[\Delta_{s}H_{1/4}]+\mathbb{ E}[H_{1/2}])\] \[\leqslant C(\mathbb{E}[\Delta_{s}]+\mathbb{E}[\Delta_{s}^{\varepsilon} \Delta_{s}^{1-\varepsilon}H_{1/4}]+\mathbb{E}[H_{1/2}])\] \[\leqslant C(\mathbb{E}[\Delta_{s}]+\mathbb{E}[\Delta_{s}]^{1-\varepsilon }\mathbb{E}[\Delta_{s}(H_{1/4})^{1/\varepsilon}]^{\varepsilon}+\mathbb{E}[H_ {1/2}])\] \[\leqslant C(\mathbb{E}[\Delta_{s}]+\mathbb{E}[\Delta_{s}]^{1-\varepsilon }\mathbb{E}[\Delta_{s}^{2}]^{\varepsilon/2}\mathbb{E}[(H_{1/4})^{2/\varepsilon }]^{\varepsilon/2}+\mathbb{E}[H_{1/2}])).\] Here we repeatedly used Holder's inequality. Now, all moments of \(H_{\alpha}\) can be bounded by \(J^{-\alpha}\) \[\frac{\mathrm{d}}{\mathrm{d}s}\mathbb{E}[\Delta_{s}] \lesssim \mathbb{E}[\Delta_{s}]+\mathbb{E}[\Delta_{s}]^{1-\varepsilon}J^{ -1/4}+J^{-1/2}\] \[\leqslant \mathbb{E}[\Delta_{s}]+J^{-\gamma(1-\varepsilon)-1/4}+J^{-1/2}.\] In the second inequality we used the a priori assumption that \(\mathbb{E}[\Delta_{s}]\leqslant J^{-\gamma}\). We now apply Groenwall to obtain \[\mathbb{E}[\Delta_{s}]\lesssim J^{-\gamma^{\prime}},\] where \(\gamma^{\prime}=\min\left(\frac{1}{2},\frac{1}{4}+\gamma(1-\varepsilon)\right)\). Plugging in \(\gamma=0\), we obtain a rate of \(J^{-1/4}\) by applying the above argument once. Then, by picking a small enough \(\varepsilon\), we can achieve any rate below \(1/2\) by applying the bootstrapping technique a second time. The full details of the proof can be found in Appendix C.3. \(\Box\) The following a priori bound is crucial for the proof of Theorem 1. **Proposition 2**: _For \(s\in[0,T]\),_ \[\mathbb{E}\left[\Big{|}\frac{1}{J}\sum|\theta_{s}^{j}-\eta_{s}^{j}|^{2}|^{p} \right]^{1/p}\leqslant C_{p},\] _i.e., the \(p\)-norm can be bounded independently of \(J\) for a fixed \(T\). The constant \(C_{p}\) only depends on \(m_{0},P_{0},\Phi,d\) and \(T\)._ The proof relies on the fact that \(\sum_{j=1}^{J}|\theta_{0}^{j}-\eta_{0}^{j}|\) is a martingale and uses martingale inequalities. It can be found in Section C.2. 
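Before turning to the discretizations of Section 4, it may help to see that the mean-field limit of the deterministic sampler can be simulated directly: since \(\mu_{s}\) stays Gaussian, (29) is a closed ODE for \(m_{s}\) and \(P_{s}\) once the expectations \(\mu_{s}[y]\) and \(\mu_{s}[R]\) are approximated, for instance by Monte Carlo over \(\mathcal{N}(m_{s},P_{s})\). The sketch below does this with a plain Euler scheme and hypothetical step-size settings; it is an illustration of the limiting Gaussian approximation with fixed point (31)-(32), not one of the algorithms proposed in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mean_field_gaussian(Phi, d, m_prior, P_prior, ds=1e-2, steps=2000, n_mc=256, seed=0):
    """Euler integration of the mean-field ODEs (29) for the mean m_s and covariance P_s."""
    rng = np.random.default_rng(seed)
    m, P = m_prior.copy(), P_prior.copy()
    P_prior_inv = np.linalg.inv(P_prior)
    for _ in range(steps):
        # Monte Carlo approximation of mu_s[y] and mu_s[R] under N(m, P)
        samples = rng.multivariate_normal(m, P, size=n_mc)      # (n_mc, D)
        Y = sigmoid(samples @ Phi)                               # (n_mc, N)
        mu_y = Y.mean(axis=0)
        mu_R = np.diag((Y * (1.0 - Y)).mean(axis=0))
        dm = -P @ (Phi @ (mu_y - d) + P_prior_inv @ (m - m_prior))
        dP = -P @ Phi @ mu_R @ Phi.T @ P - P @ P_prior_inv @ P + P
        m, P = m + ds * dm, P + ds * dP
        P = 0.5 * (P + P.T)  # guard symmetry against round-off
    return m, P  # approximates (m_*, P_*) from (31)-(32)
```

The fixed point of this iteration is the Gaussian variational approximation discussed above; the ensemble methods of Section 2 approximate the same object without ever forming \(P_{s}\) exactly.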
## 4 Algorithmic implementation A typical way of time-stepping the interacting particle systems presented in this paper is the forward-Euler method. However, due to its restricted domain of stability, using this method can lead to restrictions on the step-size \(\Delta s\). In this section we describe tamed discretizations for the homotopy based moment matching formulation (16) and the deterministic second-order dynamical sampler (20). We introduce a step-size \(\Delta s\geq 0\) and discrete times \(s_{k}=k\Delta s\). Further, we use the shorthand \(\theta_{s_{k}}\approx\theta_{k},m_{\theta_{s_{k}}}\approx m_{k},P_{\theta_{s_ {k}}}\approx P_{k},\Theta_{s_{k}}\approx\Theta_{k},\mu_{\theta_{s_{k}}} \approx\mu_{k}\) in the forthcoming subsections. ### Homotopy using moment matching We employ modifications to the time stepping for moment-matching method (16) by using the following tamed discretizations \[\theta_{k+1}^{j}=\theta_{k}^{j}-\frac{\Delta s}{2}P_{k}\Phi\left(M_{k}\Phi^{ \mathrm{T}}(\theta_{k}^{j}-m_{k})+2(\mu_{k}[y]-d)\right) \tag{37}\] where \[P_{k}=\frac{1}{J}\Theta_{k}\Theta_{k}^{\mathrm{T}}\] and \[M_{k}=\left(\Delta s\Phi^{\mathrm{T}}P_{k}\Phi+\mu_{k}[R]\right)^{-1} \tag{38}\] for \(j=1,\ldots,J\). As discussed in Section 2.1, we propagate \(\theta_{k}^{j}\) forward by evolving the associated empirical mean and ensemble deviations using (17)-(18). The resulting time-stepping of (17)-(18) is of the form \[m_{k+1}=m_{k}-\Delta sP_{k}\Phi\left(\mu_{k}[y]-d\right) \tag{39}\] and \[\Theta_{k+1}=\Theta_{k}-\frac{\Delta s}{2}P_{k}\Phi M_{k}\Phi^{\mathrm{T}} \Theta_{k} \tag{40}\] for the ensemble mean and ensemble deviations, respectively. **Remark 2**: _Inverting the \(N\times N\) matrix (38) can prove prohibitive for large data sets. However, since \(R(\theta)\) is diagonal, taking the full inverse in (38) can be replaced by inverting the diagonal entries of \(\Delta s\Phi^{\mathrm{T}}P_{k}\Phi+\mu_{k}[R]\) only, as proposed in [2]. This inexpensive approximation still provides improved stability compared to an explicit Euler discretization of (18)._ We provide pseudo-code summarizing the second-order moment matching method described in Algorithm 1. ``` Inputs: Data set \(\{(\phi^{n},d^{n})\}_{n=1}^{N}\); feature map \(\Phi=\{\phi^{n}\}_{n=1}^{N}\); initial ensemble \(\{\theta_{0}^{j}\}_{j=1}^{J}\) drawn from a Gaussian distribution; step-size \(\Delta s\) and \(K\) such that \(\Delta sK=1\). for\(k=0\)to\(K-1\)do: 1 Evaluate ensemble mean \(m_{k}\) (9), ensemble deviations \(\Theta_{k}\) (11), covariance matrix \(P_{k}\) (10), \(y\) (6), \(\mu_{k}[y]\), \(\mu_{k}[R]\) and \(M_{k}\) (38). 2 Evolve \(m_{k}\) and \(\Theta_{k}\) using (39)- (40). 3 Determine \(\{\theta_{k}^{j}\}_{j=1}^{J}\) from \(m_{k+1}\) and \(\Theta_{k+1}\). endfor Output: Final ensemble \(\{\theta_{K}^{j}\}_{j=1}^{J}\). ``` **Algorithm 1**Homotopy using moment matching method for Bayesian inference ### Deterministic second-order dynamical sampler We employ the idea of Trotter splitting to solve (20) numerically. 
The required splitting is provided by the evolution equation (16), already encountered in the homotopy approach, and the remainder \[\frac{\mathrm{d}\theta_{s}^{j}}{\mathrm{d}s}=-\frac{1}{2}P_{\theta_{s}}P_{\text{ prior}}^{-1}(\theta_{s}^{j}+m_{\theta_{s}}-2m_{\text{prior}})+\frac{1}{2}( \theta_{s}^{j}-m_{\theta_{s}}) \tag{41}\] Therefore, for every time step \(k\), given \(\{\theta_{k}^{j}\}_{j=1}^{J}\), we first compute \(\theta_{k+1/2}^{j}\) using the second order moment matching method (37) rewritten as \[\theta_{k+1/2}^{j}=\theta_{k}^{j}-\frac{\Delta s}{2}P_{k}\Phi\left(M_{k}\Phi^{ \mathrm{T}}(\theta_{k}^{j}-m_{k})+2(\mu_{k}[y]-d)\right) \tag{42}\] for \(j=1,2,...,J\). Equivalently, we can obtain \(\theta_{k+1/2}^{j}\) by evaluating \(m_{k+1/2}\) and \(\Theta_{k+1/2}\) as stated in (39)-(40) with subscript \(k+1\) replaced by \(k+1/2\). In the second half step, we approximate (41) using the following scheme: \[\begin{split}\theta_{k+1}^{j}&=\theta_{k+1/2}^{j}- \frac{\Delta s}{2}P_{k+1/2}\left(\Delta sP_{k+1/2}+P_{\text{prior}}\right)^{-1} \left(\theta_{k+1/2}^{j}+m_{k+1/2}-2m_{\text{prior}}\right)\\ &\quad+\frac{\Delta s}{2}(\theta_{k+1/2}^{j}-m_{k+1/2})\end{split} \tag{43}\] Again, if \(P_{\text{prior}}\) is diagonal, full matrix inversion in (43) can be replaced by inverting the diagonal only. We provide pseudo-code describing the algorithm for deterministic second-order dynamical sampler in Algorithm 2. ``` Inputs: Data set \(\{(\phi^{n},d^{n})\}_{n=1}^{N}\); feature map \(\Phi=\{\phi^{n}\}_{n=1}^{N}\); initial ensemble \(\{\theta_{0}^{j}\}_{j=1}^{J}\) drawn from a Gaussian distribution; step-size \(\Delta s\); threshold value \(\epsilon>0\). for\(k\geq 0\)do: 1 Numerically solve ODE (20) by Trotter splitting: 1. Evaluate ensemble mean \(m_{k}\) (9), ensemble deviations \(\Theta_{k}\) (11), covariance matrix \(P_{k}\) (10), \(y\) (6), \(\mu_{k}[y]\), \(\mu_{k}[R]\) and \(M_{k}\) (38) 2. Determine \(\{\theta_{k+1/2}^{j}\}_{j=1}^{J}\) by evolving \(m_{k}\) and \(\Theta_{k}\) using the time-stepping (39)-(40). 3. Evaluate covariance matrix \(P_{k+1/2}\) (10). 4. Determine \(\{\theta_{k+1}^{j}\}_{j=1}^{J}\) using the time-stepping (43). 2. Update the ensemble \(\{\theta_{k}^{j}\}_{j=1}^{J}\rightarrow\{\theta_{k+1}^{j}\}_{j=1}^{J}\). if\(\frac{\|P_{k+1}-P_{k}\|_{2}}{\|P_{k}\|_{2}}<\epsilon\); break endfor Output: Final ensemble \(\{\theta_{k}^{j}\}_{j=1}^{J}\). ``` **Algorithm 2**Deterministic second-order dynamical sampler for Bayesian inference Application to Bayesian logistic regression in ReLU networks Neural networks using ReLU activation functions are possibly the most widely used neural network architectures for classification. However, it has been proven that these networks exhibit arbitrarily high confidence far away from the training data when fitted by minimizing the negative log-likelihood \(\psi\) or, equivalently, by approximating the maximum likelihood estimator (MLE), denoted by \(\tilde{\theta}_{\text{MLE}}\), as demonstrated in [22, 20]. Thus, this architecture along with a MLE training scheme is not robust and does not provide any measure of uncertainty in the model's predictions. One way of obtaining predictive uncertainty is to place distributions over the weights of a ReLU network, which leads to Bayesian neural networks. 
The idea of replacing the MLE \(\tilde{\theta}_{\text{MLE}}\) by a posterior measure \(\tilde{\pi}_{\text{post}}\) over parameters \(\tilde{\theta}\) therefore enables us to make better informed predictions and to know when our model predictions are not to be trusted. To understand how uncertainty might be expressed in this setting, we put a prior \(\tilde{\pi}_{\text{prior}}\) on the parameters \(\tilde{\theta}\) which can be thought of as incorporating some prior knowledge and then refining it based on data to learn a posterior distribution. That is, after observing \(\mathcal{D}\), we can get the posterior measure through Bayes formula (2). This Bayesian approach to neural networks introduces a certain degree of computational complexity as we now need to sample from \(\tilde{\pi}_{\text{post}}\) instead of minimizing \(\Psi\) which can also be computationally expensive. Computational approximations to \(\tilde{\pi}_{\text{post}}\) have been enabled by advances in Markov chain Monte Carlo (MCMC) methods (see [31]). However, even today's most sophisticated MCMC methods are rendered impractical for deep neural network architectures. Therefore, to make the Bayesian approach tractable, we focus on a last layer Bayesian approximation that places a prior distribution only on the output layer's parameters. Thus, we decompose an \(l\)-layered ReLU network \(G_{\tilde{\theta}}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) into a feature map \(\phi_{\tilde{\theta}}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{d}\) consisting of the first \(l-1\) layers of the ReLU network and the output layer, i.e., \[G_{\tilde{\theta}}(x)=\sigma(\langle\theta,\phi_{\tilde{\theta}}(x)\rangle)\] with \(\tilde{\theta}=(\hat{\theta}^{\text{T}},\theta^{\text{T}})^{\text{T}}\). One now first trains the complete network using a (regularised) MLE approach, which provides \(\tilde{\theta}_{\text{MLE}}\) and the associated trained feature map \[\phi(x):=\phi_{\tilde{\theta}_{\text{MLE}}}(x). \tag{44}\] Furthermore, upon defining the input features \(\phi^{n}=\phi(x^{n})\), \(n=1,\ldots,N\), over the data set \(\{(x^{n},d^{n})\}_{n=1}^{D}\), this architecture is now equivalent to a Bayesian logistic regression problem in the output layer parameters \(\theta\) as discussed in detail in Section 1.1. In this paper, we analyze the performance of ReLU networks in the case of binary classification. While the work in [25] focuses on Laplace approximations for Bayesian inference, we employ algorithms for Bayesian logistic regression based on the methods proposed in Section 2. We demonstrate experimentally that our methods in conjunction with pre-trained deterministic ReLU networks provide desirable uncertainty estimates. However, it should be noted that the methods proposed in this paper are not limited to a 'last layer' use, but can be easily extended to multiple layers or the entire network. We use a 3-layer ReLU network with 20 hidden units at each layer and \(D=50\) units at the output layer in the subsequent numerical experiments. The data set is constructed by generating a 2D binary classification data using scikit-learn. The MLE estimator \(\tilde{\theta}_{\text{MLE}}\) is obtained using the PyTorch library. Figure 1 depicts the above generated data set. ## 6 Numerical experiments In this section, we consider a numerical experiment for binary classification problem as an illustration for uncertainty quantification in ReLU networks. We employ the already described 3-layer neural network architecture \(G_{\tilde{\theta}}(x)\). 
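For concreteness, the following PyTorch/scikit-learn sketch shows one way the deterministic network and its frozen feature map (44) might be set up. The specific data generator (`make_moons`), the exact layer arrangement, and the weight-decay value are assumptions made for illustration; the text only specifies a 2D scikit-learn classification set, 20 hidden units per hidden layer, a \(D=50\)-dimensional output-layer parameter vector, and the SGD settings reported below.

```python
import torch
import torch.nn as nn
from sklearn.datasets import make_moons

# Hypothetical 2D binary classification data (the paper only states "scikit-learn").
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
X = torch.tensor(X, dtype=torch.float32)
y = torch.tensor(y, dtype=torch.float32)

class ReLUNet(nn.Module):
    """G(x) = sigma(<theta, phi(x)>): a ReLU feature map followed by a linear output layer."""
    def __init__(self, d_in=2, hidden=20, d_feat=50):
        super().__init__()
        self.phi = nn.Sequential(                       # feature map (first l-1 layers)
            nn.Linear(d_in, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, d_feat), nn.ReLU(),
        )
        self.theta = nn.Linear(d_feat, 1, bias=False)   # output-layer parameters theta

    def forward(self, x):
        return self.theta(self.phi(x)).squeeze(-1)      # logits; sigma is applied in the loss

net = ReLUNet()
opt = torch.optim.SGD(net.parameters(), lr=3e-4, momentum=0.9, weight_decay=1e-4)
loss_fn = nn.BCEWithLogitsLoss()                        # cross-entropy loss (5)

for _ in range(5000):                                   # (regularised) MLE training
    opt.zero_grad()
    loss_fn(net(X), y).backward()
    opt.step()

with torch.no_grad():                                   # frozen feature map (44)
    Phi = net.phi(X).numpy().T                          # (D, N) = (50, N) feature matrix
```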
The training data set \(\{(x^{n},d^{n})\}_{n=1}^{N}\) consists of inputs and the associated labels. We train the ReLU network using stochastic gradient descent (SGD) with 0.9 momentum, learning rate of \(3\times 10^{-4}\) and weight-decay for training over the 2D binary classification data set 1, using \(N=30\) test points for toy binary classification problem. SGD minimizes the cross entropy loss quantity (5) across the training data set, and we denote the computed minimizer by \(\tilde{\theta}_{\text{MLE}}\). The computed set of parameters (\(\hat{\theta}_{\text{MLE}}\)) is then used to provide the feature maps (44) which is used for Bayesian logistic regression. The chosen prior is Gaussian with mean \(m_{\text{prior}}=0\) and covariance matrix \(P_{\text{prior}}=2I\). Using the ensemble of particles distributed according to the posterior \(\tilde{\pi}_{\text{post}}\), approximated with the homotopy based moment-matching method (39)-(40) and the second-order dynamical sampler (43), the predictive distribution is estimated as \[\pi(d=1|x,\mathcal{D})=\frac{1}{J}\sum_{j=1}^{J}\sigma(\langle\theta_{*}^{j}, \phi_{\hat{\theta}_{\text{MLE}}}(x)\rangle), \tag{45}\] where \(\{\theta_{*}^{j}\}_{j=1}^{J}\) is the final ensemble of particles obtained using Algorithm 1 and Algorithm 2. The associated Bayesian posterior distribution now translates uncertainty in weights to uncertainty in model predictions. Results for both methods introduced in this paper are reported using an ensemble size of \(J=200\). Algorithm 1 is implemented with step-size \(\Delta s=10^{-3}\) while Algorithm 2 is implemented with \(\Delta s=10^{-1}\) and \(\Delta sK=20\). The results from Algorithms 1 & 2 are compared to the following two alternative methods. Figure 1: 2D binary classification data set **Laplace approximations.** We compare the uncertainty estimates provided by the proposed EnKF based methods for Bayesian approximation over the output layer parameters to the last-layer Laplace approximation (LLLA) for inference as introduced in [25]. In this case, we perform a Laplace approximation to get the posterior of the weights of the last layer, assuming the previous layers to be fixed at MAP estimates. So, for unknown output layer parameters \(\theta\), we infer \[\pi(\theta|\mathcal{D})=\mathcal{N}(\theta|\theta_{\mathrm{MLE}},H^{-1}) \tag{46}\] where \(H\) is the Hessian of the negative log-posterior with respect to \(\theta\) at \(\theta_{\mathrm{MLE}}\). The predictive distribution in the case of binary classification is thus given by \[\pi(d=1|x,\mathcal{D})=\int\sigma(\langle\theta,\phi(x)\rangle\pi(\theta| \mathcal{D})\mathrm{d}\theta, \tag{47}\] where \(\pi(\theta|\mathcal{D})\) is approximated with (46) and the integral (47) is computed using a prob approximation as described in [25, 30]. **Ensemble learning.** As a non-Bayesian method, we investigate the uncertainty estimates from ensemble learning (or deep ensembles), introduced in [26, 33]. The technique uses an ensemble of deterministic networks, meaning that each network in the ensemble produces a point estimate rather than a distribution. We train an ensemble of \(M=5\) ReLU networks independently, using the entire training data set to train each network. Given an input point \(x_{n}\), target label \(d_{n}\) and cross-entropy loss \(\Psi(\tilde{\theta},x_{n},d_{n})\), we generate an adversarial sample \[x_{n}^{*}=x_{n}+\xi\ \mathrm{sign}(\nabla_{\theta}\Psi(\tilde{\theta},x_{n},d_{ n})),\] where \(\xi\sim\mathcal{N}(0,0.01)\). 
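Evaluating the ensemble-based predictive distribution (45) amounts to averaging per-particle sigmoid outputs. A short NumPy sketch (assuming the final ensemble is stored column-wise) is given below, together with the confidence measure \(\max(p,1-p)\) used in the experiments.

```python
import numpy as np

def predictive_probability(theta_ens, Phi_query):
    """Monte Carlo estimate (45) of pi(d = 1 | x, D).

    theta_ens : (D, J) final ensemble from Algorithm 1 or Algorithm 2.
    Phi_query : (D, M) feature map phi(x) evaluated at M query points.
    """
    logits = Phi_query.T @ theta_ens                 # <theta^j, phi(x)> for every particle
    probs = 1.0 / (1.0 + np.exp(-logits))            # sigma applied particle-wise
    return probs.mean(axis=1)                        # ensemble average (45)

def confidence(p1):
    """Confidence of a binary prediction: the maximum predictive probability."""
    return np.maximum(p1, 1.0 - p1)
```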
As described in [26], using adversarial samples for training by adding perturbations in the direction the network is likely to increase the loss provides a 'better random' direction for smoothing predictive distributions. For \(M\) models with MLE estimated parameters \(\{\tilde{\theta}_{m}\}_{m=1}^{M}\), we evaluate the ensemble predictions as \[\pi(d=1\mid x) =M^{-1}\sum_{m=1}^{M}\pi(d=1\mid x,\tilde{\theta}_{m}), \tag{48}\] \[=M^{-1}\sum_{m=1}^{M}G_{\tilde{\theta}_{m}}(x),\] where \(\{\tilde{\theta}_{m}\}_{m=1}^{M}\) denotes the parameters of the \(m^{\mathrm{th}}\) model in the ensemble. For classification, this corresponds to averaging the predicted probabilities. We obtain results for the model's confidence in its prediction on and away from any input data \(x\), where confidence is defined as the maximum predictive probability. In the case of a binary classification problem, confidence in prediction can be expressed as \(\max_{i\in\{0,1\}}p(d=i\mid x)\). Ideally, one would want the model to convey low confidence with such inputs. We next report on the results from these two experimental settings. ### Predictive uncertainty in ReLU networks As shown in Figure 2 and Figure 3, MLE predictions have very high confidence everywhere besides the region close to the decision boundary. Using deep ensembles of ReLU networks improves the accuracy of the prediction, which results in a better decision boundary and shows increased uncertainty estimates near the decision boundary. However, both MLE and ensemble learning predictions do not express uncertainty far away from the training data. On the other hand, last-layer Bayesian approximations, implemented with either Laplace, the homotopy moment matching method or second-order dynamical sampler, assign relatively high confidence close to the training data and are uncertain otherwise. In other words, the region of high confidence is identified much closer to the training data. All Bayesian approximations closely follow the decision boundary obtained using the MLE estimates and thus do not negatively impact the network's predictive accuracy. Furthermore, the second-order dynamical sampler assigns higher confidence closer to training data than any other algorithm and allows one to use a larger step size \(\Delta s\), and can therefore be recommended for further use. It can be noted that last-layer Laplace approximations also improve predictive uncertainty but assigns lower confidence than our methods on and near the training data set. Figure 2: Binary classification on a toy dataset using (a) MAP estimates, (b) ensemble of neural networks, last-layer Gaussian approximations over the weights obtained via (c) Laplace approximation, (d) moment matching method, (e) deterministic second-order dynamical sampler. Background colour depicts the confidence in classification while black line represents the decision boundary obtained for the toy classification problem. Figure 3: Zoomed-out versions of the results in Figure 2 for binary classification on a toy data set using (a) MAP estimates, (b) ensemble of neural networks, last-layer Gaussian approximations over the weights obtained via (c) Laplace approximation, (d) moment matching method, (e) deterministic second-order dynamical sampler. Background colour depicts the confidence in classification. In Figure 3, we show a zoomed out version of the results in Figure 2 to capture confidence levels significantly away from training data. 
It can be seen that the MLE estimate and ensemble learning demonstrate high confidence in the entire domain. The Bayesian approximations, even when applied to just the last layer of a ReLU network, give desirable uncertainty estimates.

### Uncertainty on out-of-distribution data

In this experiment, we generate test samples on a large grid, such that the test points deviate significantly from the training samples. For each point in the test data set, we evaluate its Euclidean distance \(\delta>0\) from the nearest training point. We then evaluate the model's confidence in classifying these out-of-distribution (OOD) samples as a function of \(\delta\). The results are shown in Figure 4. As the distance from the training set increases, the MLE technique remains extremely overconfident: the MLE-trained network yields arbitrarily high confidence away from the training data and is therefore not robust. Using an ensemble of ReLU networks also does not improve the uncertainty estimates and has little effect on the confidence on OOD data. However, last-layer Bayesian approximations assign lower confidence to OOD test data. As the distance from the training set increases, the model becomes less confident in its prediction and the level of confidence converges to a constant. Furthermore, our approaches assign maximum confidence (higher than the Laplace approximation) when \(\delta=0\), i.e., on in-distribution data.

Figure 4: Confidence of MAP, ensembles of neural networks, last-layer Laplace approximation, moment matching method, and deterministic second-order dynamical sampler as functions of \(\delta\) over the test set. Thick blue lines and shades correspond to means and \(\pm\) standard deviations, respectively. Dashed black lines signify the desirable confidence for \(\delta\) sufficiently high.

### Effect of varying ensemble sizes on predictive uncertainty

We also analyze the effect of varying ensemble sizes on the inference of the last-layer network parameters. As the ensemble size increases, the region of high confidence is identified much closer to the training data, while closely following the decision boundary obtained by the MLE estimator. For parameter dimension \(D=50\), as the ensemble size \(J\rightarrow\infty\) (here \(J=300\)), we observe that the interacting particle systems converge to their mean-field limit. It can also be concluded that a large ensemble size (\(J>D\)) provides better uncertainty estimates than small ensemble sizes (\(J\leq D\)). However, using an ensemble size \(J<D\) still results in better uncertainty estimates near the decision boundary than those obtained from MLE estimates.

## 7 Conclusions

In this paper, we have presented two extensions of EnKF and related interacting particle systems to Bayesian logistic regression. We have proven quantitative convergence rates for these systems to their mean-field limits as the number of particles tends to infinity. We have employed both methods for Bayesian inference in ReLU networks with the cross-entropy loss function. The numerical results confirm the effectiveness of the proposed methods for quantifying uncertainty. They have also shown that these uncertainty estimates make ReLU networks more robust with respect to distributional shifts.

**Acknowledgements:** The research has been partially funded by the Deutsche Forschungsgemeinschaft (DFG) - Project-ID 318763901 - SFB1294.
The authors would also like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme _The Mathematical and Statistical Foundation of Future Data-Driven Engineering_ where work on this paper was undertaken. This work was supported by EPSRC grant no EP/R014604/1.

Figure 5: Effect of varying ensemble sizes (\(J\)) on confidence in prediction for binary classification using proposed ensemble sampling methods for Bayesian inference over the network's output (last) layer.
2310.00447
Optimizing Parameters of the DC Power Flow
Many power system operation and planning problems use the DC power flow approximation to address computational challenges from the nonlinearity of the AC power flow equations. The DC power flow simplifies the AC power flow equations to a linear form that relates active power flows to phase angle differences across branches, parameterized by coefficients based on the branches' susceptances. Inspired by techniques for training machine learning models, this paper proposes an algorithm that seeks optimal coefficient and bias parameters to improve the DC power flow approximation's accuracy. Specifically, the proposed algorithm selects the coefficient and bias parameter values that minimize the discrepancy, across a specified set of operational scenarios, between the power flows given by the DC approximation and the power flows from the AC equations. Gradient-based optimization methods like Broyden-Fletcher-Goldfarb-Shanno (BFGS), Limited-Memory BFGS (L-BFGS), and Truncated Newton Conjugate-Gradient (TNC) enable solution of the proposed algorithm for large systems. After an off-line training phase, the optimized parameters are used to improve the accuracy of the DC power flow during on-line computations. Numerical results show several orders of magnitude improvements in accuracy relative to a hot-start DC power flow approximation across a range of test cases.
Babak Taheri, Daniel K. Molzahn
2023-09-30T17:48:08Z
http://arxiv.org/abs/2310.00447v2
# Optimizing Parameters of the DC Power Flow ###### Abstract Many power system operation and planning problems use the DC power flow approximation to address computational challenges from the nonlinearity of the AC power flow equations. The DC power flow simplifies the AC power flow equations to a linear form that relates active power flows to phase angle differences across branches, parameterized by coefficients based on the branches' susceptances. Inspired by techniques for training machine learning models, this paper proposes an algorithm that seeks optimal coefficient and bias parameters to improve the DC power flow approximation's accuracy. Specifically, the proposed algorithm selects the coefficient and bias parameter values that minimize the discrepancy, across a specified set of operational scenarios, between the power flows given by the DC approximation and the power flows from the AC equations. Gradient-based optimization methods like Broyden-Fletcher-Goldfarb-Shanno (BFGS), Limited-Memory BFGS (L-BFGS), and Truncated Newton Conjugate-Gradient (TNC) enable solution of the proposed algorithm for large systems. After an off-line training phase, the optimized parameters are used to improve the accuracy of the DC power flow during on-line computations. Numerical results show several orders of magnitude improvements in accuracy relative to a hot-start DC power flow approximation across a range of test cases. DC power flow, machine learning, parameter optimization ## I Introduction Power flow analyses are integral to many applications, such as transfer capacity calculations, transmission loading relief, optimal dispatch, unit commitment, expansion planning, etc. [1, 2]. By relating the voltage phasors, power injections, and power flows, the AC power flow equations accurately model the power grid for such applications. However, the nonlinearity of these equations introduces computational challenges such as non-convexities in optimization problems [3, 4, 5]. These challenges frequently limit the direct use of the AC power flow equations in many applications, particularly those demanding large-scale and time-critical computations. To bypass these challenges, engineers often resort to linear approximations [6]. With a history dating back over a century [7], the DC power flow approximation is one such widely used linearization, primarily due to its computational efficiency and its meaningful system representation. In the DC power flow approximation, the active power flow between buses \(i\) and \(j\), \(p_{ij}\), is dictated by the phase angle difference \(\theta_{i}-\theta_{j}\) with proportionality coefficient \(b_{ij}\): \(p_{ij}=b_{ij}(\theta_{i}-\theta_{j})\). The DC power flow plays an integral role across a broad range of applications, spanning both market operations and traditional power system operation and planning tasks [8]. However, the selection of coefficients \(b_{ij}\) in the DC power flow model, typically based on line parameters, often leaves room for improvement. With line resistance \(r_{ij}\) and reactance \(x_{ij}\), one might choose, for instance, \(b_{ij}=1/x_{ij}\) or \(b_{ij}=-\Im(1/(r_{ij}+jx_{ij}))\), where \(\Im(\,\cdot\,)\) takes the imaginary part of a complex argument. When \(r_{ij}\neq 0\), these choices give similar but not the same values for \(b_{ij}\). The choice of the \(b_{ij}\) coefficients significantly impacts the DC power flow approximation's accuracy [8]. 
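As a quick numerical illustration (with arbitrarily chosen per-unit line parameters), the two coefficient choices differ whenever \(r_{ij}\neq 0\):

```python
import numpy as np

r, x = 0.02, 0.10                             # illustrative per-unit resistance and reactance
b_ignore_r = 1.0 / x                          # b_ij = 1 / x_ij
b_series = -np.imag(1.0 / (r + 1j * x))       # b_ij = -Im(1 / (r_ij + j x_ij)) = x / (r^2 + x^2)
print(b_ignore_r, b_series)                   # 10.0 versus ~9.615
```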
It is not always clear which coefficient choice provides the best accuracy for a specific application. There are also multiple ways to select bias parameters that adjust power injections and line flows to model shunts, HVDC infeeds, phase shifts, and line losses [8]. This paper proposes a new approach for adaptively choosing the coefficients \(b_{ij}\) and bias parameters in the DC power flow approximation. We leverage ideas from machine learning to tune these coefficients and biases, aiming to optimize the DC power flow approximation's accuracy. To accomplish this, an offline stage solves a plethora of AC power flow problems across a variety of operating conditions to construct a training dataset. Inspired by training methods for machine learning models, we then utilize a gradient-based method (e.g., Broyden-Fletcher-Goldfarb-Shanno (BFGS), Limited-Memory BFGS (L-BFGS), and Truncated Newton Conjugate-Gradient (TNC)) to optimize the values of the \(b_{ij}\) and bias parameters. This process minimizes a specified loss function that quantifies the discrepancy between the DC power flow output and the AC power flow solutions over the training dataset. With these optimized parameter values, we can then apply the DC power flow approximation to the aforementioned applications to achieve accuracy improvements during online computations. Our approach maintains the structure of the DC power flow approximation to enable seamless integration into many existing optimization models and computational algorithms that rely on this structure. We note that this is distinct from prior data-driven power flow modeling approaches, such as those proposed in [9, 10, 11, 12, 13, 14]; see [15, 16, 17] for literature reviews. These prior approaches primarily focus on directly mapping power injections to power flows in a manner reminiscent of Power Transfer Distribution Factor (PTDF) models. These mappings typically disregard the underlying physical system topology, precluding physical intuition as operating conditions change. Moreover, they often consider quantities such as voltage magnitudes and reactive power injections that are neglected in typical DC power flow formulations. While these other power flow modeling approaches are useful in many settings, there is also substantial value in formulations that main tain the conventional DC power flow structure as this enables straightforward adoption in the many existing applications of DC power flow. For instance, using PTDF-style DC power flow models in optimal transmission switching problems (see, e.g., [18]) is substantially more complicated than DC power flow formulations that maintain phase angles [19]. Moreover, while a DC power flow approximation written in terms of phase angles can be easily translated to a PTDF formulation, the reverse is not straightforward, especially when the PTDF formulation is not based on the physical network structure, as is often the case in typical data-driven power flow models. To summarize, the main contributions of this paper are: 1. Introducing an optimization algorithm that adaptively selects the DC power flow approximation's coefficients \(b_{ij}\) and bias parameters that adjust the power injections to account for bus shunt admittances, HVDC infeeds, phase shift injections, and branch losses. 2. Utilizing and comparing various numerical methods such as BFGS, L-BFGS, and TNC to scale our proposed algorithm to large power systems. 3. 
Providing numerical results that demonstrate the superior accuracy of our proposed algorithm under both normal and contingency conditions. The remainder of this paper is organized as follows: Section II overviews the power flow equations. Section III presents our proposed algorithm and solution method. Section IV numerically demonstrates and benchmarks our proposed algorithm. Section V concludes the paper. ## II Background on Power Flow Formulations This section describes the AC power flow and the DC power flow approximation. The AC power flow accurately describes a system's steady-state behavior via nonlinear equations. The DC power flow linearly approximates these equations, thus improving tractability at the cost of introducing inaccuracies. ### _AC Power Flow_ The AC power flow equations model a power system via nonlinear relationships among voltage magnitudes, phase angles, and complex power injections and flows. We first establish notation. Let \(\mathcal{N}\), \(ref\), and \(\mathcal{E}\) denote the set of buses, reference bus, and set of lines, respectively. Each bus \(i\in\mathcal{N}\) has a voltage phasor \(V_{i}\) with phase angle \(\theta_{i}\), a complex power injection \(S_{i}\), and a shunt admittance \(Y_{i}^{S}\). Complex power flows into each terminal of each line \((i,j)\in\mathcal{E}\) are denoted as \(S_{ij}\) and \(S_{ji}\). Each line \((i,j)\in\mathcal{E}\) has a series admittance parameter \(Y_{ij}\) and a shunt admittance parameter \(Y_{ij}^{sh}\). The real and imaginary parts of a complex number are denoted as \(\Re(\,\cdot\,)\) and \(\Im(\,\cdot\,)\), respectively. The transpose of a matrix is represented by \((\,\cdot\,)^{T}\). The power flow equations are: \[P_{i}= \sum_{(i,j)\in\mathcal{E}}p_{ij}+V_{i}^{2}\Re(Y_{i}^{S}),\quad Q_ {i}=\sum_{(i,j)\in\mathcal{E}}q_{ij}-V_{i}^{2}\Im(Y_{i}^{S}), \tag{1a}\] \[p_{ij}= V_{i}^{2}\left(\Re(Y_{ij})+\Re(Y_{ij}^{sh})\right)-V_{i}V_{j} \Re(Y_{ij})\cos(\theta_{i}-\theta_{j})\] \[\quad-V_{i}V_{j}\Im(Y_{ij})\sin(\theta_{i}-\theta_{j}),\] (1b) \[q_{ij}= -V_{i}^{2}\left(\Im(Y_{ij})+\Im(Y_{ij}^{sh})\right)-V_{i}V_{j} \Re(Y_{ij})\sin(\theta_{i}-\theta_{j})\] \[\quad+V_{i}V_{j}\Im(Y_{ij})\cos(\theta_{i}-\theta_{j}). \tag{1c}\] For each bus \(i\), \(P_{i}=\Re(S_{i})\) and \(Q_{i}=\Im(S_{i})\) are the real and reactive power injections, respectively. For line \((i,j)\in\mathcal{E}\), \(p_{ij}\) and \(q_{ij}\) are the real and reactive power flows, respectively. ### _DC Power Flow_ As discussed in [8], there are two categories of DC power flow models, "cold-start" and "hot-start", that assume differing levels of information about the system's operating conditions. Here, we will introduce a generic formulation suitable for both cold-start and hot-start formulations and then show how these formulations differ in their parameter choices. The DC power flow approximation uses several assumptions to linearize the non-linear AC power flow equations: neglect reactive power, assume all voltage magnitudes are constant, and consider angle differences across each transmission line to be small such that the small angle approximation for the sine function is applicable. Applying these assumptions to (1) yields the DC power flow: \[P_{i}-\gamma_{i} =\sum_{j\in\mathcal{N}}b_{ij}\cdot(\theta_{i}-\theta_{j}), \tag{2a}\] \[p_{ij}^{DC} =b_{ij}\cdot(\theta_{i}-\theta_{j})+\rho_{ij}, \tag{2b}\] where \(p_{ij}^{DC}\) is the power flow in line \((i,j)\in\mathcal{E}\). 
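The branch-flow expressions (1b)-(1c) and their DC counterpart (2b) can be evaluated directly; the following NumPy sketch compares them on a single branch with illustrative (assumed) parameter values.

```python
import numpy as np

def ac_branch_flow(Vi, Vj, thi, thj, Y, Ysh=0.0):
    """Active and reactive flow into terminal i of branch (i, j), per (1b)-(1c)."""
    g, b = Y.real, Y.imag
    gsh, bsh = np.real(Ysh), np.imag(Ysh)
    dth = thi - thj
    p = Vi**2 * (g + gsh) - Vi * Vj * (g * np.cos(dth) + b * np.sin(dth))
    q = -Vi**2 * (b + bsh) - Vi * Vj * (g * np.sin(dth) - b * np.cos(dth))
    return p, q

def dc_branch_flow(b_ij, thi, thj, rho_ij=0.0):
    """DC approximation (2b) of the active flow."""
    return b_ij * (thi - thj) + rho_ij

# Single-branch comparison with illustrative values.
r, x = 0.01, 0.10
Y = 1.0 / (r + 1j * x)                                    # series admittance
p_ac, q_ac = ac_branch_flow(1.0, 0.98, 0.05, 0.0, Y)
p_dc = dc_branch_flow(1.0 / x, 0.05, 0.0)                 # cold-start coefficient, zero bias
print(p_ac, p_dc)
```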
As discussed in [8], \(\gamma_{i}\) is a bias parameter that accounts for losses from shunts, HVDC infeeds, and injections modeling phase shifts and branch losses for lines connected to bus \(i\). The bias parameter \(\rho_{ij}\) for line \((i,j)\in\mathcal{E}\) is associated with line losses. Let \(\mathcal{N}^{\prime}=\mathcal{N}\setminus ref\) represent the set of all buses excluding the reference bus, \(\mathbf{P}\) be the vector of net power injections at buses \(i\in\mathcal{N}^{\prime}\), and \(\boldsymbol{\theta}\) be the vector of voltage angles at buses \(i\in\mathcal{N}^{\prime}\). Set \(\theta_{ref}=0\). Furthermore, define \(\mathbf{A}\) as the \(|\mathcal{E}|\times(|\mathcal{N}|-1)\) node-arc incidence matrix describing the connections between the system's buses and branches and let \(\mathbf{b}\) be a length-\(|\mathcal{E}|\) coefficient vector usually obtained using the branch susceptances. The matrix form of (2) is: \[\mathbf{P}-\boldsymbol{\gamma} =\mathbf{B}^{\prime}\cdot\boldsymbol{\theta}, \tag{3a}\] \[\mathbf{p}^{DC} =(\text{diag}(\mathbf{b})\cdot\mathbf{A}\cdot\boldsymbol{\theta})+ \boldsymbol{\rho}, \tag{3b}\] where \(\text{diag}(\,\cdot\,)\) is the diagonal matrix with the argument on the diagonal and \(\mathbf{B}^{\prime}\) is \[\mathbf{B}^{\prime}=\mathbf{A}^{T}\cdot\text{diag}(\mathbf{b})\cdot\mathbf{A}. \tag{3c}\] In (3), \(\mathbf{p}^{DC}\) is a length-\(|\mathcal{E}|\) vector of power flows for each branch and \(\boldsymbol{\rho}\) is a length-\(|\mathcal{E}|\) vector associated with line losses. Solving (3a) for \(\mathbf{\theta}\) and substituting into (3b) yields the so-called PTDF formulation of the DC power flow equations that linearly relates the line flows and real power injections: \[\mathbf{p}^{DC}=\text{diag}(\mathbf{b})\cdot\mathbf{A}\cdot[\mathbf{A}^{T}\cdot \text{diag}(\mathbf{b})\cdot\mathbf{A}]^{-1}\cdot(\mathbf{P}-\mathbf{\gamma})+\mathbf{ \rho}. \tag{4}\] The parameters \(\mathbf{b}\), \(\mathbf{\gamma}\), and \(\mathbf{\rho}\) impact the DC power flow's performance. Cold- and hot-start versions of the DC power flow assume different amounts of prior information when choosing these parameters. #### Ii-C1 Cold-start DC power flow In this version, the coefficient and bias parameters are selected without relying on a nominal AC power flow solution. For instance, the coefficient values \(b_{ij}\) can be selected as either: \[b_{ij}^{cold}=\Im\left(\frac{-1}{r_{ij}+j\cdot x_{ij}}\right)\quad\text{ or }\quad b_{r=0,ij}^{cold}=\frac{1}{x_{ij}}. \tag{5}\] Furthermore, the bias parameters (\(\mathbf{\gamma}\) and \(\mathbf{\rho}\)) are typically set to zero in the cold-start version. These heuristic methods offer simplicity but may not provide adequate accuracy. #### Ii-C2 Hot-start DC power flow A nominal AC power flow solution can provide a good starting point to construct a DC power flow approximation [8]. For instance, the so-called "localized loss modeling" variant of the hot-start DC model in [8] selects: \[b_{ij}^{hot} =b_{ij}v_{i}^{\bullet}v_{j}^{\bullet}\sin(\theta_{i}^{\bullet}- \theta_{j}^{\bullet})/(\theta_{i}^{\bullet}-\theta_{j}^{\bullet}), \tag{6a}\] \[\gamma_{i}^{hot} =\sum_{(i,j)\in\mathcal{E}}\Re(Y_{ij})v_{i}^{\bullet}(v_{i}^{ \bullet}-v_{j}^{\bullet}\cos(\theta_{i}^{\bullet}-\theta_{j}^{\bullet})),\] (6b) \[\rho_{ij}^{hot} =\Re(Y_{ij})v_{i}^{\bullet}(v_{i}^{\bullet}-v_{j}^{\bullet}\cos( \theta_{i}^{\bullet}-\theta_{j}^{\bullet})), \tag{6c}\] where \((\,\cdot\,)^{\bullet}\) denotes quantities from the nominal AC power flow solution. 
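A small worked example of the matrix form (3) with cold-start parameters (and hence of the PTDF form (4), which is algebraically equivalent) is sketched below for an assumed three-bus network; bus 1 is taken as the reference.

```python
import numpy as np

# Illustrative three-bus, three-branch network: branches (1,2), (1,3), (2,3).
x = np.array([0.10, 0.20, 0.25])                # branch reactances (assumed values)
b = 1.0 / x                                     # cold-start coefficients b^cold_{r=0}

# Node-arc incidence A over the non-reference buses {2, 3}: one row per branch,
# +1 at the "from" bus and -1 at the "to" bus (the reference-bus column is omitted).
A = np.array([[-1.0,  0.0],                     # branch 1-2
              [ 0.0, -1.0],                     # branch 1-3
              [ 1.0, -1.0]])                    # branch 2-3
P = np.array([-0.6, -0.4])                      # net injections at buses 2 and 3 (loads)
gamma = np.zeros(2)                             # cold start: zero injection bias
rho = np.zeros(3)                               # cold start: zero flow bias

Bprime = A.T @ np.diag(b) @ A                   # (3c)
theta = np.linalg.solve(Bprime, P - gamma)      # (3a), with theta_ref = 0
p_dc = np.diag(b) @ A @ theta + rho             # (3b)
print(theta, p_dc)
```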
Note that \(\mathbf{\gamma}^{hot}\) denotes injections that model the impacts of branch losses on phase angles and \(\mathbf{\rho}^{hot}\) accounts for the branch losses in the line flow expressions themselves. In the next section, we introduce a machine learning-inspired algorithm to optimize the coefficient (\(\mathbf{b}\)) and bias (\(\mathbf{\gamma}\) and \(\mathbf{\rho}\)) parameters in the DC power flow model. Our proposed algorithm aims to reduce the discrepancy between the power flows predicted by the DC power flow model and the actual power flows from the AC power flow equations. ## III Parameter Optimization Algorithm As illustrated in Fig. 1, our parameter optimization algorithm consists of _offline_ and _online_ stages. The _offline_ stage, a one-time procedure, focuses on computing the optimal parameters, \(\mathbf{b}\), \(\mathbf{\gamma}\), and \(\mathbf{\rho}\), over a range of power injection scenarios. This ensures that our DC model closely aligns with the AC power flow across diverse operating conditions. In the _online_ phase, the DC model, equipped with these optimized parameters, offers rapid and accurate approximations suitable for real-time tasks. Thus, our algorithm invests computational time upfront during offline optimization to reap continual benefits during online applications. Our algorithm is inspired by supervised machine learning: we use power injections as inputs and line flows as targets. However, no traditional machine learning models or neural networks are applied. Rather, the offline phase refines the parameters \(\mathbf{b}\), \(\mathbf{\gamma}\), and \(\mathbf{\rho}\) by solving an optimization problem which has these parameters as decision variables. To optimize the parameters \(\mathbf{b}\), \(\mathbf{\gamma}\), and \(\mathbf{\rho}\), we first formulate a loss function that quantifies the accuracy of the DC power flow approximation by comparing the DC approximation's power flow predictions against the power flows obtained from the AC power flow equations across a set of power injection scenarios. We next compute the sensitivities of the loss function with respect to these parameters. These sensitivities guide the optimization process by indicating the direction in which the parameters should be adjusted to minimize the loss function. Using this sensitivity information, we then apply an optimization method, such as BFGS, L-BFGS, and TNC. These methods offer scalable optimization capabilities, making them well-suited for large power systems. By optimally selecting the parameter values using our algorithm, the DC power flow model's accuracy can be significantly improved across a broad range of power injection scenarios. The details of this algorithm and its implications are presented next. ### _Loss Function_ Here, we introduce a loss function based on the sum of squared two-norm discrepancies between the AC (\(\mathbf{p}_{m}^{AC}\)) and DC (\(\mathbf{p}_{m}^{DC}\)) power flow models across a specified set of power injection scenarios \(\mathcal{M}=1,2,\ldots,S\). This approach is typical in machine learning for its robustness and differentiability. 
Our loss function, \(\mathcal{L}\), is formulated as: \[\mathcal{L}(\mathbf{b},\mathbf{\gamma},\mathbf{\rho}) =\frac{1}{|\mathcal{E}|}\sum_{m\in\mathcal{M}}||\mathbf{p}_{m}^{ DC}-\mathbf{p}_{m}^{AC}||_{2}^{2},\] \[=\frac{1}{|\mathcal{E}|}\sum_{m\in\mathcal{M}}||\text{diag}( \mathbf{b})\cdot\mathbf{A}\cdot[\mathbf{A}^{T}\cdot\text{diag}(\mathbf{b}) \mathbf{A}]^{-1}\] \[\times(\mathbf{P}_{m}-\mathbf{\gamma})+\mathbf{\rho}-\mathbf{p}_{m}^{AC} ||_{2}^{2}, \tag{7}\] Figure 1: Flowchart describing the proposed algorithm. where the constant \(\frac{1}{|\mathcal{E}|}\) normalizes this function based on the system size. As shown in (4) and (7), \(\mathbf{p}_{m}^{DC}\) (and thus \(\mathcal{L}(\mathbf{b},\boldsymbol{\gamma},\boldsymbol{\rho})\)) is a function of the coefficient parameters \(\mathbf{b}\) and the bias parameters \(\boldsymbol{\gamma}\) and \(\boldsymbol{\rho}\). By focusing on the two-norm of the discrepancies, larger deviations are penalized more heavily. This is well aligned with typical applications where a small number of severe approximation errors would be more problematic than a large number of minor errors. One could instead use other norms without major conceptual changes. Finally, the unconstrained optimization problem to find the best coefficient and bias parameters is formulated as: \[\min_{\mathbf{b},\boldsymbol{\gamma},\boldsymbol{\rho}}\quad\mathcal{L}( \mathbf{b},\boldsymbol{\gamma},\boldsymbol{\rho}). \tag{8}\] ### _Sensitivities of the Coefficient and Bias Parameters_ Optimization methods such as BFGS, L-BFGS, and TNC rely on the gradient of the loss function with respect to the parameters in the \(\mathbf{b}\), \(\boldsymbol{\gamma}\), and \(\boldsymbol{\rho}\) vectors, i.e., the sensitivity of \(\mathcal{L}(\mathbf{b},\boldsymbol{\gamma}.\boldsymbol{\rho})\) to infinitesimal changes in \(\mathbf{b}\), \(\boldsymbol{\gamma}\), and \(\boldsymbol{\rho}\) across all power injection scenarios. We first focus on sensitivities for the \(\mathbf{b}\) parameters, denoted as \(\mathbf{g}^{b}\), which are calculated by taking the derivatives of the loss function (7) with respect to the coefficient parameters \(\mathbf{b}\): \[\mathbf{g}^{b}=\frac{2}{|\mathcal{E}|}\sum_{m\in\mathcal{M}}\frac{\partial \mathbf{p}^{DC}}{\partial\mathbf{b}}\bigg{|}_{\mathbf{p}_{m}^{DC}}\Big{(} \mathbf{p}_{m}^{DC}-\mathbf{p}_{m}^{AC}\Big{)},\] (9a) where \[\frac{\partial\mathbf{p}^{DC}}{\partial\mathbf{b}}\] is obtained from the derivative of ( 4 ) with respect to the coefficient parameters \[\mathbf{b}\] : \[\frac{\partial\mathbf{p}^{DC}}{\partial\mathbf{b}} =\text{diag}\Bigg{(}\Big{[}\mathbf{A}[\mathbf{A}^{T}\cdot\text{ diag}(\mathbf{b})\cdot\mathbf{A}]^{-1}(\mathbf{P}-\boldsymbol{\gamma})\Big{]}^{T} \Bigg{)}\times\] \[\Big{(}\mathbf{I}-\text{diag}(\mathbf{b})\cdot\mathbf{A}[ \mathbf{A}^{T}\cdot\text{diag}(\mathbf{b})\cdot\mathbf{A}]^{-1}\mathbf{A}^{T} \Big{)}. \tag{9b}\] where \(\mathbf{I}\) is the identity matrix. The appendix provides a detailed derivation of (9b). Like the coefficient parameters \(\mathbf{b}\), the bias parameters \(\boldsymbol{\gamma}\) significantly impact the accuracy of DC power flow. 
The gradient of the loss function with respect to the bias parameters \(\boldsymbol{\gamma}\) is represented by \(\mathbf{g}^{\gamma}\): \[\mathbf{g}^{\gamma}=\frac{2}{|\mathcal{E}|}\sum_{m\in\mathcal{M}}\frac{ \partial\mathbf{p}^{DC}}{\partial\boldsymbol{\gamma}}\bigg{|}_{\mathbf{p}_{m}^ {DC}}\Big{(}\mathbf{p}_{m}^{DC}-\mathbf{p}_{m}^{AC}\Big{)},\] (10a) where \[\frac{\partial\mathbf{p}^{DC}}{\partial\boldsymbol{\gamma}}\] is calculated by taking the derivative of ( 4 ) with respect to bias parameters \[\boldsymbol{\gamma}\] : \[\frac{\partial\mathbf{p}^{DC}}{\partial\boldsymbol{\gamma}}=-\text{diag}( \mathbf{b})\cdot\mathbf{A}[\mathbf{A}^{T}\cdot\text{diag}(\mathbf{b})\cdot \mathbf{A}]^{-1}. \tag{10b}\] Finally, the gradient of the loss function with respect to the bias parameters \(\boldsymbol{\rho}\) is represented by \(\mathbf{g}^{\rho}\): \[\mathbf{g}^{\rho}=\frac{2}{|\mathcal{E}|}\sum_{m\in\mathcal{M}}\frac{ \partial\mathbf{p}^{DC}}{\partial\boldsymbol{\rho}}\bigg{|}_{\mathbf{p}_{m}^{ DC}}\Big{(}\mathbf{p}_{m}^{DC}-\mathbf{p}_{m}^{AC}\Big{)}, \tag{11}\] where \(\frac{\partial\mathbf{p}^{DC}}{\partial\boldsymbol{\rho}}\) is calculated by taking the derivative of (4 ) with respect to bias parameters \(\boldsymbol{\rho}\), which is the identity matrix \(\mathbf{I}\). These sensitivities enable gradient-based methods for optimizing the parameters \(\mathbf{b}\), \(\boldsymbol{\gamma}\), and \(\boldsymbol{\rho}\), as we will describe next. ### _Optimization Formulation and Solution Methods_ With known sensitivities, many gradient-based methods such as BFGS, L-BFGS, and TNC can be applied to the unconstrained optimization problem (8). We next summarize the key characteristics of each method. Our numerical results in the following section empirically compare the performance of each method for a range of test cases. **BFGS**: An iterative quasi-Newton approach proposed by Broyden, Fletcher, Goldfarb, and Shanno [20, p. 136], BFGS uses the gradient to update an inverse Hessian matrix approximation, bypassing the need for the complete Hessian matrix. **L-BFGS**: An evolution of the BFGS method that uses a limited memory approach to handle large datasets. **Conjugate-Gradient (CG)**: The CG method uses a nonlinear conjugate gradient algorithm [20, pp. 120-122], which only relies on the first derivatives. **Newton-CG**: The Newton-CG method (also known as the truncated Newton method) uses a CG method to compute the search direction [20, p. 168]. **Truncated Newton Conjugate-Gradient (TNC)**: The TNC method uses a truncated Newton algorithm to minimize a function with variables subject to bounds [20, 21]. We will numerically assess the performance of each optimization method when solving problem (8). ## IV Experimental Results and Discussion This section presents and benchmarks the results obtained from our proposed algorithm. To demonstrate the model's efficacy, we compare power flows from our machine learning inspired algorithm to those from traditional DC power flow formulations and the AC power flow model. These comparisons consider multiple illustrative test systems from [22, 23]. For these test cases, we generated \(10,000\) power injection scenarios (\(8,000\) for offline training and \(2,000\) for testing). These scenarios were created by multiplying the nominal power injections by a normally distributed random variable with zero mean and standard deviation of \(10\%\). We initialize the proposed algorithm with hot-start parameters. 
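The offline stage can be prototyped compactly with `scipy.optimize.minimize`. The sketch below optimizes \(\mathbf{b}\), \(\boldsymbol{\gamma}\), and \(\boldsymbol{\rho}\) for the loss (7) on a small synthetic problem; for readability it lets SciPy approximate the gradient numerically, whereas the paper supplies the analytic sensitivities (9)-(11), which is what allows the approach to scale to large systems. The AC targets here are synthetic stand-ins; in the actual pipeline they come from the AC power flow solutions described above.

```python
import numpy as np
from scipy.optimize import minimize

def dc_flows(b, A, P, gamma, rho):
    """DC line flows for one injection scenario, PTDF form (4)."""
    theta = np.linalg.solve(A.T @ (b[:, None] * A), P - gamma)
    return b * (A @ theta) + rho

def training_loss(z, A, scenarios, nE, nB):
    """Loss (7); z stacks the decision variables [b, gamma, rho]."""
    b, gamma, rho = z[:nE], z[nE:nE + nB], z[nE + nB:]
    total = 0.0
    for P, p_ac in scenarios:
        resid = dc_flows(b, A, P, gamma, rho) - p_ac
        total += resid @ resid
    return total / nE

# Small synthetic training set (three-bus example; targets are stand-ins for AC flows).
rng = np.random.default_rng(0)
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, -1.0]])
nE, nB = A.shape
b_hot = np.array([10.0, 5.0, 4.0])                               # hot-start-like initialization
scenarios = []
for _ in range(100):
    P = np.array([-0.6, -0.4]) + 0.05 * rng.standard_normal(nB)  # perturbed injections
    p_ac = dc_flows(0.95 * b_hot, A, P, np.zeros(nB), np.zeros(nE)) \
           + 0.01 * rng.standard_normal(nE)                      # synthetic "ground truth"
    scenarios.append((P, p_ac))

z0 = np.concatenate([b_hot, np.zeros(nB), np.zeros(nE)])         # initialize with hot-start values
res = minimize(training_loss, z0, args=(A, scenarios, nE, nB), method="L-BFGS-B")
b_opt, gamma_opt, rho_opt = res.x[:nE], res.x[nE:nE + nB], res.x[nE + nB:]
```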
Solutions to the AC power flow problems were computed using PowerModels.jl[24] on a computing node of the Partnership for an Advanced Computing Environment (PACE) cluster at Georgia Tech. This computing node has a 24-core CPU and 32 GB of RAM. The proposed algorithm is implemented in Python 3.0 using a Jupyter Notebook. To minimize the loss function, we used the BFGS, L-BFGS, TNC, CG, and Newton-CG implementations from the scipy.optimize.minimize library. ### _Benchmarking Optimization Methods_ First, we assess the performance of the BFGS, L-BFGS, TNC, CG, and Newton-CG methods, as detailed in Section III-C, using the IEEE 300-bus system as a representative example. Each method uses a convergence tolerance of \(1\times 10^{-6}\). Fig. 2 shows the evolution of the training loss (i.e., (7)) evaluated for the training scenarios) and the training time for each method. For the IEEE 300-bus system as well as the other test cases we considered, the L-BFGS method had the fastest performance during the offline training step for most of the cases. However, the quality of the resulting parameters, as measured by the loss function value for the training dataset, exhibited mixed results with the TNC, BFGS, L-BFGS, CG, and Newton-CG methods each achieving the best performance for some test cases. Due to their overall performance, we choose to focus on applying the TNC, BFGS, and L-BFGS methods to (8). We will evaluate their performance on the test datasets in Section IV-C. ### _Comparison of Parameter Values Across Selection Methods_ Here, we illustrate the distributions of cold-start, hot-start, and optimized parameters \(\mathbf{b}\), \(\boldsymbol{\gamma}\), and \(\boldsymbol{\rho}\) across various test systems using box plots. Each box shows the interquartile range, representing the middle \(50\%\) of the data, with the central line indicating the median. The whiskers, extending from each box, display the data within \(1.5\) times the interquartile range. Data points outside of these whiskers are considered outliers and are plotted as individual dots. The horizontal lines at the whiskers' ends indicate the \(90^{th}\) percentile of the data. For each test system, the box plot figures highlight four distributions: the cold-start (\(\mathbf{b}^{cold}\) or \(\mathbf{b}^{cold}_{r=0}\)), hot-start (\(\mathbf{b}^{hot}\), \(\boldsymbol{\gamma}^{hot}\), and \(\boldsymbol{\rho}^{hot}\)), and the results from our optimization algorithm (\(\mathbf{b}^{opt}\), \(\boldsymbol{\gamma}^{opt}\), and \(\boldsymbol{\rho}^{opt}\)). All data are visualized on a logarithmic scale, emphasizing variations across multiple orders of magnitude. These boxplots in Fig. 2 reveal that the distributions of the optimized parameter values align closely with those from existing heuristics for selecting \(\mathbf{b}\) (i.e., \(\mathbf{b}^{cold}\) and \(\mathbf{b}^{cold}_{r=0}\), and \(\mathbf{b}^{hot}\)). This indicates that our proposed algorithm yields parameter values within a reasonable range, with variations that are consistent with traditional heuristics for choosing \(\mathbf{b}\). Furthermore, Fig. 2(b) provides scatter plots comparing the hot-start coefficient values (\(\mathbf{b}^{hot}\)) with optimized ones (\(\mathbf{b}^{opt}\)) across various test cases. The red dashed line at \(45^{\circ}\) in each subplot signifies a one-to-one correlation in the parameter values. Similarly, Figs. 
4 and 5 compare the distributions of the optimized bias parameters \(\boldsymbol{\gamma}^{opt}\) and \(\boldsymbol{\rho}^{opt}\) with the hot-start DC parameters \(\boldsymbol{\gamma}^{hot}\) and \(\boldsymbol{\rho}^{opt}\). Optimized parameters are broadly similar to those from existing heuristics, suggesting an alignment with longstanding power engineering intuition that the line susceptances are a key parameter in dictating power flows. However, there are some lines for which the optimized \(\mathbf{b}\), \(\boldsymbol{\gamma}\), and \(\boldsymbol{\rho}\) values differ from those in traditional heuristics. These differences suggest that targeted adjustments to the \(\mathbf{b}\), \(\boldsymbol{\gamma}\), and \(\boldsymbol{\rho}\) parameters can substantially improve the DC power flow approximation's accuracy. ### _Accuracy with Respect to the AC Power Flow_ To benchmark the accuracy of our algorithm, we next perform comparisons to the AC power flow model. Fig. 6 illustrates the density distributions of errors achieved when using the optimized and traditional DC parameters for the Pegase 1354-bus system over 2000 testing scenarios. Fig. 7 shows the accuracy advantages of our optimized parameters, with maximum errors less than \(0.151\) per unit versus errors up to \(3.965\), \(3.965\), and \(1.484\) per unit resulting from the cold-start DC approximation \(\mathbf{b}^{cold}\) and \(\mathbf{b}^{cold}_{r=0}\) and the hot-start DC approximation with \(\mathbf{b}^{hot}\), \(\boldsymbol{\gamma}^{hot}\), and \(\boldsymbol{\rho}^{hot}\), respectively. Table I provides a detailed comparison of the squared two-norm and \(\infty\)-norm loss functions evaluated for different test cases using three optimization methods: L-BFGS, BFGS, and TNC. While the TNC method exhibits superior performance in many cases, it does not universally outperform the L-BFGS and BFGS methods. For instance, the BFGS method yields better results for certain loss metrics for the IEEE 14-bus case. In addition, the training time for the L-BFGS method is usually much smaller than other methods while having a loss value comparable to the TNC method. For every test case, the use of the optimally selected parameters \(\mathbf{b}^{opt}\), \(\boldsymbol{\gamma}^{opt}\), and \(\boldsymbol{\rho}^{opt}\) reduces the loss function values, implying increased accuracy in the DC power flow. The substantial improvements in both the squared two-norm and \(\infty\)-norm loss demonstrates the effectiveness of our algorithm. For example, applying the TNC method to the 4601-bus test case results in the squared two-norm loss decreasing from \(0.158\) in hot-start DC model to \(0.0009\) (a factor of \(176\) improvement) and the \(\infty\)-norm loss decreasing from \(0.303\) to \(0.058\) (a factor of \(5\times\) improvement). Similar trends are observed across all test cases and optimization methods. The table also shows the training time (in seconds) for various optimization methods. While the training times increase with the system size, even reasonably large systems (several thousand buses) remain within acceptable times for offline computations (several hours). We expect that further efforts Fig. 2: Training losses and times for the L-BFGS, TNC, BFGS, CG, and Newton-CG methods for the IEEE 300-bus system. Figure 4: a) Boxplots showing the distributions of the hot-start and optimal injection bias values, \(\mathbf{\gamma}^{hot}\) and \(\mathbf{\gamma}^{opt}\), across multiple test cases. 
b) Scatter plots comparing the bias values \(\mathbf{\gamma}^{hot}\) and \(\mathbf{\gamma}^{opt}\) for various test cases. Figure 5: (a) Boxplots showing the distributions of hot-start and optimal flow bias values, \(\mathbf{\rho}^{opt}\) and \(\mathbf{\rho}^{hot}\), across multiple test cases. (b) Scatter plots comparing the loss values \(\mathbf{\rho}^{hot}\) and \(\mathbf{\rho}^{opt}\) for various test cases. Figure 6: Density distributions of the difference between AC and DC power flows in per unit with optimized parameters (\(\mathbf{b}^{opt}\), \(\mathbf{\gamma}^{opt}\), and \(\mathbf{\rho}^{opt}\)), cold-start parameters (both \(\mathbf{b}^{cold}\) and \(\mathbf{b}^{cold}_{error}\), with \(\mathbf{\gamma}\) and \(\mathbf{\rho}\) equal to zero), and hot-start parameters (\(\mathbf{b}^{hot}\), \(\mathbf{\gamma}^{hot}\), and \(\mathbf{\rho}^{hot}\)) for the Pegase 1354-bus system over 2000 testing scenarios. Figure 3: (a) Boxplots showing the distributions of the \(\mathbf{b}\) parameter values for multiple test cases. Each test case is represented by four boxplots indicating the cold-start, hot-start, and the optimal \(\mathbf{b}\) parameter values. (b) Scatter plots comparing the coefficient values \(\mathbf{b}^{hot}\) and \(\mathbf{b}^{opt}\) for various test cases. in selecting and tuning optimization methods and more efficient implementations would lead to additional computational improvements for the training process. We also note that the online execution times required to solve the DC power flow problems with our optimized parameters are comparable to the DC power flow solution times for existing parameter heuristics such as using \(b^{cold}\), \(b^{cold}_{r=0}\), or hot-start. Specifically, the average per DC power flow solution times with cold-start, hot-start, and optimized values range from \(1\) to \(931\) milliseconds across the test cases. ### _Application to \(N-1\) Contingency Analysis_ Given the unpredictable nature of real-world power systems, the capacity to effectively handle changes in topology is a critical characteristic for any power flow model. In particular, the \(N-1\) contingency scenarios, where any single component may fail, are important considerations for power system operations and planning. There are multiple ways one could handle contingencies. In traditional approaches with \(\mathbf{b}^{cold}\) or \(\mathbf{b}^{cold}_{r=0}\), one would simply remove any lines outaged in a line contingency scenario from the problem. With our optimization-based algorithm, one could take the same approach by setting the values of \(\mathbf{b}^{opt}\), \(\boldsymbol{\rho}^{opt}\) corresponding to outaged lines to zero, and adjusting \(\boldsymbol{\gamma}^{opt}\) accordingly. Maintaining accurate performance with this approach would suggest that our algorithm generalizes well across related network topologies (i.e., the parameters are not "overfit" for a particular topology). Alternatively, we could solve tailored optimization problems for each contingency to find the optimal parameters \(\mathbf{b}^{opt}_{tail}\), \(\boldsymbol{\gamma}^{opt}_{tail}\), and \(\boldsymbol{\rho}^{opt}_{tail}\) specific to each scenario. This requires more training time and memory for computing and storing the many additional parameters that must be selected. 
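The first approach above, reusing the base-case parameters under a line outage, only requires zeroing the entries associated with the outaged branch. How exactly \(\boldsymbol{\gamma}^{opt}\) should be adjusted depends on the loss-allocation convention, so the sketch below simply leaves it unchanged (an assumption made for illustration).

```python
import numpy as np

def parameters_under_outage(b_opt, gamma_opt, rho_opt, outaged_branch):
    """Base-case optimized DC parameters adapted to a single-line outage.

    Zeroing b and rho for the outaged branch removes it from B' = A^T diag(b) A and
    forces its approximated flow to zero; gamma is left untouched here.
    (If the outage islands the network, B' becomes singular and needs special handling.)
    """
    b = b_opt.copy()
    rho = rho_opt.copy()
    b[outaged_branch] = 0.0
    rho[outaged_branch] = 0.0
    return b, gamma_opt.copy(), rho
```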
However, these computations are trivially parallelizable and thus well suited for a high-performance computing setting since the optimization problems for each contingency scenario can be run without requiring any information from other contingencies. To explore these different approaches, we next describe a small-scale experiment using the IEEE 14-bus system. For each line contingency, we optimized the parameters \(\mathbf{b}\), \(\boldsymbol{\gamma}\), and \(\boldsymbol{\rho}\) to minimize the loss function (7) while solely considering power injection scenarios corresponding to that contingency. The results in Table II show the performance of the tailored parameters \(\mathbf{b}^{opt}_{tail}\), \(\boldsymbol{\gamma}^{opt}_{tail}\), and \(\boldsymbol{\rho}^{opt}_{tail}\) versus cold-start, hot-start, and base-case-optimized parameters. Tailoring parameters for individual contingency scenarios consistently yields superior results. On average, using the \(\mathbf{b}^{opt}_{tail}\), \(\boldsymbol{\gamma}^{opt}_{tail}\), and \(\boldsymbol{\rho}^{opt}_{tail}\) parameters provides a \(98.70\%\) improvement over the cold-start approach. When compared to the hot-start heuristic, \(\mathbf{b}^{opt}_{tail}\), \(\boldsymbol{\gamma}^{opt}_{tail}\), and \(\boldsymbol{\rho}^{opt}_{tail}\) exhibits an improvement of approximately \(92\%\). We also note that the \(\mathbf{b}^{opt}_{base}\), \(\boldsymbol{\gamma}^{opt}_{base}\), and \(\boldsymbol{\rho}^{opt}_{base}\) parameters consistently surpass both the cold-start and hot-start approaches across all test scenarios. On average, \(\mathbf{b}^{opt}_{base}\), \(\boldsymbol{\gamma}^{opt}_{base}\), and \(\boldsymbol{\rho}^{opt}_{base}\) parameters show an improvement of approximately \(89\%\) over the cold- start approach and approximately \(16\%\) over the hot-start approach. This indicates that our proposed algorithm exhibits superior generalizability compared to traditional parameter selection heuristics. To compute DC power flow approximation parameters that simultaneously consider accuracy with respect to both the base case and contingencies, one could combine base case and contingency scenarios as inputs to the proposed algorithm. However, in our experiments, this combined approach did not yield satisfactory results. We are currently exploring an alternative approach of clustering related contingency scenarios and optimizing parameters specific to each cluster. This allows for more tailored parameter selection while mitigating the computational burden involved in calculating different parameter values for each contingency. ## V Conclusion This paper presents a machine learning-inspired algorithm to improve the DC power flow approximation's accuracy by optimizing the selection of the coefficient and bias parameters. Our algorithm harnesses L-BFGS, BFGS, and TNC optimization methods to refine the coefficient and bias parameters, achieving better agreement between the DC and AC power flow models. Our simulations on various test systems demonstrate the effectiveness of this algorithm. We improve the accuracy of the DC power flow approximation by several orders of magnitude across a range of test cases. These findings underline the value of our algorithm in enhancing the reliability and accuracy of the DC power flow model, particularly for large-scale power systems. 
Our future work intends to focus on applying the improved DC power flow model to several critical applications in power systems, such as optimal power flow, unit commitment, and optimal transmission switching. We anticipate that the accuracy gained from our enhanced DC power flow model could lead to significantly improved performance in these and other applications. Regarding next steps in contingency analyses, our experiments showed that naively combining base case and contingency scenarios was not effective. Moving forward, we are focusing on a clustering approach: grouping related contingency scenarios and optimizing parameters for each cluster. This strategy aims to optimize performance across diverse scenarios without overfitting. Our ongoing work also aims to reduce training time. This may involve targeted scenario sampling and methods inspired by techniques for accelerating the training of machine learning models.
2302.14810
Effective formulas for the geometry of normal homogeneous spaces. Application to flag manifolds
Consider a smooth manifold and an action on it of a compact connected Lie group with a bi-invariant metric. Then, any orbit is an embedded submanifold that is isometric to a normal homogeneous space for the group. In this paper, we establish new explicit and intrinsic formulas for the geometry of any such orbit. We derive our formula of the Levi-Civita connection from an existing generic formula for normal homogeneous spaces, i.e. which determines, a priori only theoretically, the connection. We say that our formulas are effective: they are directly usable, notably in numerical analysis, provided that the ambient manifold is convenient for computations. Then, we deduce new effective formulas for flag manifolds, since we prove that they are orbits under a suitable action of the special orthogonal group on a product of Grassmannians. This representation of them is quite useful, notably for studying flags of eigenspaces of symmetric matrices. Thus, we develop from it a geometric solution to the problem of perturbation of eigenvectors which is explicit and optimal in a certain sense, improving the classical analytic solution.
Dimbihery Rabenoro, Xavier Pennec
2023-02-28T18:00:07Z
http://arxiv.org/abs/2302.14810v2
# The geometry of Riemannian submersions from compact Lie groups. Application to flag manifolds

###### Abstract

We study the geometry of finite-dimensional Riemannian manifolds which are isometric to a quotient of a compact Lie group endowed with the quotient of a bi-invariant metric. Then, we apply the results obtained to elucidate the geometry of flag manifolds.

## 1 Introduction

In this paper, we study the geometry of finite-dimensional Riemannian manifolds which are isometric to a quotient of a compact Lie group endowed with the quotient of a bi-invariant metric [5], [7]. Such a manifold is thus a Riemannian submersion from the compact Lie group, over which the latter is a principal bundle [6]. The initial motivation for our work was to elucidate the geometry of flag manifolds. Flags are properly nested sequences of linear subspaces [1], [9]. They can equivalently be described as sequences of mutually orthogonal incremental subspaces whose partial sums generate the nested sequence. Flag manifolds have recently been a rising subject of interest in statistical machine learning for generalizing Grassmannians and handling multiple subspaces at the same time, as in [11]. It was also recently shown that Principal Component Analysis (PCA) can be rephrased as an optimization on the flag manifold. One of the novelties of [14] is to realize that the nested sequence of linear subspaces actually optimizes a criterion on the flag manifold: the accumulated unexplained variance (AUV). On this basis, new robust variants like the minimal accumulated unexplained p-variance have been proposed, even for manifold-valued data [13]. The ubiquitous appearance of flags in other domains such as numerical analysis, exemplified in [15], led these authors to develop the mathematical tools for Riemannian optimization algorithms on the flag manifold. This landmark paper describes flag spaces in multiple ways with different geometric structures. However, it is not always easy to disentangle what is due to each structure. In this paper, we clarify the structure of flag manifolds as Riemannian submersions from the orthogonal group. For that purpose, we develop the general setting described at the beginning of this introduction and prove that flag manifolds fall within this framework. For an arbitrary compact Lie group \(\mathbb{G}\) endowed with a bi-invariant metric \(g^{\mathbb{G}}\), the key point is the decomposition of smooth vector fields as linear combinations of left-invariant vector fields with coefficients which are smooth functions on \(\mathbb{G}\). Now, the manifold \((\mathbb{B},g^{\mathbb{B}})\) which we study is a Riemannian submersion from \((\mathbb{G},g^{\mathbb{G}})\). Applying O'Neill's formula [12], we derive notably the Levi-Civita connection and the curvature tensors of \((\mathbb{B},g^{\mathbb{B}})\). In particular, we obtain closed forms of these Riemannian objects in the case of flag manifolds. In Section 2, we describe the general framework and the basic geometry of \((\mathbb{B},g^{\mathbb{B}})\). Then, in Section 3, we prove that flag manifolds can be placed inside this setting. Finally, in Section 4, we establish the formulas for the Levi-Civita connection and the curvature tensors of \((\mathbb{B},g^{\mathbb{B}})\).

## 2 The general framework

**Theorem 1**.: _Let \(\mathbb{B}\) be a subset of a manifold \(M\). Let \(\mathbb{G}\) be a compact Lie group, endowed with a bi-invariant metric \(g^{\mathbb{G}}\), which acts smoothly from the left on \(M\).
For \(p\in M\), let \(\mathbb{K}(p)\) be its isotropy group. Assume that \((i)\)\(\mathbb{B}\) is the orbit of some \(p_{0}\in M\). \((ii)\) The action of \(\mathbb{K}(p_{0})\) on \(\mathbb{G}\) by right-multiplication is isometric._ _Let \(\psi:\mathbb{G}\longrightarrow\mathbb{G}/\mathbb{K}(p_{0})\) be the canonical map and \(\pi\) the map defined by \(\pi:\mathbb{G}\longrightarrow\mathbb{B}\) and \(\pi(Q)=Q\cdot p_{0}\). Then, (1) \(\mathbb{B}\) is an embedded submanifold of \(M\) and \(\pi\) induces a diffeomorphism \(\overline{\pi}:\mathbb{G}/\mathbb{K}(p_{0})\simeq\mathbb{B}\) such that \(\overline{\pi}\circ\psi=\pi\). (2) The quotient metric \(\overline{g}^{\mathbb{G}}\) on \(\mathbb{G}/\mathbb{K}(p_{0})\) is well-defined and \(\pi\) is a Riemannian submersion from \((\mathbb{G},g^{\mathbb{G}})\) onto \((\mathbb{B},g^{\mathbb{B}})\),where \(g^{\mathbb{B}}:=\overline{\pi}_{*}\overline{g}^{\mathbb{G}}\). (3) \((\mathbb{G},\pi,\mathbb{B},\mathbb{K}(p_{0}))\) is a principal fiber bundle, i.e. with typical fiber the Lie group \(\mathbb{K}(p_{0})\). This is represented by the following diagram._ Proof.: (1) is standard: See Proposition A.2. in [3]. For (2), by section 29.21 in [8], \((ii)\) implies that \(\overline{g}^{\mathbb{G}}\) is well-defined and that \(\psi\) is a Riemannian submersion from \((\mathbb{G},g^{\mathbb{G}})\) onto \((\mathbb{G}/\mathbb{K}(p_{0}),\overline{g}^{\mathbb{G}})\). Finally, (3) is a direct consequence of Lemma 18.3 in [8]. Indeed, \(\mathbb{K}(p_{0})\) acts freely on \(\mathbb{G}\) by right-multiplication and the orbits of this action are the fibers of \(\pi\). **Remark 1**.: _It is not necessary in the proof of Theorem 1 to assume that \(g^{\mathbb{G}}\) is bi-invariant. However we use this condition later, so we suppose it from the beginning._ ### Vertical and horizontal parts Denote by \(e\) the neutral element of \(\mathbb{G}\) and, for \(\widetilde{Q}\in\mathbb{G}\), by \(\mathcal{L}_{\widetilde{Q}}^{\mathbb{G}}\) the left-translation by \(\widetilde{Q}\) in \(\mathbb{G}\): for all \(Q\in\mathbb{G}\), \(\mathcal{L}_{\widetilde{Q}}^{\mathbb{G}}(Q)=\widetilde{Q}Q\). Let \(\mathfrak{g}\) be its Lie algebra. Set \(N:=\dim(\mathfrak{g})=\dim(\mathbb{G})\). For all \(\Omega,\Omega^{\prime}\in\mathfrak{g}\), set \[\left[\Omega,\Omega^{\prime}\right]:=g_{e}\left(\Omega,\Omega^{\prime}\right).\] When \(\mathbb{G}\) is a matrix Lie group, i.e. \(\mathbb{G}\subset GL_{n}(\mathbb{R})\) for \(n\geq 1\), \(\mathfrak{g}=M_{n}(\mathbb{R})\) and \[\left[\Omega,\Omega^{\prime}\right]:=\Omega\Omega^{\prime}-\Omega^{\prime}\Omega.\] For all \(Q\in\mathbb{G}\), the vertical and horizontal subspaces in \(T_{Q}\mathbb{G}\) are respectively \(V_{Q}^{\pi}:=\ker(d\pi)_{Q}\) and \(H_{Q}^{\pi}:=\left(V_{Q}^{\pi}\right)^{\perp}\). We also introduce the linear projections \(\mathrm{ver}_{Q}^{\pi}:T_{Q}\mathbb{G}\longrightarrow V_{Q}^{\pi}\) and \(\mathrm{hor}_{Q}^{\pi}:T_{Q}\mathbb{G}\longrightarrow H_{Q}^{\pi}\). The next lemma states that the vertical and horizontal parts are invariant wrt left-translations in \(\mathbb{G}\). **Lemma 1**.: _For \(Q,\widetilde{Q}\in\mathbb{G}\) and \(\Delta\in T_{Q}\mathbb{G}\), set \(\widetilde{Q}_{*}(\Delta):=\left(d\mathcal{L}_{\widetilde{Q}}^{\mathbb{G}} \right)_{Q}(\Delta)\). 
Then, for all \(\Omega\in\mathfrak{g}\),_ \[\mathrm{ver}_{\widetilde{Q}Q}^{\pi}\left(\left(\widetilde{Q}Q\right)_{*} \Omega\right)=\widetilde{Q}_{*}\mathrm{ver}_{Q}^{\pi}\left(Q_{*}\Omega\right) \tag{2.1}\] _and_ \[\mathrm{hor}_{\widetilde{Q}Q}^{\pi}\left(\left(\widetilde{Q}Q\right)_{*} \Omega\right)=\widetilde{Q}_{*}\mathrm{hor}_{Q}^{\pi}\left(Q_{*}\Omega\right). \tag{2.2}\] Proof.: \(\mathbb{G}\) acts from the left on \(\mathbb{B}\subset M\), so that for all \(Q,\widetilde{Q}\in\mathbb{G}\), \(\widetilde{Q}\cdot(Q\cdot p_{0})=(\widetilde{Q}Q)\cdot p_{0}\), i.e. \(\widetilde{Q}\cdot\pi(Q)=\pi(\widetilde{Q}Q)\). This means that \(\mathcal{L}_{\widetilde{Q}}^{\mathbb{B}}\circ\pi=\pi\circ\mathcal{L}_{ \widetilde{Q}}^{\mathbb{G}}\), where, for any \(p\in\mathbb{B}\), \(\mathcal{L}_{\widetilde{Q}}^{\mathbb{B}}(p):=\widetilde{Q}\cdot p\). This implies that for any \(Q_{*}\Omega\in T_{Q}\mathbb{G}\), \[\left(d\mathcal{L}_{\widetilde{Q}}^{\mathbb{B}}\right)_{\pi(Q)}\left(\left(d \pi\right)_{Q}\left(Q_{*}\Omega\right)\right)=\left(d\pi\right)_{\mathcal{L}_ {\widetilde{Q}}^{\mathbb{G}}\left(Q\right)}\left(\left(d\mathcal{L}_{\widetilde {Q}}^{\mathbb{G}}\right)_{Q}\left(Q_{*}\Omega\right)\right)=\left(d\pi\right)_{ \widetilde{Q}Q}\left(\widetilde{Q}_{*}\left(Q_{*}\Omega\right)\right).\] Since \(\left(d\mathcal{L}_{\widetilde{Q}}^{\mathbb{B}}\right)_{\pi(Q)}\) is an isomorphism, we deduce that: \(Q_{*}\Omega\in V_{Q}^{\pi}\iff\widetilde{Q}_{*}\left(Q_{*}\Omega\right)\in V_ {\widetilde{Q}Q}^{\pi}\). Since \(g^{\mathbb{G}}\) is a left-invariant metric, this latter equivalence implies the following one: \(Q_{*}\Omega\in H_{Q}^{\pi}\iff\widetilde{Q}_{*}\left(Q_{*}\Omega\right)\in H_{ \widetilde{Q}Q}^{\pi}\). Now, \[Q_{*}\Omega=\mathrm{ver}_{Q}^{\pi}\left(Q_{*}\Omega\right)+\mathrm{hor}_{Q}^{ \pi}\left(Q_{*}\Omega\right)\implies\left(\widetilde{Q}Q\right)_{*}\Omega= \left(\widetilde{Q}\right)_{*}\mathrm{ver}_{Q}^{\pi}\left(Q_{*}\Omega\right)+ \left(\widetilde{Q}\right)_{*}\mathrm{hor}_{Q}^{\pi}\left(Q_{*}\Omega\right). \tag{2.3}\] By the preceding equivalences, (2.3) yields the decomposition of \(\left(\widetilde{Q}Q\right)_{*}\Omega\) into its vertical and horizontal parts in \(T_{\widetilde{Q}Q}\mathbb{G}\), which concludes the proof. ### Basic geometry of the base Recall that for \(p\in\mathbb{B}\) and \(Q\in\pi^{-1}(p)\), the restriction of \(\left(d\pi\right)_{Q}\) to \(H^{\pi}_{Q}\) is an isomorphism between \(H^{\pi}_{Q}\) and \(T_{p}\mathbb{B}\). For \(\Delta\in T_{p}\mathbb{B}\), the horizontal lift wrt \(\pi\) of \(\Delta\) at \(Q\) is denoted by \(\Delta^{\sharp}_{Q}\). It is the unique vector in \(H^{\pi}_{Q}\) whose image by \((d\pi)_{Q}\) is \(\Delta\). Then, \(g^{\mathbb{B}}\) is defined by \[g^{\mathbb{B}}_{p}(\Delta,\widetilde{\Delta})=g^{\mathbb{G}}_{Q}(\Delta^{ \sharp}_{Q},\widetilde{\Delta}^{\sharp}_{Q}). \tag{2.4}\] **Proposition 1**.: \((\mathbb{B},g^{\mathbb{B}})\) _is a complete Riemannian manifold and the geodesic in \(\mathbb{B}\) through \(p\) in the direction \(\Delta\) is of the form_ \[t\mapsto\pi\left(\mathcal{L}^{\mathbb{G}}_{Q}\left(\exp_{\mathfrak{g}}\left(t \left(Q^{-1}\right)_{*}\Delta^{\sharp}_{Q}\right)\right)\right),\] _where \(Q\in\pi^{-1}(p)\) and \(\exp_{\mathfrak{g}}:\mathfrak{g}\longrightarrow\mathbb{G}\) is the Lie group exponential map of \(\mathbb{G}\)._ Proof.: Since \(\mathbb{G}\) is compact, \(\mathbb{B}\) is complete and by the Hopf-Rinow theorem, \((\mathbb{B},g^{\mathbb{B}})\) is complete. 
Since \(g^{\mathbb{G}}\) is bi-invariant, the geodesics of \((\mathbb{G},g^{\mathbb{G}})\) are left-translates of one-parameter subgroups of \(\mathbb{G}\). Now, by Lemma 1, \(\left(Q^{-1}\right)_{*}\Delta^{\sharp}_{Q}\in H^{\pi}_{e}\). So, the curve \(t\mapsto\mathcal{L}^{\mathbb{G}}_{Q}\left(\exp_{\mathfrak{g}}\left(t\left(Q^{ -1}\right)_{*}\Delta^{\sharp}_{Q}\right)\right)\) is the _horizontal_ geodesic in \(\mathbb{G}\) through \(Q\), in the direction \(\Delta^{\sharp}_{Q}\). Thus, its image by \(\pi\) is indeed the geodesic in \(\mathbb{B}\) through \(p\) in the direction \(\Delta\). ## 3 A new representation of flag manifolds ### Some usual representations of flag manifolds A flag of vector spaces in \(\mathbb{R}^{n}\) is a filtration \(\mathcal{V}\) of subspaces \(V_{0}=\{0\}\subset V_{1},\ldots\subset V_{r}=\mathbb{R}^{n}\). For all \(1\leq i\leq r\), let \(W_{i}\) be the orthogonal complement of \(V_{i-1}\) in \(V_{i}\). Then, \((W_{i})_{1\leq i\leq r}\) is a sequence of mutually orthogonal linear subspaces of \(\mathbb{R}^{n}\), called the incremental subspaces representation of the flag \(\mathcal{V}\). The sequence \(\mathrm{I}=(q_{i})_{1\leq i\leq r}\), where \(q_{i}:=\dim(W^{i})\), is called the type of \(\mathcal{V}\). The set of all flags of type I is denoted by \(\Phi^{\mathrm{I}}\). On one hand, \(\Phi^{\mathrm{I}}\) is represented as follows: see [15]. For \(1\leq q\leq n\), we denote by \(G^{q}_{n}\) the Grassmannian of all linear subspaces of dimension \(q\) of \(\mathbb{R}^{n}\) and we identify \(G^{q}_{n}\) with orthogonal projectors. Then, we define the product \(G^{\mathrm{I}}\) of Grassmanians associated to the type I by \(G^{\mathrm{I}}:=\prod\limits_{i=1}^{r}G^{q_{i}}_{n}\). In this projector perspective, \(\Phi^{\mathrm{I}}\) is identified with the submanifold \(\mathcal{F}^{\mathrm{I}}\) of \(G^{\mathrm{I}}\) defined by \[\mathcal{F}^{\mathrm{I}}:=\left\{\mathcal{P}=(P_{i})_{1\leq i\leq r}\in G^{ \mathrm{I}}:\ P_{i}^{2}=P_{i}=P_{i}^{T},\ P_{i}P_{j}=0,\ i\neq j\right\}. \tag{3.1}\] On the other hand, in [10], the following representation of \(\Phi^{\mathrm{I}}\) is considered. Let \((\lambda_{i})_{1\leq i\leq r}\) be distinct real numbers. Consider the diagonal matrix \(\mathcal{D}_{0}=\mathrm{Diag}\left[\lambda_{1}I_{q_{1}},...,\lambda_{r}I_{q_{ r}}\right]\), where \(I_{q_{i}}\) is the identity matrix of order \(q_{i}\), and the left-action of the orthogonal group \(O(n)\) on \(\mathrm{Sym}_{\mathrm{n}}\) defined by \(Q\cdot S:=QSQ^{T}\). Then, \(\Phi^{\mathrm{I}}\) is identified with \(O(n)\cdot\mathcal{D}_{0}\subset\mathrm{Sym}_{\mathrm{n}}\), i.e. the orbit of \(\mathcal{D}_{0}\). Set \[O(\mathrm{I}):=\prod_{i=1}^{r}O(q_{i}).\] Then, through this representation, \(\Phi^{\mathrm{I}}\) is an embedded submanifold of \(\mathrm{Sym}_{\mathrm{n}}\) which is diffeomorphic to \(O(n)/O(\mathrm{I})\). A similar formalism is developed in [3] to study the geometry of \(G^{q}_{n}\). Namely, consider the projector \(P_{0}:=\mathrm{Diag}\left[I_{q},0_{n-q}\right]\in\mathrm{Sym}_{\mathrm{n}}\). Then, \(G^{q}_{n}\) is the orbit of \(P_{0}\) under the same action of \(O(n)\) on \(\mathrm{Sym}_{\mathrm{n}}\) and the map \(\pi^{q}:O(n)\longrightarrow G^{q}_{n}\) defined by \(\pi^{q}:Q\longmapsto Q\cdot P_{0}\) is a Riemannian submersion. Therefore, the geometry of \(G^{q}_{n}\) is described in terms of that of \(O(n)\). 
In this paper, we prove that \(\mathcal{F}^{\mathrm{I}}\) is the orbit a fixed flag \(\mathcal{P}_{0}\) under a suitable action of \(O(n)\) on \(G^{\mathrm{I}}\). Analogously to the case of \(G^{q}_{n}\), the combination of the projector perspective and the Riemannian submersion setting allows to derive results on the geometry of the flag manifold \(\mathcal{F}^{\mathrm{I}}\). ### Flag manifolds in the general framework The compact Lie group \(O(n)\) is endowed with the _bi-invariant_ metric \(g^{O}\) defined by \[g^{O}_{Q}\left(\Delta,\widetilde{\Delta}\right):=\frac{1}{2}\mathrm{tr}\left( \Delta^{T}\widetilde{\Delta}\right),\quad Q\in\mathbb{G},\ \Delta,\widetilde{\Delta}\in T_{Q}O(n).\] The group \(O(n)\) acts smoothly on \(G^{\mathrm{I}}\) by \(\left(Q,\mathcal{P}\right)\mapsto Q\cdot\mathcal{P}:=\left(QP_{1}Q^{T},..., QP_{r}Q^{T}\right)\). Introduce the standard flag \(P_{0}^{\mathrm{I}}:=\left(P_{0}^{\mathrm{i}}\right)_{1\leq i\leq r}\in \mathcal{F}^{\mathrm{I}}\), where \[P_{0}^{i}:=\mathrm{Diag}\left[0_{q_{1}},...,I_{q_{i}},...,0_{q_{r}}\right], \quad 1\leq i\leq r.\] Clearly, the isotropy group of \(\mathcal{P}_{0}^{\mathrm{I}}\) is isomorphic to \(O(\mathrm{I})\). Now, this action stabilizes \(\mathcal{F}^{\mathrm{I}}\), so that we may consider the map \(\pi^{\mathrm{I}}:O(n)\longrightarrow\mathcal{F}^{\mathrm{I}}\) defined by \[\pi^{\mathrm{I}}\left(Q\right)=Q\cdot\mathcal{P}_{0}^{\mathrm{I}}=\left(QP_{i }^{0}Q^{T}\right)_{1\leq i\leq r}.\] **Proposition 2**.: \(\mathcal{F}^{\mathrm{I}}\) _satisfies the conditions of Theorem 1 with \(M=G^{\mathrm{I}}\), \(\mathbb{G}=O(n)\), \(g^{\mathrm{G}}=g^{O}\), \(p_{0}=\mathcal{P}_{0}\) and \(\mathbb{K}(p_{0})=O(\mathrm{I})\). So, the map \(\pi^{\mathrm{I}}\) is a Riemannian submersion and induces a principal bundle structure over \(\mathcal{F}^{\mathrm{I}}\)._ Proof.: We need to prove that \((i)\) and \((ii)\) of Theorem 1 hold. For any \(\mathcal{P}=(P_{i})_{1\leq i\leq r}\in\mathcal{F}^{\mathrm{I}}\), let \(Q\in O(n)\) such that for all \(1\leq i\leq r\), its \(i\)-th block \(Q^{(i)}\) of columns is an orthonormal basis of \(\mathrm{Im}(P^{i})\). Then, for all \(1\leq i\leq r\), \(Q^{-1}P_{i}Q=P_{0}^{i}\) and \(\left(QP_{0}^{1}Q^{T},...,QP_{0}^{r}Q^{T}\right)=\mathcal{P}\). This proves that the orbit of \(\mathcal{P}_{0}\) is \(\mathcal{F}^{\mathrm{I}}\), i.e. \((i)\) holds. Since the metric \(g^{O}\) is right-invariant, \((ii)\) holds. ### Basic geometry of flag manifolds #### 3.3.1 Tangent spaces For any \(\mathcal{P}=\left(P_{i}\right)_{i}\in\mathcal{F}^{\mathrm{I}}\), the tangent space \(T_{\mathcal{P}}\mathcal{F}^{\mathrm{I}}\) is a linear subspace of \(T_{\mathcal{P}}G^{\mathrm{I}}=\bigoplus_{i=1}^{r}T_{P_{i}}G^{i}\). By differentiating the relation \(P_{i}P_{j}=0\) in (3.1), we obtain that \[T_{\mathcal{P}}\mathcal{F}^{\mathrm{I}}=\left\{\Delta=\left(\Delta^{i}\right)_ {1\leq i\leq r}\in T_{\mathcal{P}}G^{\mathrm{I}}:\forall\ i\neq j,\ \Delta^{i}P^{j}+P^{i}\Delta^{j}=0\right\}. \tag{3.2}\] **Lemma 2**.: _For all \(Q\in O(n)\) and \(Q\Omega\in T_{Q}O(d)\) with \(\Omega\in\mathfrak{so}(n)\),_ \[\left(d\pi^{\mathrm{I}}\right)_{Q}\left(Q\Omega\right)=\left(Q\left[\Omega,P_ {0}^{i}\right]Q^{T}\right)_{i}.\] Proof.: Let \(\gamma:[0,1]\longrightarrow O(n)\) be a smooth curve with \(\gamma(0)=Q\) and \(\gamma^{\prime}(0)=Q\Omega\). 
Then, \[\left(d\pi^{\mathrm{I}}\right)_{Q}\left(Q\Omega\right)=\left.\frac{d}{dt} \right|_{t=0}\pi^{\mathrm{I}}\left(\gamma(t)\right)=\left(\left.\frac{d}{dt} \right|_{t=0}\gamma(t)P_{0}^{i}\gamma(t)^{T}\right)_{i}.\] Now \(\Omega^{T}=-\Omega\). So, for all \(1\leq i\leq r\), \[\left.\frac{d}{dt}\right|_{t=0}\gamma(t)P_{0}^{i}\gamma(t)^{T}=\left(Q\Omega \right)P_{0}^{i}Q^{T}+QP_{0}^{i}\left(Q\Omega\right)^{T}=Q\left[\Omega,P_{0}^{ i}\right]Q^{T}.\] By Lemma 2, for any \(Q\in O(n)\), \(V_{Q}^{\pi^{\mathrm{I}}}=\left\{Q\Omega\in T_{Q}O(d):\left[\Omega,P_{0}^{i} \right]=0,\ 1\leq i\leq r\right\}\). The computation of \(\left[\Omega,P_{0}^{i}\right]\) implies that \[V_{Q}^{\pi^{\mathrm{I}}}=\left\{Q\mathrm{Diag}\left[\Omega_{ii}\right]:\Omega_ {ii}\in\mathfrak{so}(q_{i}),\ 1\leq i\leq r\right\}.\] Therefore, \[H_{Q}^{\pi^{\mathrm{I}}}=\left\{Q\Omega:\Omega\in\mathfrak{so}(n),\ \widetilde{\Omega}_{ii}=0,\ 1\leq i\leq r\right\}.\] In particular, we recover that \(\dim\left(\mathcal{F}^{\mathrm{I}}\right)=\dim\left(H_{Q}^{\pi^{\mathrm{I}}} \right)\underset{i>j}{\sum}q_{i}q_{j}\). #### 3.3.2 Horizontal lifts **Lemma 3**.: _For all \(\Omega\in\mathfrak{so}(n)\),_ \[\mathrm{hor}_{e}^{\pi^{\mathrm{I}}_{i}}(\Omega)=\frac{1}{2}\sum_{i=1}^{r}\llbracket \llbracket\llbracket\Omega,P_{0}^{i}\rrbracket,P_{0}^{i}\rrbracket.\] Proof.: This is a consequence of computations of block-matrices in \(\mathfrak{so}(n)\), partitioned wrt the type I. **Proposition 3**.: _For all \(\mathcal{P}=(P_{i})_{i}\in\mathcal{F}^{\mathrm{I}}\), \(\Delta=(\Delta_{i})_{i}\in T_{\mathcal{P}}\mathcal{F}^{\mathrm{I}}\) and \(Q\in\left(\pi^{\mathrm{I}}\right)^{-1}(\mathcal{P})\),_ \[\Delta_{Q}^{\sharp}=\frac{1}{2}\left(\sum_{i=1}^{r}\left\llbracket\Delta_{i},P_ {i}\rrbracket\right)Q.\] Proof.: This follows from the preceding Lemma and the description of the tangent spaces to \(\mathcal{F}^{\mathrm{I}}\). #### 3.3.3 Metric, geodesics and Exponential map For \(\mathcal{P}=(P_{i})_{i}\in\mathcal{F}^{\mathrm{I}}\) and \(\Delta,\widetilde{\Delta}\in T_{\mathcal{P}}\mathcal{F}^{\mathrm{I}}\), \[g_{\mathcal{P}}^{\mathcal{F}^{\mathrm{I}}}\left(\Delta,\widetilde{\Delta} \right)=\frac{1}{4}\sum_{i,j}g_{e}^{O}\left(\llbracket\Delta_{i},P_{i} \rrbracket,\llbracket\widetilde{\Delta}_{j},P_{j}\rrbracket\right)\] **Remark 2**.: _This metric should be compared with that of [15], which is the restriction of that of \(G^{\mathrm{I}}\)._ **Proposition 4**.: _For \(\mathcal{P}=\left(P_{i}\right)_{i}\in\mathcal{F}^{\mathrm{I}}\) and \(\Delta=\left(\Delta_{i}\right)_{i}\in T_{\mathcal{P}}\mathcal{F}^{\mathrm{I}}\), the geodesic of \(\mathcal{F}^{\mathrm{I}}\) starting at \(\mathcal{P}\) in the direction \(\Delta\) is of the form \(t\longmapsto\pi^{\mathrm{I}}\left(Q\exp_{m}\left(tQ^{T}\overline{\Omega}Q\right)\right)\), where \(\exp_{m}\) is the matrix exponential and_ \[\overline{\Omega}=\frac{1}{2}\sum_{i=1}^{r}\left\llbracket\Delta_{i},P_{i} \rrbracket.\] _Then,_ \[\mathrm{Exp}_{\mathcal{P}}^{\mathcal{F}^{\mathrm{I}}}\left(\Delta\right)= \left(\exp_{m}\left(\overline{\Omega}\right)P_{i}\exp_{m}\left(-\overline{ \Omega}\right)\right)_{i}. 
\tag{3.3}\] Proof.: Since \(\overline{\Omega}=Q^{T}\overline{\Omega}Q\) and \(QP_{i}^{0}Q^{T}=P_{i}\), the following computation implies that (3.3) holds: \[\mathrm{Exp}_{\mathcal{P}}^{\mathcal{F}^{\mathrm{I}}}\left(\Delta\right)=\pi^ {\mathrm{I}}\left(Q\exp_{m}\left(Q^{T}\overline{\Omega}Q\right)\right)=\pi^{ \mathrm{I}}\left(\left(\exp_{m}\left(\overline{\Omega}\right)\right)Q\right)= \left(\exp_{m}\left(\overline{\Omega}\right)QP_{i}^{0}Q^{T}\exp_{m}\left( \overline{\Omega}\right)\right)_{i}.\] **Remark 3**.: _This should be compared with the Exponential map in [15]._ **Remark 4**.: _There exist some algorithms for computing the Riemannian Logarithm of Riemannian submersions._ ## 4 Levi-Civita connection and curvature _Throughout the sequel, in order to alleviate the notations, we assume that \(\mathbb{G}\) is a matrix Lie group, that is \(\mathbb{G}\subset GL_{n}(\mathbb{R})\), for \(n\geq 1\). This means that \(\mathcal{L}_{\widetilde{Q}}^{\mathbb{G}}\) and \(\left(d\mathcal{L}_{\widetilde{Q}}^{\mathbb{G}}\right)_{Q}\) are both replaced by left-multiplication by \(\widetilde{Q}\). The results obtained under this assumption can easily be generalized to an arbitrary compact Lie group._ We shall make use, at times, of the following assumption. **Assumption 1**.: _For all \(\Omega,\Omega^{\prime}\in H_{e}^{\pi}\), \(\llbracket\Omega,\Omega^{\prime}\rrbracket\in V_{e}^{\pi}\)._ ### Decomposition of vector fields The idea is to decompose any smooth vector field on \(\mathbb{G}\) into a linear combination of left-invariant vector fields with coefficients which are smooth functions on \(\mathbb{G}\). This is possible thanks to the following tool. **Proposition 5**.: _A \(1\)-form \(\omega\) on a manifold \(M\) is smooth if and only if for every smooth vector field \(X\) on M, the function \(\omega(X)\) is smooth on \(M\)._ Proof.: We omit the proof. This is standard. **Corollary 1**.: _Let \(\left(\epsilon_{k}\right)_{1\leq k\leq N}\) be any basis of \(\mathfrak{g}\). Let \(\left(E_{k}\right)_{k}\) be the left-invariant vector fields on \(\mathbb{G}\), defined by \(E_{k}(Q)=Q_{*}\epsilon_{k}\), for \(Q\in\mathbb{G}.\) Then, for any \(U\in\mathcal{X}\left(\mathbb{G}\right)\) and \(Q\in\mathbb{G}\),_ \[U_{Q}=\sum\limits_{k=1}^{N}u_{k}(Q)E_{k}(Q)=Q_{*}\left(\sum\limits_{k=1}^{N}u _{k}(Q)\epsilon_{k}\right),\] _where the \(u_{k}\)'s are smooth functions on \(\mathbb{G}\). Let \(\overline{U}:\mathbb{G}\longrightarrow\mathfrak{g}\) be the map defined by_ \[\overline{U}(Q)=\left(Q^{-1}\right)_{*}U_{Q}=\sum\limits_{k=1}^{N}u_{k}(Q) \epsilon_{k}.\] _Then, \(\overline{U}\) is a smooth map. Furthermore,_ \[\left(\bar{d}\overline{U}\right)_{Q}=\sum\limits_{k=1}^{N}\left(du_{k}\right) _{Q}\epsilon_{k}.\] Proof.: Apply Proposition 5. ### Levi-Civita connection **Proposition 6**.: _Let \(U,V\in\mathcal{X}(\mathbb{G})\). Then,_ \[\left(\nabla_{U}^{\mathbb{G}}V\right)_{Q}=Q\left(\bar{d}\overline{V}\right)_{ Q}(U_{Q})+\frac{Q}{2}\llbracket\overline{U}(Q),\overline{V}(Q)\rrbracket. \tag{4.1}\] Proof.: First, for \(1\leq k\leq\ell\leq N\), \(E_{k}\) and \(E_{\ell}\) are livf's. So, for any \(Q\in\mathbb{G}\), \[\left(\nabla_{E_{k}}^{\mathbb{G}}E_{\ell}\right)_{Q}=\frac{1}{2}\left[E_{k}, E_{\ell}\right]_{Q}^{\mathbb{G}}=\frac{Q}{2}\llbracket\epsilon_{k},\epsilon_{ \ell}\rrbracket. \tag{4.2}\] Now, in order to derive (4.1), decompose \(U\) and \(V\) as \(U=\sum\limits_{k=1}^{N}u_{k}E_{k}\) and \(V=\sum\limits_{\ell=1}^{N}v_{\ell}E_{\ell}\). Then, apply the Leibniz rule and conclude by (4.2). 
**Proposition 7**.: _Let \(X,Y\in\mathcal{X}(\mathbb{B})\) and \(b\in\mathbb{B}\). Then, for any \(Q\in\pi^{-1}(b)\),_ \[\left(\nabla_{X}^{\mathbb{B}}Y\right)_{b}=\left(d\pi\right)_{Q}\left(Q\left( \bar{d}\overline{Y^{\sharp}}\right)_{Q}\left(X^{\sharp}(Q)\right)\right)+ \frac{1}{2}\left(d\pi\right)_{Q}\left(Q\llbracket\overline{X^{\sharp}}(Q), \overline{Y^{\sharp}}(Q)\rrbracket\right). \tag{4.3}\] _Suppose that Assumption \(1\) holds. Then,_ \[\left(\nabla_{X}^{\mathbb{B}}Y\right)_{b}=\left(d\pi\right)_{Q}\left(Q\left( \bar{d}\overline{Y^{\sharp}}\right)_{Q}\left(X^{\sharp}(Q)\right)\right). \tag{4.4}\] Proof.: Since \(\pi\) is a Riemannian submersion, we have that \(\left(\nabla_{X}^{\mathbb{B}}Y\right)_{b}=\left(d\pi\right)_{Q}\left(\left( \nabla_{X^{\sharp}}^{\mathbb{G}}Y^{\sharp}\right)_{Q}\right)\). So (4.3) follows from (4.1). For the second part, Assumption \(1\) implies that for all \(Q\in\mathbb{G}\), \(\llbracket\overline{X^{\sharp}}(Q),\overline{Y^{\sharp}}(Q)\rrbracket\in \mathrm{Ver}_{e}^{\pi}\). Now, by Lemma 1, for all \(Q\in\mathbb{G}\), \(Q\llbracket\overline{X^{\sharp}}(Q),\overline{Y^{\sharp}}(Q)\rrbracket\in \mathrm{Ver}_{Q}^{\pi}\). ### Curvature Throughout the sequel, we assume that \(\left\{\epsilon_{k}:1\leq k\leq N\right\}\) is an orthonormal basis of \(\mathfrak{g}\) such that \(\left\{\epsilon_{k}:1\leq k\leq N^{h}\right\}\) is an orthonormal basis of \(H^{\pi}_{e}\) and \(\left\{\epsilon_{k}:N^{h}+1\leq k\leq N\right\}\) is an orthonormal basis of \(V^{\pi}_{e}\). Since \(g^{\mathbb{G}}\) is a left-invariant metric, for all \(Q\in\mathbb{G}\), \(\left\{E_{k}(Q):1\leq k\leq N^{h}\right\}\) and \(\left\{E_{k}(Q):N^{h}+1\leq k\leq N\right\}\) are respectively orthonormal basis of \(H^{\pi}_{Q}\) and \(V^{\pi}_{Q}\). #### 4.3.1 \((0,4)\) curvature tensor **Lemma 4**.: _For any \(X,Y,Z,W\in\mathcal{X}(\mathbb{B})\) and \(Q\in\mathbb{G}\),_ \[R^{\mathbb{G}}_{Q}\left(X^{\sharp},Y^{\sharp},Z^{\sharp},W^{\sharp}\right)= \frac{1}{4}g^{\mathbb{G}}_{e}\left(\llbracket\overline{X^{\sharp}}(Q), \overline{Y^{\sharp}}(Q)\rrbracket,\llbracket\overline{Z^{\sharp}}(Q), \overline{W^{\sharp}}(Q)\rrbracket\right).\] Proof.: **Lemma 5**.: _For any \(X,Y\in\mathcal{X}(\mathbb{B})\) and \(Q\in\mathbb{G}\),_ \[\mathrm{ver}^{\pi}_{Q}\left(\left[X^{\sharp},Y^{\sharp}\right]^{\mathbb{G}}_{ Q}\right)=Q\mathrm{ver}^{\pi}_{e}\left(\llbracket\overline{X^{\sharp}}(Q), \overline{Y^{\sharp}}(Q)\rrbracket\right).\] Proof.: For any \(U,V\in\mathcal{X}(\mathbb{G})\), \[\left[U,V\right]^{\mathbb{G}}=\sum_{k,\ell=1}^{N}\left[u_{k}E_{k},v_{\ell}E_{ \ell}\right]^{\mathbb{G}}=\sum_{k,\ell=1}^{N}u_{k}(E_{k}\cdot v_{\ell})E_{\ell }+u_{\ell}(E_{\ell}\cdot v_{k})E_{k}+u_{k}v_{\ell}\left[E_{k},E_{\ell}\right]^ {\mathbb{G}}.\] Since \(X^{\sharp}\) and \(Y^{\sharp}\) are horizontal vector fields, the choice of the basis \(\left\{\epsilon_{k}:1\leq k\leq N\right\}\) of \(\mathfrak{g}\) implies that \[\left[X^{\sharp},Y^{\sharp}\right]^{\mathbb{G}}=\sum_{k,\ell=1}^{N^{h}}x^{ \sharp}_{k}(E_{k}\cdot y^{\sharp}_{\ell})E_{\ell}+x^{\sharp}_{\ell}(E_{\ell} \cdot y^{\sharp}_{k})E_{k}+x^{\sharp}_{k}y^{\sharp}_{\ell}\left[E_{k},E_{\ell }\right]^{\mathbb{G}}.\] So, for any \(Q\in\mathbb{G}\), \[\mathrm{ver}^{\pi}_{Q}\left(\left[X^{\sharp},Y^{\sharp}\right]^{\mathbb{G}}_{ Q}\right)=\sum_{k,\ell=1}^{N^{h}}x^{\sharp}_{k}(Q)y^{\sharp}_{\ell}(Q) \mathrm{ver}^{\pi}_{Q}\left(\left[E_{k},E_{\ell}\right]^{\mathbb{G}}_{Q} \right).\] Now, by Lemma 1, \[\mathrm{ver}^{\pi}_{Q}\left(\left[E_{k},E_{\ell}\right]^{\mathbb{G}}_{Q}\right) 
=\mathrm{ver}^{\pi}_{Q}\left(Q\llbracket\epsilon_{k},\epsilon_{\ell} \rrbracket\right)=Q\mathrm{ver}^{\pi}_{e}\left(\llbracket\epsilon_{k},\epsilon_ {\ell}\rrbracket\right).\] Therefore, \[\mathrm{ver}^{\pi}_{Q}\left(\left[X^{\sharp},Y^{\sharp}\right]^{\mathbb{G}}_{ Q}\right)=Q\sum_{k,\ell=1}^{N^{h}}x^{\sharp}_{k}(Q)y^{\sharp}_{\ell}(Q) \mathrm{ver}^{\pi}_{e}\left(\llbracket\epsilon_{k},\epsilon_{\ell}\rrbracket \right)=Q\mathrm{ver}^{\pi}_{e}\left(\llbracket\overline{X^{\sharp}}(Q), \overline{Y^{\sharp}}(Q)\rrbracket\right).\] **Proposition 8**.: _For any \(X,Y,Z,W\in\mathcal{X}(\mathbb{B})\), set_ \[\mathfrak{R}^{\mathbb{B}}(X,Y,Z,W):=g^{\mathbb{G}}\left(\mathrm{ver}^{\pi} \left(\left[X^{\sharp},Y^{\sharp}\right]^{\mathbb{G}}\right),\mathrm{ver}^{ \pi}\left(\left[Z^{\sharp},W^{\sharp}\right]^{\mathbb{G}}\right)\right).\] _Then, for all \(b\in\mathbb{B}\) and any \(Q\in\pi^{-1}(b)\),_ \[R^{\mathbb{B}}_{b}\left(X,Y,Z,W\right)=\frac{1}{4}g^{\mathbb{G}}_{e}\left( \llbracket\overline{X^{\sharp}}(Q),\overline{Y^{\sharp}}(Q)\rrbracket, \llbracket\overline{Z^{\sharp}}(Q),\overline{W^{\sharp}}(Q)\rrbracket\right) +\frac{1}{2}\mathfrak{R}^{\mathbb{B}}_{b}(X,Y,Z,W)-\frac{1}{4}\mathfrak{R} ^{\mathbb{B}}_{b}(Y,Z,X,W)-\frac{1}{4}\mathfrak{R}^{\mathbb{B}}_{b}(Z,X,Y,W).\] _where_ \[\mathfrak{R}^{\mathbb{B}}_{b}(X,Y,Z,W):=g^{\mathbb{G}}_{e}\left(\mathrm{ver}^{ \pi}_{e}\left(\llbracket\overline{X^{\sharp}}(Q),\overline{Y^{\sharp}}(Q) \rrbracket\right),\mathrm{ver}^{\pi}_{e}\left(\llbracket\overline{Z^{\sharp}}(Q ),\overline{W^{\sharp}}(Q)\rrbracket\right)\right).\] Proof.: By O'Neill's formula, \[R^{\mathbb{B}}\left(X,Y,Z,W\right)=R^{\mathbb{G}}\left(X^{\sharp},Y^{\sharp},Z ^{\sharp},W^{\sharp}\right)+\frac{1}{2}\mathfrak{R}^{\mathbb{B}}(X,Y,Z,W)- \frac{1}{4}\mathfrak{R}^{\mathbb{B}}(Y,Z,X,W)-\frac{1}{4}\mathfrak{R}^{ \mathbb{B}}(Z,X,Y,W). \tag{4.5}\] The first term of (4.5) is provided by Lemma 4. By Lemma 5, the other terms of (4.5) are given by \[\mathfrak{R}^{\mathbb{B}}_{b}(X,Y,Z,W)=g^{\mathbb{G}}_{e}\left(\mathrm{ver}^{ \pi}_{e}\left(\llbracket\overline{X^{\sharp}}(Q),\overline{Y^{\sharp}}(Q) \rrbracket\right),\mathrm{ver}^{\pi}_{e}\left(\llbracket\overline{Z^{\sharp}}(Q), \overline{W^{\sharp}}(Q)\rrbracket\right)\right).\] #### 4.3.2 \((1,3)\) curvature tensor Recall that \(\left\{\epsilon_{k}:1\leq k\leq N^{h}\right\}\) is an orthonormal basis of \(H_{e}^{\pi}\mathbb{G}\). Now, for any \(b\in\mathbb{B}\) and fixed \(Q\in\pi^{-1}(b)\), set \[w_{k}(b):=(d\pi)_{Q}(Q\epsilon_{k})\in T_{\mathbb{B}}\mathbb{B}.\] Since the metric \(g^{\mathbb{G}}\) is left-invariant and \(\pi\) is a Riemannian submersion, \(\left\{w_{k}(b):1\leq k\leq N^{h}\right\}\) is an orthonormal basis of \(T_{\mathbb{B}}\mathbb{B}\). So, for the \((3,1)\) curvature tensor \(R^{\mathbb{B}}\left(X,Y\right)Z\), for any \(b\in\mathbb{B}\), \[R^{\mathbb{B}}_{b}\left(X,Y\right)Z =\sum_{k=1}^{N^{h}}g_{b}^{\mathbb{B}}\left(R^{\mathbb{B}}_{b} \left(X,Y\right)Z,w_{k}(b)\right)w_{k}(b)\] \[=\sum_{k=1}^{N^{h}}R^{\mathbb{B}}_{b}\left(X,Y,Z,W_{k}\right)w_{k }(b),\] where \(W_{k}\) is any vector field such that \((W_{k})_{b}=w_{k}(b)\). 
For any \(b\in\mathbb{B}\) and fixed \(Q\in\pi^{-1}(b)\), Proposition 8 provide the value at \(b\) of the tensor \(R^{\mathbb{B}}\) in function of that of the vector fields: \[R^{\mathbb{B}}_{b}\left(X,Y,Z,W_{k}\right)=\rho_{b}\left(X_{b},Y_{b},Z_{b},(W_ {k})_{b}\right).\] Therefore, \[R^{\mathbb{B}}_{b}\left(X,Y\right)Z=\sum_{k=1}^{N^{h}}\rho_{b} \left(X_{b},Y_{b},Z_{b},w_{k}(b)\right)w_{k}(b).\] #### 4.3.3 Sectional curvature **Proposition 9**.: _Let \(X,Y\in\mathcal{X}(\mathbb{B})\) be orthonormal vector fields. Then, for all \(b\in\mathbb{B}\) and \(Q\in\pi^{-1}(b)\),_ \[K^{\mathbb{B}}_{b}(X,Y)=\frac{1}{4}\left\|\llbracket\overline{X^{ \sharp}}(Q),\overline{Y^{\sharp}}(Q)\rrbracket\right\|_{e}^{2}+\frac{3}{4} \left\|\operatorname{ver}_{e}^{\pi}\left(\llbracket\overline{X^{\sharp}}(Q), \overline{Y^{\sharp}}(Q)\rrbracket\right)\right\|_{e}^{2}. \tag{4.6}\] _Suppose that Assumption 1 holds. Then, for all \(b\in\mathbb{B}\) and \(Q\in\pi^{-1}(b)\),_ \[K^{\mathbb{B}}_{b}(X,Y)=\left\|\llbracket\overline{X^{\sharp}}(Q),\overline{Y^{\sharp}}(Q)\rrbracket\right\|_{e}^{2}. \tag{4.7}\] Proof.: By O'Neill's formula, \[K^{\mathbb{B}}_{b}(X,Y)=K^{\mathbb{G}}_{P}(X,Y)+\frac{3}{4} \left\|\operatorname{ver}_{\bar{Q}}^{\pi}\left(\left[X^{\sharp},Y^{\sharp} \right]_{Q}^{\mathbb{G}}\right)\right\|_{Q}^{2}.\] Then, Lemmas 5 and 4 combined to the left-invariance of the metric \(g^{\mathbb{G}}\) imply (4.6), and (4.7) follows readily. ### The case of Grassmannians and flag manifolds First, notice that the Frobenius metric is bi-invariant, so that the metric considered on \(O(d)\) is. Let \(X,Y\in\mathcal{X}(\mathcal{F}^{1})\). For any \(\mathcal{P}\in\mathcal{F}^{1}\) and \(Q\in(\pi^{1})^{-1}(\mathcal{P})\), \[\llbracket\overline{X^{\sharp}}(Q),\overline{Y^{\sharp}}(Q)\rrbracket=\frac{1 }{4}Q^{T}\Omega_{\mathcal{P}}^{X,Y}Q,\] where \[\Omega_{\mathcal{P}}^{X,Y}:=\left(\sum_{i,j}\llbracket\llbracket X _{i}(\mathcal{P}),P_{i}\rrbracket,\llbracket\bar{Y}_{j}(\mathcal{P}),P_{j} \rrbracket\rrbracket\right)\in\mathfrak{so}(n).\] **Lemma 6**.: _Assumption 1 holds for \(G_{n}^{q}\) but not for \(\mathcal{F}^{1}\)._ Proof.: This follows from simple computations in \(\mathfrak{so}(n)\) #### 4.4.1 Levi-Civita connection of Grassmannians and flag manifolds **Proposition 10**.: _For any \(X,Y\in\mathcal{X}(G_{n}^{q})\) and \(P\in G_{n}^{q}\), \(Q\in(\pi^{m})^{-1}(P)\),_ \[\left(\nabla_{X}^{\sharp}Y\right)_{P}=(d\pi^{m})_{Q}\left(Q\left(d\overline{Y^{ \sharp}}\right)_{Q}\left(X^{\sharp}(Q)\right)\right),\] _where \(X^{\sharp}(Q)=\llbracket X(\pi^{m}(Q)),\pi^{m}(Q)\rrbracket Q\) and \(\overline{Y^{\sharp}}(Q)=Q^{T}\llbracket Y(\pi^{m}(Q)),\pi^{m}(Q)\rrbracket Q \in\mathfrak{so}(n)\)._ Proof.: Since Assumption 1 holds for \(G_{n}^{q}\), the result follows readily from (4.4). We obtain an expression of the latter when \(X\) and \(Y\) are defined through charts of \(G_{n}^{q}\). **Proposition 11**.: _For any \(X,Y\in\mathcal{X}(\mathcal{F}^{1})\) and \(\mathcal{P}\in\mathcal{F}^{1}\), \(Q\in\left(\pi^{1}\right)^{-1}(\mathcal{P})\),_ \[\left(\nabla_{X}^{\sharp}Y\right)_{\mathcal{P}}=\left(d\pi^{1}\right)_{Q} \left(Q\left(d\overline{Y^{\sharp}}\right)_{Q}\left(X^{\sharp}(Q)\right) \right)+\frac{1}{8}(d\pi^{1})_{Q}\left(\Omega_{\mathcal{P}}^{X,Y}Q\right).\] where \(X^{\sharp}(Q)=\frac{1}{2}\left(\sum\limits_{i=1}^{r}\llbracket X_{i}(\pi^{1}(Q )),\pi^{1}(Q)\rrbracket\right)Q\) and \(\overline{Y^{\sharp}}(Q)=\frac{1}{2}Q^{T}\left(\sum\limits_{i=1}^{r} \llbracket Y_{i}(\pi^{1}(Q)),\pi^{1}(Q)\rrbracket\right)Q\in\mathfrak{so}(n)\). 
Proof.: Apply Proposition 7. #### 4.4.2 Curvature of Grassmannians and flag manifolds This general framework allows us to recover the sectional curvature of Grassmannians, given in Proposition 4.1. of [3]. **Proposition 12**.: _Let \(X,Y\in\mathcal{X}(G_{n}^{q})\) be orthonormal vector fields. Then, for all \(P\in G_{n}^{q}\) and \(Q\in\pi^{-1}(P)\),_ \[K_{P}^{G_{n}^{q}}(X,Y)=\left\|\llbracket X_{P},Y_{P}\rrbracket\right\|_{e}^{2}.\] Proof.: First, notice that the statement makes sense. Indeed, for all \(P\in G_{n}^{q}\), \(\llbracket X_{P},Y_{P}\rrbracket\in\mathfrak{so}(n)\). Now, the result follows from (4.7), since Assumption 1 holds for Grassmannians. **Proposition 13**.: _Let \(X,Y\in\mathcal{X}(\mathcal{F}^{1})\). Then, for all \(\mathcal{P}\in\mathcal{F}^{1}\),_ \[K_{\mathcal{P}}^{\mathcal{F}^{1}}(X,Y)=\frac{1}{16}\left(\left\|\Omega_{ \mathcal{P}}^{X,Y}\right\|_{e}^{2}+3\left\|\mathrm{ver}_{e}^{\pi^{1}}\left( \Omega_{\mathcal{P}}^{X,Y}\right)\right\|_{e}^{2}\right),\] _where for any \(\Omega\in\mathfrak{so}(n)\),_ \[\mathrm{ver}_{e}^{\pi^{1}}\left(\Omega\right)=\Omega-\frac{1}{2}\sum\limits_{ i=1}^{r}\llbracket\llbracket\Omega,P_{0}^{i}\rrbracket,P_{0}^{i}\rrbracket.\] Proof.: This is a consequence of (4.6). ### Acknowledgements The authors have received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement G-Statistics No 786854). It was also supported by the French government through the 3IA Cote d'Azur Investments ANR-19-P3IA-0002 managed by the National Research Agency.
2309.08193
Lyapunov exponents of orthogonal-plus-normal cocycles
We consider products of matrices of the form $A_n=O_n+\epsilon N_n$ where $O_n$ is a sequence of $d\times d$ orthogonal matrices and $N_n$ has independent standard normal entries and the $(N_n)$ are mutually independent. We study the Lyapunov exponents of the cocycle as a function of $\epsilon$, giving an exact expression for the $j$th Lyapunov exponent in terms of the Gram-Schmidt orthogonalization of $I+\epsilon N$. Further, we study the asymptotics of these exponents, showing that $\lambda_j=(d-2j)\epsilon^2/2+O(\epsilon^4|\log\epsilon|^4)$.
Sam Bednarski, Anthony Quas
2023-09-15T06:51:18Z
http://arxiv.org/abs/2309.08193v1
# Lyapunov exponents of orthogonal-plus-normal cocycles ###### Abstract We consider products of matrices of the form \(A_{n}=O_{n}+\epsilon N_{n}\) where \(O_{n}\) is a sequence of \(d\times d\) orthogonal matrices and \(N_{n}\) has independent standard normal entries and the \((N_{n})\) are mutually independent. We study the Lyapunov exponents of the cocycle as a function of \(\epsilon\), giving an exact expression for the \(j\)th Lyapunov exponent in terms of the Gram-Schmidt orthogonalization of \(I+\epsilon N\). Further, we study the asymptotics of these exponents, showing that \(\lambda_{j}=(d-2j)\epsilon^{2}/2+O(\epsilon^{4}|\log\epsilon|^{4})\). ## 1 Introduction and Statement of Results Lyapunov exponents play a highly important role in dynamical systems, allowing quantification of chaos, the development of a theory of hyperbolic and non-uniformly hyperbolic dynamical systems and much more. We work in the framework of multiplicative ergodic theory, where one has a base dynamical system \(\sigma\colon\Omega\to\Omega\) preserving an ergodic measure \(\mathbb{P}\) and a measurable map \(A\colon\Omega\to M_{d\times d}(\mathbb{R})\). One then takes the cocycle of partial products \(A_{\omega}^{(n)}\) of the sequence of \(d\times d\) matrices and one studies the limiting growth rate of the \(j\)th singular value of the products. In the case \(d=1\) this is often straightforward to calculate: the Lyapunov exponent is just \(\int\log|A(\omega)|\,d\mathbb{P}(\omega)\). In higher dimensions, Lyapunov exponents tend to be much harder to calculate, and it is rare to be able to give exact expressions. In this paper, we are able to establish exact expressions for Lyapunov exponents for cocycles of a particular form, namely where the matrices \(A_{\omega}\) are of the form \(O_{\omega}+\epsilon N_{\omega}\), where the \(O_{\omega}\) are orthogonal matrices and the \(N_{\omega}\) are mutually independent Gaussian matrices with independent standard normal entries. We further assume that the \((N_{\sigma^{n}\omega})\) are independent of the cocycle \(O_{\omega}\). We then interpret the cocycle as an additive noise perturbation of a cocycle of orthogonal matrices. Our main results are the following: **Theorem 1**.: _Let \(\sigma\colon\Omega\to\Omega\) be an ergodic transformation preserving an invariant measure \(\mathbb{P}\) and let \(O\colon\Omega\to O(d,\mathbb{R})\) be a measurable map into the \(d\times d\) orthogonal matrices. Suppose that \(N\colon\Omega\to M_{d\times d}(\mathbb{R})\) is measurable, and that that conditioned on \((O_{\sigma^{n}\omega})_{n\in\mathbb{Z}}\) and \((N_{\sigma^{n}\omega})_{n\neq 0}\), \(N_{\omega}\) has independent standard normal entries. Then for all \(\epsilon\in\mathbb{R}\), the Lyapunov exponents of the cocycle \(A_{\omega}=O_{\omega}+\epsilon N_{\omega}\) are given by_ \[\lambda_{j}=\mathbb{E}\log\|c_{j}^{\perp}(I+\epsilon N)\|,\] _where \(c_{j}^{\perp}(A)\) is the \(j\)th column of the Gram-Schmidt orthogonalization of \(A\)._ The following theorem describes the asymptotic behaviour of the exponents as \(\epsilon\) tends to \(0\). **Theorem 2**.: _Let the matrix cocycle be as above. Then the Lyapunov exponents satisfy_ \[\lambda_{j}(\epsilon)=(d-2j)\tfrac{\epsilon^{2}}{2}+O(\epsilon^{4}|\log \epsilon|^{4})\] _as \(\epsilon\to 0\)._ We make the following conjecture. Let \(\sigma\) be an ergodic measure-preserving transformation of a space \((\Omega,\mathbb{P})\). 
If \(B\colon\Omega\to M_{d\times d}(\mathbb{R})\) is the generator of a matrix cocycle with the property that \(\|B_{\omega}\|\leq 1\) almost surely, and \(N_{\omega}\) is Gaussian with the independence properties above, then setting \(\lambda_{j}^{\prime}(\epsilon)\) to be the sequence of Lyapunov exponents of the cocycle \(A_{\omega}^{\epsilon}=B_{\omega}+\epsilon N_{\omega}\), one has \[\lambda_{j}^{\prime}(\epsilon)-\lambda_{j+1}^{\prime}(\epsilon)\geq\lambda_{ j}(\epsilon)-\lambda_{j+1}(\epsilon)\text{ for all }\epsilon>0,\] where \(\lambda_{j}(\epsilon)\) are the Lyapunov exponents for the cocycle described in Theorem 1. That is, we conjecture that there are universal lower bounds on the gaps between consecutive Lyapunov exponents of Gaussian perturbed cocycles of matrices where the matrices in the unperturbed cocycle have norm at most \(1\); and that these lower bounds are obtained in the case where all of the matrices are the identity matrix. The results in this paper are closely related to results in Newman [7], where he gave a result similar to Theorem 1 for some i.i.d. cocycles involving Gaussian matrices and SDE flows on the space of non-singular matrices. Newman also re-derives an important result of Dynkin [3] that also has intermediate proofs due to LeJan [6]; Baxendale and Harris [1]; and Norris, Rogers and Williams [8]. Dynkin's result concerns the Lyapunov exponents of a simple stochastic flow on \(GL_{d}(\mathbb{R})\), the group of invertible \(d\times d\) matrices, and identifies explicit exact Lyapunov exponents for the flow. Although this cocycle is not the same as ours, it is in the same spirit. The Lyapunov exponents in that paper have a similar form to ours and are given by \(\lambda_{k}=(d-2k+1)\sigma^{2}/2\) ## 2 Definitions and Preliminary lemmas If \(N\) is a \(d\times d\) matrix valued random variable whose entries are independent standard normal random variables, we will say that \(N\) is the _standard Gaussian matrix random variable_. We will need the following property of the normal distribution: **Lemma 3**.: _Let \(U\) be an orthogonal matrix and let \(N\) be a standard Gaussian matrix random variable of the same dimensions. Then the matrices \(N\), \(UN\) and \(NU\) are equal in distribution._ This follows from a more general fact about the multivariate normal distribution. **Proposition 4**.: _Let \(X\sim N(\boldsymbol{\mu},\boldsymbol{\Sigma})\) be a \(d\)-dimensional multivariate normal distribution with mean vector \(\boldsymbol{\mu}\) and covariance matrix \(\boldsymbol{\Sigma}\). Suppose \(V\) is a \(d\times n\) matrix of rank \(d\). Then \(VX\sim N(V\boldsymbol{\mu},V\boldsymbol{\Sigma}V^{T})\)._ Proof.: Recall that \(X\sim N(\boldsymbol{\mu},\boldsymbol{\Sigma})\) if and only if \(X\sim AZ+\boldsymbol{\mu}\) where \(AA^{T}=\boldsymbol{\Sigma}\) and \(Z\sim N(\boldsymbol{0},I_{d})\). If \(X=AZ+\boldsymbol{\mu}\) this implies that \[VX =VAZ+V\boldsymbol{\mu}\] \[\sim N(V\boldsymbol{\mu},VA(VA)^{T})\text{ by the fact above}\] \[\sim N(V\boldsymbol{\mu},VAA^{T}V^{T})\] \[\sim N(V\boldsymbol{\mu},V\boldsymbol{\Sigma}V^{T})\] **Lemma 5**.: _Let \(N\) be a standard normal random variable. 
Then for any \(a\) and for \(b\neq 0\), \(\mathbb{E}\log^{-}|a+bN|\leq\mathbb{E}\log^{-}|bN|<\infty\)._ Proof.: We have \[\mathbb{E}\log^{-}|a+bN| =\int_{0}^{\infty}\mathbb{P}(\log^{-}|a+bN|<t)\,dt\] \[=\int_{0}^{\infty}\mathbb{P}(N\in[-a-e^{-t}/|b|,-a+e^{-t}/|b|]\,dt\] \[\leq\int_{0}^{\infty}\mathbb{P}(N\in[-e^{-t}/|b|,e^{-t}/|b|]\,dt\] \[=\mathbb{E}\log^{-}|bN|.\] Since \(\log^{-}|bN|\leq\log^{-}|b|+\log^{-}|N|\), and \(\mathbb{E}\log^{-}|N|=\frac{2}{\sqrt{2\pi}}\int_{0}^{1}|\log x|e^{-x^{2}/2}\,dx\), it is easy to see \(\mathbb{E}\log^{-}|bN|<\infty\). For a matrix \(B\), let \(c_{j}(B)\) denote the \(j\)th column of \(B\) and let \(\theta_{j}(B)=\operatorname{dist}\bigl{(}c_{j}(B),\ln(\{c_{i}(B)\colon i\neq j \})\bigr{)}\) and let \(\Theta(B)=\min\theta_{j}(B)\). **Lemma 6**.: _Let \(A\) be an arbitrary matrix and let \(\epsilon>0\). Let \(Z\) denote a standard \(d\times d\) Gaussian matrix random variable and \(N\) denote a standard normal random variable. Then \(\mathbb{E}\log^{-}\Theta(A+\epsilon Z)\leq\mathbb{E}\log^{-}(\epsilon N)<\infty\)._ _Further, if \(S\) is any set, \(\mathbb{E}\bigl{(}\log^{-}\Theta(A+\epsilon Z)\,\mathbf{1}_{S}\bigr{)}\leq d \,\mathbb{P}(S)(1+\log^{-}(\epsilon\mathbb{P}(S)))\)._ Proof.: Let \(\mathcal{F}\) denote the \(\sigma\)-algebra generated by the columns of \(N\) except for the \(j\)th. Then \(\mathbb{E}\log^{-}\theta_{j}(A+\epsilon Z)=\mathbb{E}\bigl{(}\mathbb{E}(\log^ {-}\theta_{j}(A+\epsilon Z)|\mathcal{F})\bigr{)}\). Let \(\mathbf{n}\) be an \(\mathcal{F}\)-measurable unit normal to the subspace spanned by \((c_{i}(A+\epsilon Z))_{i\neq j}\) (this is almost surely unique up to a change of sign). Then \(\theta_{j}(A+\epsilon Z)=|\langle\mathbf{n},c_{j}(A+\epsilon Z)\rangle|=| \langle\mathbf{n},c_{j}(A)\rangle\rangle+\epsilon\langle\mathbf{n},c_{j}(Z)\rangle|\). Let \(a=\langle\mathbf{n},c_{j}(A)\rangle\rangle\) (an \(\mathcal{F}\)-measurable random variable) and note that since \(c_{j}(Z)\) is independent of the unit vector \(\mathbf{n}\), conditioned on \(\mathcal{F}\), by Proposition 4, \(\langle\mathbf{n},c_{j}(Z)\rangle\) is distributed as a standard normal random variable. Hence we have \(\mathbb{E}\log^{-}\theta_{j}(A+\epsilon Z)=\mathbb{E}\bigl{(}\mathbb{E}(\log^ {-}\theta_{j}(A+\epsilon Z)\big{|}\mathcal{F})\bigr{)}=\mathbb{E}\bigl{(} \mathbb{E}(\log^{-}|a+\epsilon N||\mathcal{F})\bigr{)}\leq\mathbb{E}\bigl{(} \mathbb{E}(\log^{-}|\epsilon N||\mathcal{F})\bigr{)}=\mathbb{E}\log^{-}| \epsilon N|\) which is finite by the lemma above. By definition, \(\Theta(A+\epsilon Z)=\min_{j}\theta_{j}(A+\epsilon Z)\) so that \(\log^{-}\Theta(A+\epsilon Z)=\max_{j}\log^{-}\theta_{j}(A+\epsilon Z)\leq\sum_ {j}\log^{-}\theta_{j}(A+\epsilon Z)\). By Lemma 5, we see \(\mathbb{E}\log^{-}\Theta(A+\epsilon Z)<\infty\) as required. Now if \(S\) is any set, we have \[\mathbb{E}\big{(}\log^{-}\theta_{j}(A+\epsilon Z)\,\mathbf{1}_{S} \big{)} =\int_{0}^{\infty}\mathbb{P}(S\cap\{\log^{-}\theta_{j}(A+\epsilon Z )>t\})\,dt\] \[\leq\int_{0}^{\infty}\min\big{(}\mathbb{P}(S),\mathbb{P}(\theta_{ j}(A+\epsilon Z)<e^{-t})\big{)}\,dt\] \[\leq\int_{0}^{\infty}\min\big{(}\mathbb{P}(S),\mathbb{P}(a+ \epsilon N\in[-e^{-t},e^{-t}])\big{)}\,dt\] \[\leq\int_{0}^{\infty}\min\big{(}\mathbb{P}(S),e^{-t}/\epsilon \big{)}\,dt,\] where in the third line, as above, \(a\) is a random variable that is independent of \(N\). For the fourth line, we used the fact that the density of a standard normal is bounded above by \((2\pi)^{-1/2}<\frac{1}{2}\). 
Separating the integration region into \([0,\log^{-}(\epsilon\mathbb{P}(S))]\) and \([\log^{-}(\epsilon\mathbb{P}(S)),\infty)\), we obtain \(\mathbb{E}\big{(}\log^{-}\theta_{j}(A+\epsilon Z)\,\mathbf{1}_{S}\big{)}\leq \mathbb{P}(S)\log^{-}(\epsilon\mathbb{P}(S))+\mathbb{P}(S)\). Since \(\log^{-}\Theta(B)\leq\sum_{j=1}^{d}\log^{-}\theta_{j}(B)\), the result follows. For any vector \(y\) in \(\mathbb{R}^{d}\), \(y\) has at least one coefficient of magnitude \(\|y\|/\sqrt{d}\), say the \(j\)th, so \(\|By\|\geq\|y_{j}c_{j}(B)+\sum_{i\neq j}y_{i}c_{i}(B)\|\geq|y_{j}|\theta_{j}(B )\geq(1/\sqrt{d})\Theta(B)\|y\|\). If \(B\) is invertible then \(\Theta(B)\) is non-zero and substituting \(y=B^{-1}x\) gives \(\|B^{-1}\|\leq\sqrt{d}/\Theta(B)\). **Corollary 7**.: _Let \((A_{n})\) denote an i.i.d. sequence of \(d\times d\) random matrices where \(A_{n}=I+\epsilon N_{n}\), and where \(N_{n}\) is a \(d\times d\) standard Gaussian matrix random variables. Then \((A_{n})\) satisfies the following:_ 1. \(A_{n}\in GL_{d}(\mathbb{R})\) _a.s.;_ 2. _the distribution of_ \(A_{n}\) _is fully supported in_ \(GL_{d}(\mathbb{R})\)_: for any non-empty open set_ \(U\subset GL_{d}(\mathbb{R})\)_,_ \(\mathbb{P}(A_{n}\in U)>0\)_._ 3. \(\log\|A_{n}\|\in L^{1}(\Omega)\)_._ This corollary establishes that the sequence \((A_{n})\) satisfies the conditions of the Gol'dsheid-Margulis theorem [4, Theorem 5.4] which ensures that the Lyapunov exponents of the cocycle \(A^{(n)}=(I+\epsilon N_{n})\cdots(I+\epsilon N_{1})\) are all distinct. Proof.: The distribution of the matrices \(A_{i}\) is mutually absolutely continuous with respect to Lebesgue measure. Since the zero locus of the polynomial equation \(\det(A)=0\) is a measure zero set, the first and second conclusions are established. To show \(\log\|A_{i}\|\) is integrable, we separately show that \(\log^{+}\|A_{i}\|\) and \(\log^{-}\|A_{i}\|\) are integrable. First, \[\mathbb{E}\log^{+}\|A_{1}\|\leq\mathbb{E}\,\|A_{1}\|\leq\mathbb{E}\sum_{1\leq i,j\leq d}|A_{ij}|\] where each \(A_{ij}\) is an integrable normal random variable. The fact that \(\mathbb{E}\log^{+}\left\|A_{1}^{-1}\right\|<\infty\) follows from Lemma 6 and the observation that \(\|A_{1}^{-1}\|\leq\sqrt{d}/\Theta(A_{1})\) made above. We make extensive use of the singular value decomposition in what follows. More information on this topic may be found in Horn and Johnson [5] and Bhatia [2]. For a \(d\times d\) matrix \(A\), a singular value decomposition is a triple \((L,D,R)\) where \(L\) and \(R\) are orthogonal matrices and \(D\) is a diagonal matrix with non-negative entries such that \(A=LDR\). We impose without loss of generality the requirement that the diagonal entries of \(D\) are decreasing. The matrix \(D\) is uniquely determined by \(A\) while there is some freedom in the choice of \(L\) and \(R\). The _singular values_ of \(A\) are denoted \(s_{i}(A)\), where \(s_{i}(A)\) is the \(i\)th entry of the diagonal of \(A\). It is known (see for example Ragunathan [10, Lemma 1]) that there exist measurable functions \(L\), \(D\) and \(R\) mapping \(M_{d}(\mathbb{R})\) to \(O(d,\mathbb{R})\), \(M_{\mathrm{diag}}(d,\mathbb{R})\) and \(O(d,\mathbb{R})\) respectively such that \(A=L(A)D(A)R(A)\). It is well known that \(|s_{i}(A)-s_{i}(B)|\leq\|A-B\|\) where \(\|\cdot\|\) is the standard operator norm on matrices. We also make use of the products \(S_{i}^{j}(A)=s_{i}(A)\cdots s_{j}(A)\). These have an interpretation in terms of exterior algebra. 
We write \(\bigwedge^{k}\mathbb{R}^{d}\) for the \(k\)th exterior power of \(\mathbb{R}^{d}\) and equip it with the standard inner product coming from the Cauchy-Binet formula and the corresponding norm. In particular if \(v_{1},\ldots,v_{d}\) is an orthonormal basis for \(\mathbb{R}^{d}\), then \(\{v_{i_{1}}\wedge\cdots\wedge v_{i_{k}}\colon i_{1}<i_{2}<\ldots<i_{k}\}\) is an orthonormal basis for \(\bigwedge^{k}\mathbb{R}^{d}\). With respect to the corresponding operator norm, it is well known that \(\left\|A^{\wedge k}\right\|=S_{1}^{k}(A)\). If \((A_{n})\) is an independent identically distributed sequence of random variables taking values in \(GL_{d}(\mathbb{R})\) such that \(\mathbb{E}\log\|A_{1}\|^{\pm 1}<\infty\), it was shown by Oseledets [9] and Raghunathan [10] that the limits \[\lim_{n\to\infty}\frac{1}{n}\log s_{j}(A_{n}\ldots A_{1})\] exist and are almost surely constant for almost every realization of \((A_{n})\). The almost sure limit is denoted \(\lambda_{j}\) and the \((\lambda_{j})\) are the _Lyapunov exponents_ of the cocycle. Exact expressions for Lyapunov exponents For \(1\leq k\leq d\), we define \(e_{k}\) to be the \(k\)th coordinate vector, so that, as previously defined, \(c_{k}(A):=Ae_{k}\) is the \(k\)th column of \(A\). Let \(c_{k}^{\perp}(A)\) denote the component of the \(k\)th column of \(A\) which is orthogonal to the first \(k-1\) columns. That is, suppressing the matrix \(A\) for brevity, \(c_{1}^{\perp}=c_{1}\), and \[c_{k}^{\perp}:=c_{k}-\sum_{1\leq j<k}\frac{\left\langle c_{j}^{\perp},c_{k} \right\rangle}{\left\langle c_{j}^{\perp},c_{j}^{\perp}\right\rangle}c_{j}^{ \perp}.\] We now prove Theorem 1 which we restate here for convenience. **Theorem**.: _Let \((U_{n})_{n\in\mathbb{Z}}\) be a sequence of \(d\times d\) orthogonal matrices and let \((N_{n})_{n\in\mathbb{Z}}\) be a sequence of independent \(d\times d\) matrices, each with independent standard normal coefficients. Let \(\epsilon>0\) and let \(A_{j}^{\epsilon}=U_{j}+\epsilon Z_{j}\). Then for \(1\leq k\leq d\), the \(k\)th Lyapunov exponent of the cocycle \((A_{\sigma^{n-1}\omega}^{\epsilon}\cdots A_{\omega}^{\epsilon})\) is given by_ \[\lambda_{k}=\mathbb{E}(\log\left\|c_{k}^{\perp}(I+\epsilon N)\right\|).\] Fix \(\epsilon>0\) and set \(A_{i}:=U_{i}+\epsilon N_{i}\) for each \(i\). To find the Lyapunov exponents of this sequence we work with the products \(A^{(n)}=A_{n}\cdots A_{1}\). We now define \(\Sigma_{n}=D(A^{(n)})\) and study the evolution of \(\Sigma_{n}\). More precisely we are interested in the stochastic process \((\Sigma_{n})_{n\geq 0}\). To write \(\Sigma_{n+1}\) in terms of \(\Sigma_{n}\), we have \(\Sigma_{n+1}=D\big{(}A_{n+1}L(A^{(n)})\Sigma_{n}R(A^{(n)})\big{)}\). The following lemma shows that this process \((\Sigma_{n})\) is Markov and that the process has the same distribution as the simpler process \(\Sigma_{n+1}^{\prime}=D((1+\epsilon N_{n+1})\Sigma_{n}^{\prime})\). **Lemma 8**.: _(\((\Sigma_{n})\) is a Markov process) Let the sequence of matrices \((A_{i})\) be given by \(U_{i}+\epsilon N_{i}\) as above and let \(\Sigma_{n}=D(A^{(n)})\). 
Then \((\Sigma_{n})\) is a Markov process: For any measurable set \(F\) of diagonal matrices,_ \[\mathbb{P}(\Sigma_{n+1}\in F|\Sigma_{n},\ldots,\Sigma_{1}) =\mathbb{P}(\Sigma_{n+1}\in F|\Sigma_{n})\] \[=\mathbb{P}(D((I+\epsilon N)\Sigma_{n})\in F|\Sigma_{n}).\] _That is, the Markov process \((\Sigma_{n})\) has the same distribution as the Markov process \((\Sigma_{n}^{\prime})\) where \(\Sigma_{0}^{\prime}=I\) and \(\Sigma_{n+1}^{\prime}=D(A_{n+1}^{\prime}\Sigma_{n})\), where \((A_{n}^{\prime})\) is an independent sequence of matrices, each distributed as \(I+\epsilon N\)._ Proof.: Let \(\mathcal{F}_{n}\) denote the smallest \(\sigma\)-algebra with respect to which \(N_{1},\ldots,N_{n}\) are measurable. Let \(\mathcal{G}_{n}\) be the smallest \(\sigma\)-algebra with respect to which \(\Sigma_{1},\ldots,\Sigma_{n}\) are measurable (so that \(\mathcal{G}_{n}\) is a sub-\(\sigma\)-algebra of \(\mathcal{F}_{n}\)). As usual, we write \(A^{(n)}\) for the product \(A_{n}\dots A_{1}\). Let \(L_{n}=L(A^{(n)})\), \(\Sigma_{n}=D(A^{(n)})\), \(R_{n}=R(A^{(n)})\). Let \(F\) be a measurable subset of the range of \(D\). We compute \[\mathbb{P}(\Sigma_{n+1}\in F|\mathcal{F}_{n}) =\mathbb{P}\Big{(}D(A_{n+1}L_{n}\Sigma_{n}R_{n})\in F|\mathcal{F} _{n}\Big{)}\] \[=\mathbb{P}\Big{(}D(A_{n+1}L_{n}\Sigma_{n})\in F|\mathcal{F}_{n} \Big{)}\] \[=\mathbb{P}\Big{(}D\big{(}(U_{n+1}+\epsilon N_{n+1})L_{n}\Sigma_{ n}\big{)}\in F|\mathcal{F}_{n}\Big{)}\] \[=\mathbb{P}\Big{(}D\big{(}U_{n+1}L_{n}(I+\epsilon L_{n}^{-1}U_{n +1}^{-1}N_{n+1}L_{n})\Sigma_{n}\big{)}\in F|\mathcal{F}_{n}\Big{)}\] \[=\mathbb{P}\Big{(}D\big{(}(I+\epsilon L_{n}^{-1}U_{n+1}^{-1}N_{n+ 1}L_{n})\Sigma_{n}\big{)}\in F|\mathcal{F}_{n}\Big{)}\] \[=\mathbb{P}\Big{(}D\big{(}(I+\epsilon N_{n+1})\Sigma_{n}\big{)} \in F|\mathcal{F}_{n}\Big{)},\] where the second and fifth lines follow from that facts that \(D(A)=D(AU)=D(UA)\) for any matrix \(A\) and any orthogonal matrix \(U\). The sixth line uses the fact that \(N_{n+1}\) is independent of \(\mathcal{F}_{n}\) and Lemma 3 so that conditioned on \(\mathcal{F}_{n}\), \(L_{n}^{-1}U_{n+1}^{-1}N_{n+1}L_{n}\) has the same distribution as \(N_{n+1}\). Since \(N_{n+1}\) is independent of \(\mathcal{F}_{n}\), this is equal to \(\mathbb{P}\Big{(}D\big{(}(I+\epsilon N_{n+1})\Sigma_{n}\big{)}\in F|\Sigma_{n} \Big{)}\). We have established that \[\mathbb{P}(\Sigma_{n+1}\in F|\mathcal{F}_{n})=\mathbb{P}\Big{(}D\big{(}(I+ \epsilon N_{n+1})\Sigma_{n}\big{)}\in F|\Sigma_{n}\Big{)}.\] Taking conditional expectations of both sides with respect to \(\mathcal{G}_{n}\), we deduce \[\mathbb{P}(\Sigma_{n+1}\in F|\Sigma_{n},\dots,\Sigma_{1})=\mathbb{P}(\Sigma_{ n+1}\in F|\Sigma_{n}).\] Proof of Theorem 1.: Fix \(1\leq k\leq d\). We use the stochastic process described in Lemma 8: let \(A_{n}=I+\epsilon N_{n}\), \(\Sigma_{0}=I\), \(\Sigma_{n}=D(A_{n}\Sigma_{n-1})=\operatorname{diag}(s_{1}(A_{n}\Sigma_{n-1}), \dots,s_{d}(A_{n}\Sigma_{n-1}))\). As before, we write \(A^{(n)}=A_{n}\dots A_{1}\). We note that \(\Sigma_{n}\) is _not_ equal to \(D(A^{(n)})\), but using Lemma 8 the two processes \((\Sigma_{n})_{n\geq 0}\) and \(\big{(}D(A^{(n)})\big{)}_{n\geq 0}\) have the same distribution. Let \(B_{n}=A_{n}\Sigma_{n-1}\left(e_{1}\;\dots\;e_{k}\;0\;\dots\;0\right)\). 
Then for all \(1\leq j\leq k\), \[\big{|}s_{j}(\Sigma_{n})-s_{j}(B_{n})\big{|} =\big{|}s_{j}(A_{n}\Sigma_{n-1})-s_{j}(B_{n})\big{|}\] \[\leq\big{\|}A_{n}\Sigma_{n-1}-B_{n}\big{\|}\] \[=\big{\|}A_{n}\Sigma_{n-1}\left(0\;\dots 0\;e_{k+1}\;\dots\;e_{d} \right)\big{\|}\] \[=\|A_{n}\operatorname{diag}(0,\dots 0,s_{k+1}(\Sigma_{n-1}),\dots,s_{ d}(\Sigma_{n-1}))\|\] \[\leq s_{k+1}(\Sigma_{n-1})\,\|A_{n}\|\] Then we have \[\left|\frac{s_{j}(B_{n})}{s_{j}(\Sigma_{n})}-1\right| =\left|\frac{s_{j}(B_{n})-s_{j}(\Sigma_{n})}{s_{j}(\Sigma_{n})}\right|\] \[\leq\frac{s_{k+1}(\Sigma_{n-1})}{s_{j}(\Sigma_{n})}\left\|A_{n}\right\|\] By Gol'dsheid and Margulis, [4, Theorem 5.4], \(\frac{1}{n}\log s_{j}(A^{(n)})\rightarrow\lambda_{j}\) and \(\frac{1}{n}\log s_{k+1}(A^{(n)})\rightarrow\lambda_{k+1}\) almost surely for some \(\lambda_{j}>\lambda_{k+1}\). Since the processes \(\left(D(A^{(n)})\right)\) and \((\Sigma_{n})\) have a common distribution, it follows that \(\frac{1}{n}\log s_{j}(\Sigma_{n})\rightarrow\lambda_{j}\) and \(\frac{1}{n}\log s_{k+1}(\Sigma_{n})\rightarrow\lambda_{k+1}\) almost surely. So \[\frac{1}{n}\log\left(\frac{s_{k+1}(\Sigma_{n-1})}{s_{j}(\Sigma_{n})}\right) \rightarrow\lambda_{k+1}-\lambda_{j}<0\] almost surely as \(n\rightarrow\infty\). If this occurs, there is some \(N\in\mathbb{N}\) such that \(s_{k+1}(\Sigma_{n-1})/s_{j}(\Sigma_{n})<e^{-n(\lambda_{k+1}-\lambda_{j})/2}\) for all \(n\geq N\). A well-known consequence of the Strong Law of Large Numbers ensures that \(C(\omega):=\sup_{n}\left\|A_{n}\right\|/n\) is finite a.s., so that \(\left\|A_{n}\right\|/n\leq C(\omega)\) for all \(n\). For \(n\geq N\) we then have \[\left|\frac{s_{k+1}(\Sigma_{n-1})}{s_{j}(\Sigma_{n})}\right|\left\|A_{n} \right\|\leq C(\omega)ne^{-n(\lambda_{k+1}-\lambda_{j})/2}\to 0\] as \(n\rightarrow\infty\). Hence \[\frac{s_{j}(B_{n})}{s_{j}(\Sigma_{n})}\to 1\text{ as }n\rightarrow\infty. \tag{1}\] For a matrix \(A\), let \(s_{1}^{k}(A)=s_{1}(A)\cdots s_{k}(A)\). Since \(B_{n}\) has \(k\) non-zero columns, \(B_{n}{}^{\wedge k}\) has rank one and we have \[s_{1}^{k}(B_{n}) =\left\|B_{n}e_{1}\wedge B_{n}e_{2}\wedge\cdots\wedge B_{n}e_{k}\right\|\] \[=\left\|(A_{n}\Sigma_{n-1})e_{1}\wedge(A_{n}\Sigma_{n-1})e_{2} \wedge\cdots\wedge(A_{n}\Sigma_{n-1})e_{k}\right\|\] \[=\left\|(A_{n}e_{1})s_{1}(\Sigma_{n-1})\wedge(A_{n}e_{2})s_{2}( \Sigma_{n-1})\wedge\cdots\wedge(A_{n}e_{k})s_{k}(\Sigma_{n-1})\right\|\] \[=s_{1}^{k}(\Sigma_{n-1})\left\|c_{1}(A_{n})\wedge c_{2}(A_{n}) \wedge\cdots\wedge c_{k}(A_{n})\right\|\] \[=s_{1}^{k}(\Sigma_{n-1})\big{\|}c_{1}^{\perp}(A_{n})\big{\|} \big{\|}c_{2}^{\perp}(A_{n})\big{\|}\cdots\big{\|}c_{k}^{\perp}(A_{n})\big{\|},\] where \(\wedge\) denotes the wedge product. For \(n\in\mathbb{N}\) and \(1\leq k\leq d\), let \(X_{n}^{k}:=\big{\|}c_{1}^{\perp}(A_{n})\big{\|}\big{\|}c_{2}^{\perp}(A_{n}) \big{\|}\cdots\big{\|}c_{k}^{\perp}(A_{n})\big{\|}\). Then \(X_{1}^{k},X_{2}^{k},\dots\) is a sequence of i.i.d. random variables. Since \(\Theta(A)\leq\Theta(A)\) for all \(n\geq N\), we have \[\left|\frac{s_{k+1}(\Sigma_{n-1})}{s_{j}(\Sigma_{n})}\right|\left\|A_{n}\right\| \leq C(\omega)ne^{-n(\lambda_{k+1}-\lambda_{j})/2}\to 0\] as \(n\rightarrow\infty\). Hence \[\frac{s_{j}(B_{n})}{s_{j}(\Sigma_{n})}\to 1\text{ as }n\rightarrow\infty. \tag{2}\] For a matrix \(A\), let \(s_{1}^{k}(A)=s_{1}(A)\cdots s_{k}(A)\). 
Since \(B_{n}\) has \(k\) non-zero columns, \(B_{n}{}^{\wedge k}\) has rank one and we have \[s_{1}^{k}(B_{n}) =\left\|B_{n}e_{1}\wedge B_{n}e_{2}\wedge\cdots\wedge B_{n}e_{k}\right\|\] \[=\left\|(A_{n}\Sigma_{n-1})e_{1}\wedge(A_{n}\Sigma_{n-1})e_{2} \wedge\cdots\wedge(A_{n}\Sigma_{n-1})e_{k}\right\|\] \[=\left\|(A_{n}e_{1})s_{1}(\Sigma_{n-1})\wedge(A_{n}e_{2})s_{2}( \Sigma_{n-1})\wedge\cdots\wedge(A_{n}e_{k})s_{k}(\Sigma_{n-1})\right\|\] \[=s_{1}^{k}(\Sigma_{n-1})\left\|c_{1}(A_{n})\wedge c_{2}(A_{n}) \wedge\cdots\wedge c_{k}(A_{n})\right\|\] \[=s_{1}^{k}(\Sigma_{n-1})\big{\|}c_{1}^{\perp}(A_{n})\big{\|} \big{\|}c_{2}^{\perp}(A_{n})\big{\|}\cdots\big{\|}c_{k}^{\perp}(A_{n})\big{\|},\] where \(\wedge\) denotes the wedge product. For \(n\in\mathbb{N}\) and \(1\leq k\leq d\), let \(X_{n}^{k}:=\big{\|}c_{1}^{\perp}(A_{n})\big{\|}\big{\|}c_{2}^{\perp}(A_{n}) \big{\|}\cdots\big{\|}c_{k}^{\perp}(A_{n})\big{\|}\). Then \(X_{1}^{k},X_{2}^{k},\dots\) is a sequence of i.i.d. random variables. Since \(\Theta(A)\leq\Theta(A)\) for all \(n\geq N\), we have \[\left|\frac{s_{k+1}(\Sigma_{n-1})}{s_{j}(\Sigma_{n})}\right|\left\|A_{n}\right\| \leq C(\omega)ne^{-n(\lambda_{k+1}-\lambda_{j})/2}\to 0\] as \(n\rightarrow\infty\). Hence \[\frac{s_{j}(B_{n})}{s_{j}(\Sigma_{n})}\to 1\text{ as }n\rightarrow\infty. \tag{3}\] For a matrix \(A\), let \(s_{1}^{k}(A)=s_{1}(A)\cdots s_{k}(A)\). Since \(B_{n}\) has \(k\) non-zero columns, \(B_{n}{}^{\wedge k}\) has rank one and we have \[s_{1}^{k}(B_{n}) =\left\|B_{n}e_{1}\wedge B_{n}e_{2}\wedge\cdots\wedge B_{n}e_{k}\right\|\] \[=\left\|(A_{n}\Sigma_{n-1})e_{1}\wedge(A_{n}\Sigma_{n-1})e_{2} \wedge\cdots\wedge(A_{n}\Sigma_{n-1})e_{k}\right\|\] \[=\left\|(A_{n}e_{1})s_{1}(\Sigma_{n-1})\wedge(A_{n}e_{2})s_{2}( \Sigma_{n-1})\wedge\cdots\wedge(A_{n}e_{k})s_{k}(\Sigma_{n-1})\right\|\] \[=s_{1}^{k}(\Sigma_{n-1})\left\|c_{1}(A_{n})\wedge c_{2}(A_{n}) \wedge\cdots\wedge c_{k}(A_{n})\right\|\] \[=s_{1}^{k}(\Sigma_{n-1})\left\|c_{1}^{\perp}(A_{n})\right\| \big{\|}c_{2}^{\perp}(A_{n})\big{\|}\cdots\big{\|}c_{k}^{\perp}(A_{n}) \big{\|},\] where \(\wedge\) denotes the wedge product. For \(n\in\mathbb{N}\) and \(1\leq k\leq d\), let \(X_{n}^{k}:=\big{\|}c_{1}^{\perp}(A_{n})\big{\|}\big{\|}c_{2}^{\perp}(A_{n}) \big{\|}\cdots\big{\|}c_{k}^{\perp}(A_{n})\big{\|}\). Then \(X_{1}^{k},X_{2}^{k},\dots\) is a sequence of i.i.d. random variables. Since \(\Theta(A)\leq\Theta(A)\) for all \(n\geq N\), we have \[\left|\frac{s_{k+1}(\Sigma_{n-1})}{s_{j}(\Sigma_{n})}\right|\ \(\|c_{i}^{\perp}(A)\|\leq\|A\|\), we see, using Lemma 6 and Corollary 7 that \(\log\|c_{i}^{\perp}(A)\|\) is integrable. We have \[s_{1}^{k}(\Sigma_{n}) =\frac{s_{1}^{k}(\Sigma_{n})}{s_{1}^{k}(B_{n})}s_{1}^{k}(B_{n})\] \[=\frac{s_{1}^{k}(\Sigma_{n})}{s_{1}^{k}(B_{n})}X_{n}^{k}s_{1}^{k} (\Sigma_{n-1}).\] Using induction, we obtain \[s_{1}^{k}(\Sigma_{n})=\left(\prod_{j=1}^{n}\frac{s_{1}^{k}(\Sigma_{j})}{s_{1}^ {k}(B_{j})}\right)X_{1}^{k}\ldots X_{n}^{k}.\] Hence \[\tfrac{1}{n}\log s_{1}^{k}(\Sigma_{n})=\frac{1}{n}\sum_{j=1}^{n}\log\frac{s_{ 1}^{k}(\Sigma_{j})}{s_{1}^{k}(B_{j})}+\frac{1}{n}\sum_{j=1}^{n}\log X_{j}^{k}.\] By (1), the first term on the right side converges almost surely to \(0\) and by the Strong Law of Large Numbers the second term converges almost surely to \(\mathbb{E}\log X_{1}^{k}\). 
Hence we obtain \[\lambda_{1}+\ldots+\lambda_{k}=\mathbb{E}\big{(}\log\|c_{1}^{\perp}(I+ \epsilon N)\|+\ldots+\log\|c_{k}^{\perp}(I+\epsilon N)\|\big{)}.\] Subtracting the \((k-1)\)-fold partial sum from the \(k\)-fold partial sum, we obtain \[\lambda_{k}=\mathbb{E}\log\|c_{k}^{\perp}(I+\epsilon N)\|,\] as required. This gives us an explicit description of \(\lambda_{k}\). However it is difficult to compute for large matrices. In the next section we find an approximation for \(\lambda_{k}\) which is easier to compute. ## 4 An approximation for \(\lambda_{j}\) In this section we focus on the case where \(A\sim I_{d}+\epsilon N\) and introduce the computationally simpler vectors \(c_{j}^{\prime}(A)\) approximating \(c_{j}^{\perp}(A)\), defined by \(c_{1}^{\prime}(A)=c_{1}(A)\) and \[c_{k}^{\prime}(A)=c_{k}(A)-\sum_{1\leq j<k}\left\langle c_{j}(A),c_{k}(A) \right\rangle c_{j}(A)\] With the same setup as in the previous section, when \(|\epsilon\log\epsilon|<(100d)^{-1}\) we have **Theorem 9**.: _For any \(d\in\mathbb{N}\), if \(A_{1}\sim I_{d}+\epsilon N\) and \(1\leq k\leq d\) then \(\mathbb{E}\log\left\|c_{k}^{\perp}\right\|=\mathbb{E}\log\left\|c_{k}^{\prime} \right\|+O(\epsilon^{4}|\log\epsilon|^{4})\)._ We will say that \(A=I+\epsilon N\) is _bad_ if \(|N_{ij}|>|\log\epsilon|\) for some \(i,j\). Let \(\mathsf{bad}\) denote the event that \(A\) is bad. We first control the contribution to \(\mathbb{E}\log\left\|c_{k}^{\perp}\right\|-\mathbb{E}\log\left\|c_{k}^{\prime}\right\|\) coming from the bad set. **Lemma 10**.: _Let \(\epsilon>0\). Then_ \[\mathbb{E}\big{(}\mathbb{1}_{\mathsf{bad}}\big{|}\log\|c_{j}^{ \perp}(I+\epsilon N)\|\big{|}\big{)} =O(|\log\epsilon|e^{-(\log\epsilon)^{2}/2});\text{ and }\] \[\mathbb{E}\big{(}\mathbb{1}_{\mathsf{bad}}\big{|}\log\|c_{j}^{ \prime}(I+\epsilon N)\|\big{|}\big{)} =O(|\log\epsilon|e^{-(\log\epsilon)^{2}/2}).\] Proof.: We write \(c_{j}^{\perp}\) and \(c_{j}^{\prime}\) for \(c_{j}^{\perp}(I+\epsilon N)\) and \(c_{j}^{\prime}(I+\epsilon N)\) respectively. We control the positive parts \(\log^{+}\left\|c_{j}^{\prime}\right\|\) and \(\log^{+}\left\|c_{j}^{\perp}\right\|\), and the negative parts \(\log^{-}\left\|c_{j}^{\prime}\right\|\) and \(\log^{-}\left\|c_{j}^{\perp}\right\|\). For the positive parts, notice that \(\left\|c_{j}^{\perp}\right\|\leq\left\|c_{j}\right\|\leq\sum_{i,j}|a_{ij}|\) and \(\left\|c_{j}^{\prime}\right\|\leq\left(1+\sum_{i,j}|a_{ij}|\right)^{3}\). The set \(\mathsf{bad}\) is a union of \(d^{2}\) parts of the form \(\mathsf{bad}_{ij}=\{N\colon|N_{ij}|>|\log\epsilon|\}\). Using the bound \(\log^{+}(x)\leq x\), this gives \[\mathbb{E}\big{(}\mathbb{1}_{\mathsf{bad}}\log^{+}\left\|c_{j}^{ \perp}\right\|\big{)} \leq\sum_{i,j}\int_{\mathsf{bad}_{i,j}}\Big{(}d+\epsilon\sum_{k,l }|x_{kl}|\Big{)}f_{X}\big{(}(x_{kl})\big{)}d(x_{kl})\] \[\leq d^{2}\int_{\mathsf{bad}_{1,1}}\big{(}d+\epsilon d^{2}|x_{11} |\big{)}\,f_{N}(x_{11})dx_{11}\] \[=O(\exp(-(\log\epsilon)^{2}/2)).\] A similar argument holds for \(\mathbb{E}\big{(}\mathbb{1}_{\mathsf{bad}}\log^{+}\left\|c_{j}^{\prime}\right\| \big{)}\). To control \(\mathbb{E}\big{(}\mathbb{1}_{\mathsf{bad}}\log^{-}\left\|c_{j}^{\perp}\right\| \big{)}\) and \(\mathbb{E}\big{(}\mathbb{1}_{\mathsf{bad}}\log^{-}\left\|c_{j}^{\prime}\right\| \big{)}\), recall \(\left\|c_{j}^{\perp}\right\|\) and \(\left\|c_{j}^{\prime}\right\|\) are bounded below by \(\Theta(A)\). By standard estimates on the tail of the normal distribution, \(\mathbb{P}(\mathsf{bad})=O(e^{-(\log\epsilon)^{2}/2}/|\log\epsilon|)\). 
We see from Lemma 6, \(\mathbb{E}\log^{-}\Theta(I+\epsilon N)\mathbb{1}_{\mathsf{bad}}=O(|\log \epsilon|e^{-(\log\epsilon)^{2}/2})\), which gives the required estimates. We now give pointwise estimates for \(\big{|}\log\left\|c_{k}^{\perp}\right\|-\log\left\|c_{k}^{\prime}\right\|\) when \(A\) is not bad. That is, when \(A=I+\epsilon N_{ij}\) where \(|N_{ij}|\leq|\log\epsilon|\) for all \(i,j\). **Lemma 11**.: _There exist \(\epsilon_{0}>0\) and \(C>0\) depending only on \(d\) such that for all matrices \(A\) of the form \(A=I+\epsilon X\) where \(|X_{ij}|\leq|\log\epsilon|\) for each \(i,j\), then for each \(k\),_ \[\Big{|}\log\big{\|}c_{k}^{\perp}(A)\big{\|}-\log\big{\|}c_{k}^{\prime}(A)\big{\|} \Big{|}\leq C(\epsilon|\log\epsilon|)^{4}\text{ for all }\epsilon<\epsilon_{0}.\] As usual, we write \(c_{j}\), \(c_{j}^{\perp}\) and \(c_{j}^{\prime}\) in place of \(c_{j}(A)\), \(c_{j}^{\perp}(A)\) and \(c_{j}^{\prime}(A)\) for brevity. We define \(\alpha_{i}^{j}:=\frac{\left\langle c_{i}^{\perp},c_{j}\right\rangle}{\big{\|} c_{i}^{\perp}\big{\|}^{2}}\) so that \(c_{j}^{\perp}=c_{j}-\sum_{i<j}\alpha_{i}^{j}c_{i}^{\perp}\). Throughout the proof, we let \(\eta=|\log\epsilon|\). We let \(\epsilon_{0}\) be sufficiently small that \(\epsilon\eta<1/(100d)\) for \(\epsilon<\epsilon_{0}\). The proof makes use of a number of claims. **Claim 1**.: _Let \(A=I+\epsilon X\) where \(|X_{ij}|\leq\eta\) for all \(i,j\). For all \(1\leq n\leq d\), the following hold:_ 1. \(|\left\|c_{n}\right\|^{2}-1|\leq 2\epsilon\eta+d\eta^{2}\epsilon^{2}\leq 3 \epsilon\eta\)_;_ 2. \(|\left\|c_{n}^{\perp}\right\|^{2}-1|\leq 3\epsilon\eta\)_;_ 3. \(|\alpha_{i}^{k}|\leq 6\epsilon\eta\) _for all_ \(i\leq n\) _and_ \(k>i\)_;_ 4. \(|\left\langle c_{n}^{\perp},c_{k}\right\rangle|\leq 3\epsilon\eta\) _for all_ \(k>n\)_._ Proof.: Since \(|X_{ij}|\leq\eta\) for all \(i,j\), for any \(1\leq n\leq d\) and \(i<j\) we have \[|\left\|c_{n}\right\|^{2}-1| \leq 2\epsilon\eta+d\epsilon^{2}\eta^{2}\quad\text{and}\] \[|\left\langle c_{i},c_{j}\right\rangle| \leq 2\epsilon\eta+d\epsilon^{2}\eta^{2}.\] This shows (i) for all \(n\), as well as (ii), (iii) and (iv) for \(n=1\). Now suppose for some \(2\leq j\leq d\), (ii)-(iv) each hold for all \(n\leq j-1\). Then for all \(k>j\) we have \[\left\langle c_{j}^{\perp},c_{k}\right\rangle=\left\langle c_{j}-\sum_{i<j} \alpha_{i}^{j}c_{i}^{\perp},c_{k}\right\rangle=\left\langle c_{j},c_{k}\right \rangle-\sum_{i<j}\alpha_{i}^{j}\langle c_{i}^{\perp},c_{k}\right\rangle\] This implies \[\left|\left\langle c_{j}^{\perp},c_{k}\right\rangle\right| \leq|\left\langle c_{j},c_{k}\right\rangle|+\sum_{i<j}|\alpha_{i }^{j}|\big{|}\big{\langle}c_{i}^{\perp},c_{k}\big{\rangle}\big{|}\] \[\leq(2\epsilon\eta+d\epsilon^{2}\eta^{2})+d\cdot(6\epsilon\eta)( 3\epsilon\eta)\] \[\leq 3\epsilon\eta,\] where we used (i) and the induction hypotheses in the second line and the condition on \(\epsilon_{0}\) in the third line. This establishes (iv) for \(n=j\). 
Since \(c_{1}^{\perp},\ldots,c_{j}^{\perp}\) are mutually perpendicular, it follows that \[\big{\|}c_{j}\big{\|}^{2}=\big{\|}c_{j}^{\perp}\big{\|}^{2}+\sum_{i<j}(\alpha_{i }^{j})^{2}\big{\|}c_{i}^{\perp}\big{\|}^{2}\] Thus we have \[\Big{|}\big{\|}c_{j}^{\perp}\big{\|}^{2}-1\Big{|} =\bigg{|}\big{\|}c_{j}\|^{2}-1+\sum_{i<j}(\alpha_{i}^{j})^{2} \big{\|}c_{i}^{\perp}\big{\|}^{2}\bigg{|}\] \[\leq\big{|}\big{\|}c_{j}\big{\|}^{2}-1\Big{|}+\sum_{i<j}(\alpha_{ i}^{j})^{2}\big{\|}c_{i}^{\perp}\big{\|}^{2}\] \[\leq(2\epsilon\eta+d\epsilon^{2}\eta^{2})+d(6\epsilon\eta)^{2}(1 +3\epsilon\eta)\] \[\leq 3\epsilon\eta,\] establishing (ii) for \(n=j\). We show that (iii) holds for \(n=j\). Since by the induction hypothesis, \(|\alpha_{i}^{k}|\leq 6\epsilon\eta\) for all \(i<j\) and \(k>i\), if suffices to show that \(|\alpha_{j}^{k}|\leq 6\epsilon\eta\) for all \(k>j\). For any \(k>j\), using (iv), we have \[\big{|}\alpha_{j}^{k}\big{|}=\frac{\big{|}\big{\langle}c_{j}^{\perp},c_{k} \big{\rangle}\big{|}}{\big{\|}c_{j}^{\perp}\big{\|}^{2}}\leq\frac{3\epsilon \eta}{1/2}=6\epsilon\eta\] which shows that (iii) holds for \(n=j\). **Claim 2**.: _For each \(1\leq n\leq d\), \(c_{n}^{\perp}=c_{n}+\sum_{j<n}\beta_{j}^{n}c_{j}\) where \(|\beta_{j}^{n}|<7\epsilon\eta\)._ Proof.: We use induction on \(j\). The base case is \(c_{1}^{\perp}=c_{1}\). Suppose the claim holds for all \(n<j\leq d\). Then \[c_{j}^{\perp} =c_{j}-\sum_{i<j}\alpha_{i}^{j}c_{i}^{\perp}\] \[=c_{j}-\sum_{i<j}\alpha_{i}^{j}\Big{(}c_{i}+\sum_{\ell<i}\beta_{ \ell}^{i}c_{\ell}\Big{)}\] \[=c_{j}-\sum_{\ell<j}\alpha_{\ell}^{j}c_{\ell}-\sum_{i<j}\alpha_{i }^{j}\sum_{\ell<i}\beta_{\ell}^{i}c_{\ell}\] \[=c_{j}-\sum_{\ell<j}\alpha_{\ell}^{j}c_{\ell}-\sum_{\ell<j-1} \Big{(}\sum_{i=\ell+1}^{j-1}\alpha_{i}^{j}\beta_{\ell}^{i}\Big{)}c_{\ell}\] For any \(\ell<j\), the coefficient of \(c_{\ell}\) in the above expression is bounded by \[\left|\alpha_{\ell}^{j}\right|+\sum_{i=\ell+1}^{j-1}\left|\alpha_{i}^{j}\beta_{ \ell}^{i}\right|\leq 6\epsilon\eta+d(6\epsilon\eta)(7\epsilon\eta)\leq 7\epsilon\eta\] **Claim 3**.: _For all \(1\leq j\leq d\), \(c_{j}^{\prime}=c_{j}^{\perp}+\sum_{n<j}\gamma_{n}c_{n}\) where \(\gamma_{n}=O(\epsilon^{2}\eta^{2})\), where the implicit constant depends only on \(d\)_ Proof.: For any such \(j\) we have \[c_{j}^{\prime}-c_{j}^{\perp}=\sum_{i<j}\bigg{(}\frac{\left\langle c_{i}^{ \perp},c_{j}\right\rangle}{\left\langle c_{i}^{\perp},c_{i}^{\perp}\right\rangle }c_{i}^{\perp}-\left\langle c_{i},c_{j}\right\rangle c_{i}\bigg{)}.\] We identify the coefficient of \(c_{\ell}\) when \(c_{j}^{\prime}-c_{j}^{\perp}\) is expanded in the basis \((c_{k})\). 
That coefficient may be seen to be \[\frac{\left\langle c_{\ell}^{\perp},c_{j}\right\rangle}{\left\langle c _{\ell}^{\perp},c_{\ell}^{\perp}\right\rangle}-\left\langle c_{\ell},c_{j} \right\rangle+\sum_{\ell<i<j}\frac{\left\langle c_{i}^{\perp},c_{j}\right\rangle }{\left\langle c_{i}^{\perp},c_{i}^{\perp}\right\rangle}\beta_{\ell}^{i}\] \[= \frac{\left\langle c_{\ell}^{\perp},c_{j}\right\rangle-\left\langle c _{\ell},c_{j}\right\rangle}{\left\langle c_{\ell}^{\perp},c_{\ell}^{\perp} \right\rangle}+\frac{\left\langle c_{\ell},c_{j}\right\rangle\left(1-\left\langle c _{\ell}^{\perp},c_{\ell}^{\perp}\right\rangle\right)}{\left\langle c_{\ell}^{ \perp},c_{\ell}^{\perp}\right\rangle}+O(\epsilon^{2}\eta^{2}),\] where we added and subtracted \(\left\langle c_{\ell},c_{j}\right\rangle/\left\langle c_{\ell}^{\perp},c_{ \ell}^{\perp}\right\rangle\); and the estimate for the third term follows from Claims 1 and 2. Since \(\left\langle c_{\ell}^{\perp},c_{j}\right\rangle-\left\langle c_{\ell},c_{j} \right\rangle=-\left\langle\sum_{i<\ell}\beta_{i}^{\ell}c_{i},c_{j}\right\rangle\), the estimates in Claims 1 and 2 show the first term is \(O(\epsilon^{2}\eta^{2})\). Finally since \(\left\langle c_{\ell},c_{j}\right\rangle=O(\epsilon\eta)\) and \(1-\left\langle c_{\ell}^{\perp},c_{\ell}^{\perp}\right\rangle\) is \(O(\epsilon\eta)\) by Claim 1, the middle term is also \(O(\epsilon^{2}\eta^{2})\). Proof of Lemma 11.: By orthogonality, \[\left\|c_{j}^{\prime}\right\|^{2}=\left\|c_{j}^{\perp}+\sum_{n<j}\gamma_{n}c_ {n}\right\|^{2}=\left\|c_{j}^{\perp}\right\|^{2}+\bigg{\|}\sum_{n<j}\gamma_{n}c _{n}\bigg{\|}^{2},\] where \(\gamma_{n}\) is as in Claim 3. Since \(\gamma_{n}=O(\epsilon^{2}\eta^{2})\), we obtain \(\left\|c_{j}^{\prime}\right\|^{2}=\left\|c_{j}^{\perp}\right\|^{2}+O( \epsilon^{4}\eta^{4})\). Since \(\left\|c_{j}^{\perp}\right\|^{2}\) is in the range \((\frac{1}{2},\frac{3}{2})\), it follows that \(\left|\log\left\|c_{j}^{\prime}\right\|-\log\left\|c_{j}^{\perp}\right\| \right|=O(\epsilon^{4}\eta^{4})\) as required. Proof of Theorem 9.: Lemma 10 shows that \[\left|\mathbb{E}\big{(}\log\|c_{k}^{\perp}\|-\log\|c_{k}^{\prime}\| \big{)}\mathbf{1}_{\mathsf{bad}}\big{)}\right|\] \[\leq\mathbb{E}\big{(}\log\|c_{k}^{\perp}\|\mathbf{1}_{\mathsf{bad} }\big{)}+\mathbb{E}\big{(}\log\|c_{k}^{\prime}\|\mathbf{1}_{\mathsf{bad}}\big{)}\] \[=O(|\log\epsilon|e^{-(\log\epsilon)^{2}}).\] and Lemma 11 shows that \(\big{|}\log\big{\|}c_{k}^{\perp}\|-\log\big{\|}c_{k}^{\prime}\big{\|}\big{|} \mathbf{1}_{\mathsf{bad}^{c}}=O(\epsilon^{4}|\log\epsilon|^{4})\). Taking the expectation of this and combining the estimates gives the theorem. ## 5 Computing \(\mathbb{E}\log\|c_{k}^{\prime}\|\) Finally, we find the dominant term in the asymptotic expansion for \(\mathbb{E}\log\|c_{j}^{\prime}\|\) in the same setup as the previous section. This is Theorem 2 which we restate here for convenience. **Theorem**.: _Consider an orthogonal-plus-Gaussian cocycle as in Theorem 1. Then the Lyapunov exponents satisfy_ \[\lambda_{k}(\epsilon)=(d-2k)\tfrac{\epsilon^{2}}{2}+O(\epsilon^{4}|\log \epsilon|^{4})\text{ as }\epsilon\to 0.\] As in the previous sections, let \(A=I_{d}+\epsilon N\) where \(N\) is a standard Gaussian matrix random variable. Proof.: Let \(\eta=|\log\epsilon|\) and let \(\mathsf{bad}\) be defined as above. We assume \(\epsilon\) is sufficiently small that \(\|c_{j}^{\prime}(I+\epsilon N)\|^{2}\in(\tfrac{1}{2},\tfrac{3}{2})\) for all \(N\in\mathsf{bad}^{c}\). 
Expanding, we have that \[\|c_{j}^{\prime}\|^{2}=\left\langle c_{j}-\sum_{i<j}\langle c_{i},c_{j}\rangle c_{i}\,,\,c_{j}-\sum_{k<j}\langle c_{k},c_{j}\rangle c_{k}\right\rangle\] \[=\|c_{j}\|^{2}-2\sum_{i<j}\langle c_{i},c_{j}\rangle^{2}+\sum_{i, k<j}\langle c_{i},c_{j}\rangle\langle c_{k},c_{j}\rangle\langle c_{i},c_{k}\rangle\] \[=\|c_{j}\|^{2}-2\sum_{i<j}\langle c_{i},c_{j}\rangle^{2}+\sum_{i< j}\langle c_{i},c_{j}\rangle^{2}\|c_{i}\|^{2}+2\sum_{i<k<j}\langle c_{i},c_{j} \rangle\langle c_{k},c_{j}\rangle\langle c_{i},c_{k}\rangle\] \[=\|c_{j}\|^{2}-\sum_{i<j}\langle c_{i},c_{j}\rangle^{2}(2-\|c_{i} \|^{2})+2\sum_{i<k<j}\langle c_{i},c_{j}\rangle\langle c_{k},c_{j}\rangle \langle c_{i},c_{k}\rangle,\] where to obtain the third line from the second, we separated the case \(i=k\) from the case \(i\neq k\). We take a finite Taylor expansion, valid for \(t\in(-1,1)\): \(\log(1+t)=t-\tfrac{t^{2}}{2}+\tfrac{t^{3}}{3}-R(t)\) where \(R(t)=\tfrac{1}{4}(1+\xi)^{-4}t^{4}\) for some \(\xi\) with \(|\xi|\leq|t|\). Let \(X_{j}\) be the random variable \(\|c^{\prime}_{j}(I+\epsilon N)\|^{2}-1\). Notice from the above that \(X_{j}\) is a polynomial of degree \(6\) (whose coefficients don't depend on \(\epsilon\)) in the entries of \(\epsilon N\). If \(N=0\), then \(c^{\prime}_{j}(I+\epsilon N)=e_{j}\) so that the constant term in the polynomial \(X_{j}\) is \(0\). Notice also that by Claim 1, on \(\mathsf{bad}^{c}\), all terms other than the first term in the expression for \(\|c^{\prime}_{j}\|^{2}\) are \(O(\epsilon^{2}|\log\epsilon|^{2})\), while a calculation shows that \(\|c_{j}\|^{2}=1+O(\epsilon|\log\epsilon|)\). Hence \(X_{j}\mathbf{1}_{\mathsf{bad}^{c}}=O(\epsilon|\log\epsilon|)\). Let \(Y_{j}=X_{j}-\frac{1}{2}X_{j}^{2}+\frac{1}{3}X_{j}^{3}\), so that \(Y_{j}\) is another polynomial in the entries of \(\epsilon N\) with no constant term. Combining the above, on \(\mathsf{bad}^{c}\) \[\log(\|c^{\prime}_{j}(I+\epsilon N)\|^{2})=\log(1+X_{j})=Y_{j}+O(\epsilon^{4} |\log\epsilon|^{4}).\] Then we have \[\begin{split}&\mathbb{E}\log(\|c^{\prime}_{j}(I+\epsilon N)\|^{2}) \\ &=\mathbb{E}\log(\|c^{\prime}_{j}(I+\epsilon N)\|^{2}\mathbf{1}_{ \mathsf{bad}^{c}})+\mathbb{E}\log(\|c^{\prime}_{j}(I+\epsilon N)\|^{2} \mathbf{1}_{\mathsf{bad}})\\ &=\mathbb{E}(Y_{j}\mathbf{1}_{\mathsf{bad}^{c}})+O(\epsilon^{4} |\log\epsilon|^{4})+\mathbb{E}\log(\|c^{\prime}_{j}(I+\epsilon N)\|^{2} \mathbf{1}_{\mathsf{bad}})\\ &=\mathbb{E}Y_{j}-\mathbb{E}(Y_{j}\mathbf{1}_{\mathsf{bad}})+ \mathbb{E}\log(\|c^{\prime}_{j}(I+\epsilon N)\|^{2}\mathbf{1}_{\mathsf{bad}} )+O(\epsilon^{4}|\log\epsilon|^{4}).\end{split} \tag{2}\] Since \(Y_{j}\) is a fixed polynomial function of the entries of \(\epsilon N\), and all monomials that are products of entries \(N\) have finite expectation, we see that \(\mathbb{E}Y_{j}\) agrees up to order \(\epsilon^{4}\) with the expectation of its terms of degree \(3\) or lower. Also, since the entries of \(N\) are independent and each has a symmetric distribution, the constant term of \(Y_{j}\) being \(0\), the only terms that give a non-zero contribution to \(\mathbb{E}Y_{j}\) are the terms of the forms \(N_{ab}^{2}\). Since the lowest order terms in \(Y_{j}\) are polynomials of degree \(1\), and \(Y_{j}=X_{j}-\frac{1}{2}X_{j}^{2}+\frac{1}{3}X_{j}^{3}\), the terms of the form \(N_{ab}^{2}\) in \(Y_{j}\) are those appearing in \(X_{j}\) and \(\frac{1}{2}X_{j}^{2}\). 
We established above \[X_{j}=\|c_{j}\|^{2}-1-\sum_{i<j}\langle c_{i},c_{j}\rangle^{2}(2-\|c_{i}\|^{2 })+2\sum_{i<k<j}\langle c_{i},c_{j}\rangle\langle c_{k},c_{j}\rangle\langle c _{i},c_{k}\rangle.\] We see that \(\|c_{j}\|^{2}-1=2\epsilon N_{jj}+\epsilon^{2}\sum_{i}N_{ij}^{2}\) and \(\langle c_{i},c_{j}\rangle=\epsilon(N_{ij}+N_{ji})+\epsilon^{2}\sum_{k}N_{ki} N_{kj}\). Substituting these in the expression for \(X_{j}\), we see \[\begin{split}\mathbb{E}X_{j}&=d\epsilon^{2}-\epsilon ^{2}\sum_{i<j}\mathbb{E}(N_{ij}+N_{ji})^{2}+O(\epsilon^{4})\\ &=(d-2j+2)\epsilon^{2}+O(\epsilon^{4}).\end{split}\] We also see \(\mathbb{E}X_{j}^{2}=4\epsilon^{2}\mathbb{E}N_{jj}^{2}+O(\epsilon^{4})\). Combining these gives \[\mathbb{E}Y_{j}=\mathbb{E}(X_{j}-\tfrac{1}{2}X_{j}^{2})+O(\epsilon^{4})=(d-2j )\epsilon^{2}+O(\epsilon^{4}).\] Therefore by (2), to finish the argument, it suffices to show \(\mathbb{E}(Y_{j}\mathbf{1}_{\mathtt{bad}})=O(\epsilon^{4}|\log\epsilon|^{4})\) and \(\mathbb{E}\log(\|c^{\prime}_{j}(I+\epsilon N)\|^{2}\mathbf{1}_{\mathtt{bad}})=O (\epsilon^{4}|\log\epsilon|^{4})\). Since \(\|c^{\prime}_{j}(A)\|\geq\Theta(A)\), Lemma 6 shows \[\mathbb{E}(\log^{-}\|c^{\prime}_{j}(I+\epsilon N)\|\mathbf{1}_{\mathtt{bad}}) =O(|\log\epsilon|^{2}e^{-(\log\epsilon)^{2}/2}).\] Since \(\|c^{\prime}_{j}(A)\|\leq 2(\sum_{k,l}|A_{kl}|)^{3}\), we see \(\mathbb{E}(\log^{+}\|c^{\prime}_{j}(I+\epsilon N)\|^{2}\mathbf{1}_{\mathtt{bad }})=O(\mathbb{P}(\mathtt{bad}))=O(e^{-|\log\epsilon|^{2}/2}/|\log\epsilon|)\). Finally, for any of the (finitely many) monomial terms \(M\) appearing in \(Y_{j}\), we can check \(\mathbb{E}M\mathbf{1}_{\mathtt{bad}}=O(\mathbb{P}(\mathtt{bad}))=O(e^{-|\log \epsilon|^{2}/2}/|\log\epsilon|)\). This completes the proof.
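As an informal numerical check of the two results above — the exact expression \(\lambda_{k}=\mathbb{E}\log\|c_{k}^{\perp}(I+\epsilon N)\|\) and the leading-order approximation \(\lambda_{k}(\epsilon)=(d-2k)\tfrac{\epsilon^{2}}{2}+O(\epsilon^{4}|\log\epsilon|^{4})\) — the following sketch estimates the expectation by Monte Carlo. It is not part of the proof; the choices of \(d\), \(\epsilon\), and the sample size are illustrative only, and the \(\epsilon\) used is larger than the regime the theorem formally requires so that the Monte Carlo noise stays small relative to the signal.

```python
import numpy as np

def gram_schmidt_norms(A):
    """Return (||c_1^perp(A)||, ..., ||c_d^perp(A)||): each column of A is
    orthogonalised (without normalisation) against the previous orthogonalised
    columns, as in Section 3."""
    d = A.shape[1]
    ortho = []                      # the vectors c_i^perp
    norms = np.empty(d)
    for k in range(d):
        v = A[:, k].copy()
        for u in ortho:
            # alpha_i^k = <c_i^perp, c_k> / ||c_i^perp||^2
            v -= (u @ A[:, k]) / (u @ u) * u
        ortho.append(v)
        norms[k] = np.linalg.norm(v)
    return norms

def lyapunov_mc(d=5, eps=0.05, trials=100_000, seed=0):
    """Monte Carlo estimate of lambda_k = E log ||c_k^perp(I + eps*N)||, k = 1..d."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(d)
    for _ in range(trials):
        A = np.eye(d) + eps * rng.standard_normal((d, d))
        acc += np.log(gram_schmidt_norms(A))
    return acc / trials

if __name__ == "__main__":
    d, eps = 5, 0.05
    mc = lyapunov_mc(d, eps)
    for k in range(1, d + 1):
        approx = (d - 2 * k) * eps**2 / 2   # leading term of Theorem 2
        print(f"lambda_{k}:  MC {mc[k-1]:+.2e}   (d-2k)eps^2/2 {approx:+.2e}")
```

For small \(\epsilon\) the Monte Carlo averages should agree with the leading term up to statistical error and the \(O(\epsilon^{4}|\log\epsilon|^{4})\) correction.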
2308.00180
General Anomaly Detection of Underwater Gliders Validated by Large-scale Deployment Datasets
Underwater gliders have been widely used in oceanography for a range of applications. However, unpredictable events like shark strikes or remora attachments can lead to abnormal glider behavior or even loss of the instrument. This paper employs an anomaly detection algorithm to assess operational conditions of underwater gliders in the real-world ocean environment. Prompt alerts are provided to glider pilots upon detecting any anomaly, so that they can take control of the glider to prevent further harm. The detection algorithm is applied to multiple datasets collected in real glider deployments led by the University of Georgia's Skidaway Institute of Oceanography (SkIO) and the University of South Florida (USF). In order to demonstrate the algorithm generality, the experimental evaluation is applied to four glider deployment datasets, each highlighting various anomalies happening in different scenes. Specifically, we utilize high resolution datasets only available post-recovery to perform detailed analysis of the anomaly and compare it with pilot logs. Additionally, we simulate the online detection based on the real-time subsets of data transmitted from the glider at the surfacing events. While the real-time data may not contain as much rich information as the post-recovery one, the online detection is of great importance as it allows glider pilots to monitor potential abnormal conditions in real time.
Ruochu Yang, Chad Lembke, Fumin Zhang, Catherine Edwards
2023-07-31T22:29:16Z
http://arxiv.org/abs/2308.00180v3
# General Anomaly Detection of Underwater Gliders ###### Abstract Underwater gliders have been widely used in oceanography for a range of applications. However, unpredictable events like shark strikes or remora attachments can lead to abnormal glider behavior or even loss of the instrument. This paper employs an anomaly detection algorithm to assess operational conditions of underwater gliders in the real-world ocean environment. Prompt alerts are provided to glider pilots upon detecting any anomaly, so that they can take control of the glider to prevent further harm. The detection algorithm is applied to multiple datasets collected in real glider deployments led by the University of Georgia's Skidaway Institute of Oceanography (SkIO) and the University of South Florida (USF). In order to demonstrate the algorithm generality, the experimental evaluation is applied to four glider deployment datasets, each highlighting various anomalies happening in different scenes. Specifically, we utilize high resolution datasets only available post-recovery to perform detailed analysis of the anomaly and compare it with pilot logs. Additionally, we simulate the online detection based on the real-time subsets of data transmitted from the glider at the surfacing events. While the real-time data may not contain as much rich information as the post-recovery one, the online detection is of great importance as it allows glider pilots to monitor potential abnormal conditions in real time. ## I Introduction Underwater gliders are extensively used in ocean research for activities such as ocean sampling, surveillance, and other purposes [1, 2, 3, 4, 5]. However, given the complexity of the ocean environment and the long-duration of glider missions, unexpected events such as shark attacks, wing loss, or attachment of marine species can cause gliders to operate abnormally or even totally fail [6, 7, 8]. In such cases, the gliders may drift to unexpected areas, making localization and rescue operations challenging. Furthermore, it can be difficult to detect the abnormal behavior of gliders, particularly when external disturbances arise, due to the lack of monitoring devices [9, 10, 11, 12]. The deployment of monitoring devices for gliders or the addition of self-monitoring of performance would increase mission costs and pilot complexity. Typically, glider pilots can only rely on heavily subsetted data transmitted by the glider in real time to form hypotheses about potential anomalies. Sometimes, they just resort to climb and dive ballast data to assess if the glider is surfacing or diving as expected. However, this empirical detection can never be conclusively confirmed as the mission is going on. To address this challenge, we develop an anomaly detection algorithm that systematically utilizes simple glider data such as glider speed, heading, and trajectory. This algorithm is feasible for theoretical validation on numerous real-world glider datasets, and runs autonomously in real-time, as opposed to manual detection by human pilots. By monitoring gliders in real-time, the algorithm allows glider pilots to take appropriate actions promptly to ensure the safety and success of missions. Different strategies have been in the field of underwater robotics to identify abnormal behavior of underwater gliders. Some anomaly detection algorithms focus on changes in robot motion, such as roll angle or pitch angle, to detect possible motion deviation or a foreign object attached to the glider [13, 14]. 
Some algorithms monitor the power consumption or motor performance of the glider, as variations in these parameters can indicate degeneration of individual components, such as propellers and rotors [15, 16, 17]. Other algorithms utilize machine learning techniques to identify anomalous behavior by analyzing sensor data collected by the glider over time, such as changes in the speed, roll, pitch, or depth [18, 19, 20]. However, most of the existing research relies on shore-based manual implementations and does not resolve issues like inability to perform online detection on the gliders or lack of real-time experimental verification. In addition, it is essential to determine whether the detected anomaly is a false positive [21, 22]. When the ocean current speed is significantly greater than the maximum speed of the marine robot, it can lead to considerable performance degradation. Under such circumstances, false alarms should be avoided since the anomaly caused by an unexpected ocean current is unrelated to the glider itself. In practice, it is challenging to separate flow speed and glider speed due to hardware limitations, but leveraging the Controlled Lagrangian Particle Tracking (CLPT) framework [23], the anomaly detection algorithm in [24] generates real-time estimates of the glider speed and flow speed from the trajectory and heading angles. The estimated glider speed is compared with the normal speed range to detect anomalies, while the algorithm-estimated flow speed is compared with the glider-estimated flow speed to avoid false alarms. We initially validate the anomaly detection algorithm by using two real-life deployment datasets [25]. Building upon this previous work, we aim to extend the algorithm to large-scale datasets, thus effectively handling various anomalies in diverse missions. We also plan to simulate online implementation of the detection to enable real-time interaction with glider pilots. These objectives constitute the primary motivation of this paper, and our main contributions are summarized as follows. * We demonstrate generality of the anomaly detection algorithm based on four glider datasets collected in real deployments featuring diverse anomalies. * We simulate online mode implementation of the algorithm on a real glider deployment with limited data streams in real time for the first time. The SouthEast Coastal Ocean Observing Regional Association (SECOORA) glider Franklin, operated by Skidaway Institute of Oceanography (SkIO), and the University of South Florida (USF) gliders USF-Sam, USF-Gansett, and USF-Stella provide numerous examples of valuable experimental data in which anomalies may be associated with marine bio-hazards. Promising anomaly detection results of these datasets are shown to match glider pilots' hindcast analysis well. Building off its efficacy, the real-time anomaly detection algorithm is incorporated into the autonomous glider navigation software GENIoS_Python [26] to better assist human pilots as an add-on warning functionality. This paper is organized as follows. Section II illustrates the framework of the anomaly detection algorithm. Section III describes the experimental setup of glider deployments, verifies the algorithm by detecting anomalies in large-scale real experiments, and simulates the online implementation on subsetted glider datasets. Section IV provides conclusions and future work. ## II Anomaly Detection Algorithm The pipeline of anomaly detection and false alarm elimination is shown in Fig. 1.
By generating the glider speed estimate, the algorithm assumes no anomaly if the estimate is within the normal speed range. Otherwise, the glider may be encountering issues. The flow speed estimate is checked against the glider-estimated flow speed to circumvent any false alarm. We model the glider dynamics as follows: \[\dot{x}=F_{R}(x,t)+V_{R}(t)\Psi_{c}(t)\] \[\Psi_{c}(t)=[\cos\psi_{c}(t),\sin\psi_{c}(t)]^{T}, \tag{1}\] where \(F_{R}\) is the true flow field, \(x\) is the true glider position, \(V_{R}\) is the true glider speed, and \(\psi_{c}\) is the true heading angle. As shown in (2), we model the ocean flow field by spatial-temporal basis functions [27], \[F_{R}(x,t)=\theta\phi(x,t) \tag{2}\] where \(\theta\) is the unknown parameter to be estimated, \[\phi=\begin{bmatrix}\phi^{1}(x,t)&\cdots&\phi^{N}(x,t)\end{bmatrix}^{T} \tag{3}\] \[\phi^{i}(x,t)=\exp^{-\frac{||x-c_{i1}||}{2\sigma_{i}}}\cos(\omega_{i}t+\upsilon _{i}) \tag{4}\] are the basis functions, \(c_{i}\) is the center, \(\sigma_{i}\) is the width, \(\omega_{i}\) is the tidal frequency, \(\upsilon_{i}\) is the tidal phase, and \(N\) is the number of basis functions. Using the heading \(\Psi_{c}(t)\) and the true trajectory \(x(t)\), the detection algorithm can generate estimates of the glider speed \(V_{R}\) and the unknown flow parameter \(\theta\). These estimates will converge to true values as long as the maximum trajectory estimation error (CLLE) converges to zero. The heading data \(\Psi_{c}(t)\) and the glider trajectory \(x(t)\) are always available from full post-recovery Dinkum binary data (DBD) and typically included in subsetted short Dinkum binary data (SBD) sent in real-time. Three gains \(K\), \(\bar{\gamma}\) and \(s\) are designed to accelerate the estimating process. If the estimated glider speed falls within an expected range \(V_{L}(t)\in[V_{min},V_{max}]\), where the maximum glider speed \(V_{max}\) and the minimum glider speed \(V_{min}\) are defined a priori, no anomaly should have happened. Otherwise, the glider may not be operating normally. Additionally, introduce \(F_{L}(t)\) as the algorithm-estimated flow speed, and \(F_{M}(t)\) as the glider-estimated flow speed which can be generated by ocean models or sensor measurements. By defining a criteria \(p_{E}\), we quantitatively evaluate the flow estimation error \(||F_{M}(t)-F_{L}(t)||\) as \[p_{E}=\frac{||F_{M}(t)-F_{L}(t)||}{2max(\hat{F}_{Lmax},\hat{F}_{Mmax})} \tag{5}\] , where \(\hat{F}_{Lmax}=max(||F_{L}(\tau)||_{\tau\in[0,t]})\) is the maximum algorithm-estimated flow speed until time \(t\), and \(\hat{F}_{Mmax}=max(||F_{M}(\tau)||_{\tau\in[0,t]})\) is the maximum glider-estimated flow speed until time \(t\). If flow estimation error is too large (\(p_{E}>\gamma_{f}\), where \(\gamma_{f}\) is a pre-selected threshold), the detected anomaly is likely a false alarm. Fig. 1: Pipeline of anomaly detection and false alarm elimination. Three steps: 1) generate glider speed estimate and flow speed estimate based on DBD or SBD data; 2) compare glider speed to detect anomalies; 3) compare flow speed for false alarm. ## III Experimental Evaluation We apply the anomaly detection algorithm to four glider deployments across the coastal ocean of Florida and Georgia, USA. For evaluation, the anomaly detected by the algorithm is cross-validated by high-resolution glider DBD data and pilot notes. In particular, we simulate the online detection process on SBD data and compare the result with that detected from DBD data. 
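Before turning to the individual deployments, the following minimal sketch (not the authors' code; array shapes and the scalar basis-function parameters are illustrative simplifications, and the default thresholds follow TABLE I) shows how the two decision rules of Section II can be expressed: the speed-range test on the estimated glider speed and the false-alarm criterion \(p_{E}\) of Eq. (5), with the flow estimate built from the basis functions of Eqs. (2)-(4).

```python
import numpy as np

def basis(x, t, centers, sigma, omega, upsilon):
    """phi^i(x, t) = exp(-||x - c_i|| / (2*sigma_i)) * cos(omega_i*t + upsilon_i), Eqs. (3)-(4)."""
    r = np.linalg.norm(x - centers, axis=1)   # distances to the N basis-function centres c_i
    return np.exp(-r / (2.0 * sigma)) * np.cos(omega * t + upsilon)

def estimated_flow(theta, x, t, centers, sigma, omega, upsilon):
    """F_L(x, t) = theta @ phi(x, t), Eq. (2); theta is the (2, N) parameter estimate."""
    return theta @ basis(x, t, centers, sigma, omega, upsilon)

def speed_anomaly(v_est, v_min=0.15, v_max=0.25):
    """Flag an anomaly when the estimated glider speed leaves [V_min, V_max]."""
    return not (v_min <= v_est <= v_max)

def is_false_alarm(flow_alg_hist, flow_glider_hist, gamma_f=1.0):
    """p_E of Eq. (5): relative disagreement between the algorithm-estimated flow F_L
    and the glider-estimated flow F_M over the history up to the current time."""
    FL = np.atleast_2d(flow_alg_hist)         # rows are F_L(t) over time, columns (u, v)
    FM = np.atleast_2d(flow_glider_hist)      # rows are F_M(t) over time, columns (u, v)
    err = np.linalg.norm(FM[-1] - FL[-1])
    f_max = max(np.linalg.norm(FL, axis=1).max(), np.linalg.norm(FM, axis=1).max())
    p_E = err / (2.0 * f_max)
    return p_E > gamma_f
```

A detected anomaly would only be reported to the pilot when `speed_anomaly(...)` is true and `is_false_alarm(...)` is false, mirroring the pipeline of Fig. 1.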
For reference, the designed parameters are listed in TABLE I. ### _Experimental Setup_ All the gliders used in the experiments are Slocum gliders, which are a type of autonomous underwater vehicles (AUVs) that move by adjusting buoyancy and center of gravity [28]. These gliders are able to perform ocean surveying for months by traveling at \(0.25\)-\(0.35m\cdot s^{-1}\). During the mission, the gliders surface at fixed intervals (usually four hours) to transmit lightweight SBD datasets to the onshore dockserver. It also estimates the average flow speed through dead reckoning. Post recovery, all the datasets are downloaded off the glider and stored as DBD files. The anomaly detection algorithm in Section II is applied to post-recovery DBD data and real-time SBD data in the offline and online mode, respectively, and verified by both the sensor data segments and the glider pilots' notes. The deployment details of four glider deployments are shown in TABLE II along with the Google Earth trajectories in Fig. 2. It is worth mentioning that USF-Sam is piloted under the support of GENIoS_Python [26] in real time, and USF-Sam is simulated by the online detection algorithm to report any potential anomaly. ### _Large-scale Experiments_ The large-scale experiments apply hindcast anomaly detection to full resolution DBD files downloaded from the glider on the shore. For verification, the algorithm-detected anomaly is compared with that directly seen from post-mission DBD data with the highest possible resolution and pilot logs. #### Iii-B1 Franklin From October 12 to October 13, 2022, Franklin experienced two aborts and delays of up to 40 minutes of subsequent surfacings. The glider pilot time believed that Franklin had attracted remoras or had encountered an obstruction on his port wing, resulting in a roll change shown in Fig. (a)a. Its climb to the ocean surface is unexpectedly slow even though flying with climb ballast near the upper limits of extended buoyancy pump, as shown in Fig. (b)b. Applied to the DBD data downloaded from the glider, the detection algorithm guarantees convergence of the estimated trajectory to the true trajectory as shown in Fig. 4. There are four basis functions (four green circles) covering the glider trajectory in the whole flow fields, which is an essential condition for parameter estimation to converge. As shown in Fig. 5, the maximum CLLE is small enough as \(2.5m\), considering the glider moves hundreds of kilometers in the entire deployment, so we can also conclude the convergence of CLLE. The CLLE convergence guarantees the convergence of both the glider speed estimate and the flow speed estimate. For precise comparison, the flow is divided into its West-East (W-E or zonal) component, denoted as \(u\), and its North-South (N-S or meridional) component, denoted as \(v\). The graphs in Fig. (a)a demonstrate that the algorithm-estimated W-E flow is close to the corresponding glider estimate, indicating minimal error in the \(u\) flow estimation. A similar comparison can be observed for the N-S flow, as shown in Fig. (b)b. This comparative analysis provides reliability of the anomaly detection when it is triggered. If the estimated glider speed drops out of the normal speed range, the anomaly should have occurred. As shown in Fig. 7, the estimated glider speed drops out of the normal speed range (green dot line) at around October 13, 2022, 15:00 UTC. The timestamp when the anomaly is detected by the algor Fig. 
2: Google Earth trajectories of glider deployments timestamp detected from the glider team's report and the post-recovery DBD data. Therefore, the algorithm is verified by successfully detecting the anomaly. #### Iv-A2 USF-Sam During the mission, glider pilots suggest that the remora attachment should occur between March 11 and March 12, 2023 UTC when USF-Sam has a couple of roll and pitch changes shown in Fig. (a)a and Fig. (b)b. This suggestion is reinforced by USF-Sam's prolonged period of being stuck at a certain depth. Based on the DBD data, the detection algorithm generates the estimated trajectory, which is close to the true trajectory as shown in Fig. 9. From quantitative analysis in Fig. 10, the maximum CLLE \(45m\) is small enough to conclude the CLLE convergence. We follow the same process of evaluating flow estimation error in Section III-B1. As shown in Fig. 11, the small flow estimation error suggests that the detection result can be trusted. As shown in Fig. 12, the estimated glider speed drops out of the normal speed range (green dot line) at around March 11, 2023, 20:00 UTC. The timestamp when \begin{table} \begin{tabular}{|c||c|c|c|c|} \hline Deployments & Time (UTC) & Area & Mission distance & Glider team & Algorithm \\ \hline Franklin & Oct. 08 - Nov. 01, 2022 & Savannah, Georgia & 600 km & SkIO & Anomaly Detection \\ \hline USF-Sam & Feb. 25 - Mar. 27, 2023 & Gray’s Reef & 610 km & USF & Anomaly Detection \& GENIoS\_Python \\ \hline USF-Gansett & Nov. 10 - Dec. 8, 2021 & Tampa Bay, Florida & 1100 km & USF & Anomaly Detection \\ \hline USF-Stella & Mar. 24 - May 09, 2023 & Clearwater, Florida & 400 km & USF & Anomaly Detection \\ \hline \end{tabular} \end{table} TABLE II: Deployment details Fig. 4: Comparison of the estimated (blue) and true (red) trajectory for the 2022 Franklin deployment. The four green circles are the four basis functions covering the whole trajectory. \begin{table} \begin{tabular}{|c||c|c|c|c|c|} \hline Parameters & Franklin & USF-Sam & USF-Gansett & USF-Stella \\ \hline number of basis functions N & 4 & 4 & 4 & 4 \\ \hline width \(\sigma_{i}\) & 13e3 & 50e3 & 32e3 & 30e3 \\ \hline tidal phase \(\upsilon_{i}\) & 0 & 0 & 0 & 0 \\ \hline tidal frequency \(\omega_{i}\) & \(2\pi\)e-6 & \(2\pi\)e-6 & \(2\pi\)e-6 & \(2\pi\)e-6 \\ \hline gain \(K\) & \(\begin{bmatrix}0.003&0\\ 0&0.003\end{bmatrix}\) & \(\begin{bmatrix}0.002&0\\ 0&0.002\end{bmatrix}\) & \(\begin{bmatrix}0.003&0\\ 0&0.003\end{bmatrix}\) & \(\begin{bmatrix}0.003&0\\ 0&0.003\end{bmatrix}\) \\ \hline gain \(\bar{\gamma}\) & 5e-7 & 5e-7 & 1e-6 & 1e-7 \\ \hline gain \(s\) & 30e-3 & 7e-3 & 18e-3 & 30e-3 \\ \hline false alarm threshold \(\gamma_{f}\) & 1.0 & 1.0 & 1.0 & 1.0 \\ \hline maximum glider speed \(V_{max}\) & 0.25 & 0.25 & 0.25 & 0.25 \\ \hline minimum glider speed \(V_{min}\) & 0.15 & 0.15 & 0.15 & 0.15 \\ \hline \end{tabular} \end{table} TABLE I: Designed parameters of experiments Fig. 3: ground truth for the 2022 Franklin deployment. the anomaly is detected by the algorithm corresponds to the timestamp detected from the glider team's report and the DBD dataset. The simulated online experiment implements anomaly detection using subsetted real-time SBD files transmitted from the glider to the dockserver during the mission. 
For example, the SBD file may contain fewer than 30 variables at 18-1800 s intervals, and is often subsampled to every 3rd or 4th yo (or down-up cycle), compared to the approximately 3000 variables stored at approximately 1 s interval on every yo in the DBD file processed on shore. The algorithm fetches new SBD files from the dockserver, parses SBD data from the SBD files, and applies the detection algorithm to the SBD data in an online mode. The online detection holds unique significance from the perspective that the detection results could help pilots monitor glider conditions in real time, thus circumventing any further loss and DBD data anomaly is only available post recovery. Instead of waiting for the DBD data after the entire mission, the online detection is capable of utilizing the SBD data in real time. As shown in Fig. 13, the detection algorithm can also achieve trajectory convergence similar to using the DBD data. The maximum CLLE \(5m\) in Fig. 14 is sufficiently small given the glider moving range in the ocean. Therefore, we can conclude the CLLE convergence. We follow the same process of evaluating flow estimation error in Section III-B1. As shown in Fig. 15, the small flow estimation error suggests that the online detection result is trustworthy. As shown in Fig. 16, the estimated glider speed drops out of the normal speed range (green dot line) at around March 12, 2023, 03:00 UTC. This result matches reasonably well the above result from the DBD data, which justifies that we can trust the online anomaly detection applied to real-time SBD data. #### Iv-B3 Usf-Gansett At November 12, 2021, 22:32 UTC, the glider USF-Gansett sharply rolled to starboard \(47^{\circ}\) and pitches to \(54^{\circ}\), settling back by 22:36 UTC to a roll of \(11^{\circ}-15^{\circ}\) and normal pitches as shown in Fig. 17a and Fig. 5: CLLE (m) for the 2022 Franklin deployment. Fig. 8: ground truth for the 2023 USF-Sam deployment. Fig. 6: Comparison of glider-estimated and algorithm-estimated W-E (\(u\), upper) and N-S (\(v\), lower) flow velocities for the 2022 Franklin deployment. Fig. 7: Comparison of estimated glider speed (red) and normal speed range (green) for the 2022 Franklin deployment. Fig. 16(b). Heading changes during this time also varied by over \(100^{\circ}\) as shown in Fig. 16(c). This abnormal roll persisted even though pitch and heading returns to normal afterwards. Upon recovery, gouges resembling teeth marks are evident on the aft hull and science bay as shown in Fig. 17(a). The arc of the marks span approximately 9 inches. The chord between the top and bottom ends of the aft hull markings is approximately 7.5 inches. The netting on the hull was cut in numerous areas, suggesting a serious shark strike. It is highly hypothesized that the bent starboard wing in Fig. 17(b) is caused by the shark strike. Based on the DBD data, the estimated trajectory is close to the true trajectory as shown in Fig. 19. From quantitative analysis in Fig. 20, the maximum CLLE \(1.1m\) is small enough to conclude the CLLE convergence. We follow the same process of evaluating flow estimation error in Section III-B1. As shown in Fig. 21, the small flow estimation error suggests that the detection result can be trusted. As shown in Fig. 22, the estimated glider speed dropped out of the normal speed range (green dot line) at around November 12, 2021, 22:00 UTC, followed by radical speed changes that match the persistent roll change in the glider team's report. 
The timestamp when the anomaly is detected by the algorithm corresponds to the timestamp hypothesised by the glider team. #### V-B4 Usf-Stella After performing hindcast analysis of the ground truth data as shown in Fig. 23, the glider team was certain that USF-Stella takes several hits during the deployment. At some point, the strike was serious enough that one of the wing support rails are broken, as shown in Fig. 24. Based on the DBD data, the algorithm-estimated trajectory is close to the true trajectory, as shown in Fig. 25. From quantitative analysis in Fig. 26, the maximum CLLE \(45m\) is small enough to conclude the CLLE convergence. We follow the same process of evaluating flow estimation error in Section III-B1. As shown in Fig. 27, the small flow estimation error suggests that the detection result can be trusted. As shown in Fig. 28, the estimated glider speed dropped out of the normal speed range (green dot line) at Fig. 11: Comparison of glider-estimated and algorithm-estimated W-E (\(u\), upper) and N-S (\(v\), lower) flow velocities for the 2023 USF-Sam deployment. Fig. 12: Comparison of estimated glider speed (red) and normal speed range (green) for the 2023 USF-Sam deployment. Fig. 10: CLLE (m) for the 2023 USF-Sam deployment. Fig. 9: Comparison of the estimated (blue) and true (red) trajectory for the 2023 USF-Sam deployment. The four green circles are the four basis functions covering the whole trajectory. around April 02, 2023, 15:00 UTC. The anomaly timestamp detected by the algorithm corresponds to the timestamp in the glider team's report. ## IV Conclusion In this paper, we apply an anomaly detection algorithm to four real glider missions supported by the Skidaway Institute of Oceanography in the University of Georgia and the University of South Florida. On one side of generality, the algorithm is capable of detecting anomalies like remora attachment and shark hit in diverse real-world deployments based on high-resolution DBD data. On the other side of real-time performance, we simulate the online detection on subsetted SBD data. It utilizes generic data of glider trajectory and heading angle to estimate glider speed and flow speed. Anomalies can be identified by comparing the estimated glider speed with the normal speed range. False alarms can be minimized by comparing the algorithm-estimated flow speed with the glider-estimated flow speed. The algorithm achieves real-time estimation through a model-based framework by continuously updating estimates based on ongoing deployment feedback. Future work will enhance estimation accuracy by incorporating large amount of glider data into a data-driven framework. It is also worth taking into account the impact of the anomaly on the estimated flow speed, aiding in the process of determining false alarms.
2303.17956
Ensemble Methods for Multi-Organ Segmentation in CT Series
In the medical images field, semantic segmentation is one of the most important, yet difficult and time-consuming tasks to be performed by physicians. Thanks to the recent advancement in the Deep Learning models regarding Computer Vision, the promise to automate this kind of task is getting more and more realistic. However, many problems are still to be solved, like the scarce availability of data and the difficulty to extend the efficiency of highly specialised models to general scenarios. Organs at risk segmentation for radiotherapy treatment planning falls in this category, as the limited data available negatively affects the possibility to develop general-purpose models; in this work, we focus on the possibility to solve this problem by presenting three types of ensembles of single-organ models able to produce multi-organ masks exploiting the different specialisations of their components. The results obtained are promising and prove that this is a possible solution to finding efficient multi-organ segmentation methods.
Leonardo Crespi, Paolo Roncaglioni, Damiano Dei, Ciro Franzese, Nicola Lambri, Daniele Loiacono, Pietro Mancosu, Marta Scorsetti
2023-03-31T10:37:19Z
http://arxiv.org/abs/2303.17956v1
# Ensemble Methods for Multi-Organ Segmentation ###### Abstract In the medical images field, semantic segmentation is one of the most important, yet difficult and time-consuming tasks to be performed by physicians. Thanks to the recent advancement in the Deep Learning models regarding Computer Vision, the promise to automate this kind of task is getting more and more realistic. However, many problems are still to be solved, like the scarce availability of data and the difficulty to extend the efficiency of highly specialised models to general scenarios. Organs at risk segmentation for radiotherapy treatment planning falls in this category, as the limited data available negatively affects the possibility to develop general-purpose models; in this work, we focus on the possibility to solve this problem by presenting three types of ensembles of single-organ models able to produce multi-organ masks exploiting the different specialisations of their components. The results obtained are promising and prove that this is a possible solution to finding efficient multi-organ segmentation methods. Deep Learning, Medical Imaging, Semantic Segmentation ## I Introduction and Related Works Radiation therapy (RT) is a prevalent treatment for various types of cancer. It involves administering high doses of radiation to eliminate cancer cells and shrink tumors. When devising an RT treatment plan, it is crucial to deliver an appropriate dose distribution to tumoral targets while minimizing exposure to nearby organs, also known as organs-at-risk (OARs). Hence, accurate delineation and segmentation of OARs are essential for effective treatment planning, a process that is both time-consuming and error-prone for human experts. Recent advancements in the field of Deep Learning suggest that it is feasible to automate this segmentation process, commonly referred to as _Semantic Segmentation_ in the Machine Learning literature [1]. Significant progress has been made in semantic segmentation of medical imaging over the past few years. While atlas-based solutions remain widely used in commercial environments, state-of-the-art approaches now typically employ deep learning techniques such as auto-encoders (AE), convolutional neural networks (CNN), fully convolutional networks (FCN), generative adversarial networks (GAN), and regional-CNN [2, 3]. A major challenge when applying Deep Learning to medical imaging segmentation is the limited availability of annotated data. Specifically, when focusing on OARs segmentation for whole-body treatments like total marrow irradiation [4], acquiring a substantial amount of training data with annotations for all organs to segment is difficult [2, 3]. A promising approach to address this issue involves using several smaller datasets, which include annotations for only one or a few of the OARs to segment, to train multiple single-organ segmentation models that can be merged later for multi-organ segmentation. Additionally, this approach allows for different data preprocessing (e.g., applying a unique look-up table) for each model and, therefore, for each OAR. In this paper, we explore this research direction, inspired by multi-modal segmentation architectures [5], in which the segmentation of different medical imaging acquisition modes (e.g., CT scans and MRI) are combined. Ensemble learning, in general, aims to combine several models to improve individual performances. 
Accordingly, previous works in the literature [6, 7] investigated the application of ensemble methods to achieve better performance than single models. However, only a few works have focused on employing ensembles to address the lack of annotated data in multi-organ segmentation. The most notable example is the work of Fang et al. [8], who proposed an ensemble approach to train a model on multiple datasets with partial annotations. In this study, instead, we concentrate on exploring three ensembling strategies to combine previously trained single-organ segmentation models for multi-organ segmentation tasks on CT series. Our experimental analysis includes comparing different data sources and architectures (Unet, SE-ResUnet, and DeepLabV3). Our results are promising and indicate that ensemble methods can generally outperform or, at the very least, achieve similar performance to both multi-organ segmentation models and the single models used to construct them, while also reducing development effort. ## II Proposed Solutions Three different methods have been developed and tested. The leitmotiv of this work is to use multiple binary segmentation models trained on single organs in ensembles, in order to exploit the previously acquired high specialisation to tackle a broader problem. Inspiration for the methods described in this section has often been drawn from multimodal multi-class segmentation scenarios [5], with the difference that single-organ models have been used. ### _Binary models_ As mentioned, the basic components of the methods proposed are binary models, each trained and specialised in the segmentation of one of six organs (right and left lung, heart, trachea, esophagus, spinal cord), the ones available in the StructSeg dataset (further described in III-A). Three different binary models for each organ have been trained, using some of the most popular architectures from the literature: * **U-Net**: introduced in [9], is one of the most popular semantic segmentation architectures; it has a symmetric structure featuring an encoder, which downsamples the input extracting condensed features which represent the original data, and a decoder, responsible to upsample these features to create the desired output-the binary segmentation mask, in this case; skipped connections link the corresponding levels of downsampling-upsampling, respectively, of the encoder and the decoder so to reduce information loss and retain the original image structure. * **SE-ResNet**: described in [10] and inspired by ResUNet [11] (which is a modification of U-Net using residual blocks [12]), employs squeeze-and-excitation (SE) [13] blocks to enable the network to perform internal channel-wise features recalibration. It has been proven to be valuable in similar scenarios. * **DeepLabV3**: is another state-of-the-art architecture for semantic segmentation presented in [14]; once again, the network is composed by an encoder and a decoder, even though this architecture doesn't stem from the original U-Net; it relies, in the encoding part, on Spatial Pyramid Pooling [15], which allows pooling while keeping the same spatial level feature together, and Atrous Convolutions [15], able to gather information from distant pixels in the image, while also lightening the training process. 
### _Baseline: Argmax ensemble_ The first and simplest ensemble method relies on a heterogeneous pool of models only and consists in assigning to each pixel of the resulting image the prediction from the most confident positive model above a certain threshold. The operation is an _argmax_ on C classes for each \((x,y)\) pixel on the set of the masks M: \[\underset{C}{argmax}M_{c}(x,y) \tag{1}\] The outputs of single models are stacked, then a sigmoid function is applied, it is thresholded, and the argmax is computed; the method requires no other computational effort than the forward pass in the networks and the result is a multi-class segmentation mask. The natural consequence is that the ensemble is strictly as good as the single models involved, as no further training is required. This ensemble method, together with multiclass single models, has been used as a baseline for the experiments. ### _Logits Convolution_ The second method exploits the logits of the single models. It still relies heavily on their performance, but in this case, supplementary training is necessary: a 1x1 Convolutional layer combines the concatenated outputs of the binary models, reducing the feature maps to the number of labels while learning how to weight the single model, which might give it an edge on the simpler argmax; in the worst case scenario, the expected result is that the added layer learns to take the maximum value. Also, it is possible to analyse the weight assigned to a certain model for a single class, understanding more about the decision process. After the convolution, the resulting feature maps are thresholded and the final output is a multi-label mask. In the first row of 1, this method is schematised. ### _Meta U-Net_ Multiple authors proposed with success cascaded models for segmentation of ROIs in the medical imaging field; in [16], a method using two CNN put in sequence is shown, with the first extracting a piece of rougher information, like the bounding box containing the ROI, while the second actually segments the target from within the bounding box; it obtained very promising results on CT images, chest x-rays, and retinal images. Similarly, but for a different application, in [17] the authors developed two cascaded fully convolutional networks to segment the liver first, and subsequently, lesions in it looking in the liver area only, all from CT scans. In this other work by Roth et al., [18], the authors propose a two-step process using CNNs to pass from a coarse segmentation to a finer one, using the input of the upper-level net to expand the information given to the second. In general, methods using a two steps system proved to be rather successful in the field; this third method stems from this consideration and uses the binary models up to the logits once again: the logits maps are Fig. 1: Schemes representing the three methods used to ensemble binary models. The coloured blocks represent the downsampling and upsampling steps, common in the networks specialised in segmentation, and do not have any intention of precisely representing the models used but rather of showing intuitively how the ensembles are assembled. stacked and fed to a second model, the meta-model, which is then trained to learn the actual segmentation map from them; being a segmentation problem, the meta-model of choice, in this case, is a U-Net [9], which is able to take the concatenated feature maps as input. The second row in figure 1 shows a scheme of the architecture. 
An important remark is that during the training of the meta-model, the weights of the previous one are not updated. ### _Layer fusion_ In the last method, binary models are merged in their last layers before the logits, obtaining a more complex single model with multiple input branches. For what concerns U-Net and SE-ResUnet nets, the last layer is the one following the last transposed convolution/upsampling, right before the final 1x1 convolution; in DeepLabV3, the last upsampling layer has been kept, in order to have coherent features that can be concatenated. Hence, as shown in the third row of figure 1, features from different binary models are concatenated, so that one would get 64 feature maps from U-Net and SE-ResUnet and 256 from DeepLabV3. Contrary to the previous methods, the weights for this last concatenated layers are not frozen during the training of the remainder of the model. A 1x1 convolutional layer follows the concatenation, similarly as before, to produce a number of logit maps corresponding to the number of classes; thresholding is then applied to obtain the final multilabel mask. ## III Experiments and Results In order to assess the capabilities of the ensembles proposed in this work, we carried out multiple experiments with the objective to simulate multiple scenarios that are likely to arise in the real world; this is aimed to provide a comprehensive overview of the strength and weaknesses of such an approach and also to highlight where this system may be useful and where this may not. ### _Datasets_ The data used to build and test models come from two popular public datasets specifically built for segmentation tasks in the context of two different challenges: _Structseg_, from the _Automatic Structure Segmentation for Radiotherapy Planning Challenge_, part of MICCAI 2019, and SegTHOR, from the challenge 4 of the IEEE International Symposium on Biomedical Imaging [19]. * **StructSeg**: a collection of 50 thoracic CT scans from 50 different patients, annotated by expert oncologists with six labels representing six OARs during radiotherapy treatments for lung cancer (right and left lung, heart, trachea, esophagus, spinal cord). The CT scans come in a size of 512 x 512 pixels and their in-plane resolution is 1.2 mm x 1.2 mm. The number of slices in the scans varies between 80 to 127, and the z-resolution is 5 mm. * **SegTHOR**: thoracic CT scans of patients affected by lung cancer or Hodgkin's Lymphoma, used to prepare radiotherapy treatments. The dataset contains 40 scans with 4 OARs annotated by an expert: heart, esophagus, trachea, and aorta. The CT scans come in a size of 512 x 512 pixels and their in-plane resolution varies between 0.90 mm and 1.37 mm per pixel, depending on the patient. The number of slices in the scans varies between 150 to 284, and the z-resolution lies between 2 mm and 3.7 mm. ### _Preprocessing and Data Augmentation_ In DL, it is common to use some light preprocessing steps in order to prepare the data for training, which will then be repeated for every data used at inference time and whenever the model is used again in the future; the aim is to make the data coherent while also improving the training process and possibly the results. 
Data augmentation is usually necessary on small datasets, as it has been demonstrated that training on a larger sample size yields better final results [20] and makes models more robust to variations; similarly to preprocessing, some common techniques are adopted to perform it, taking care to avoid introducing bias as much as possible. The following techniques have been used: * **Lookup table (LUT)**: a technique used to preprocess CT images to enhance their visualization and facilitate diagnosis. CT imaging involves acquiring a series of X-ray images of the body from different angles, which are then reconstructed into a 3D image using computer algorithms. The resulting CT images are often displayed in grayscale, where each pixel's intensity represents the attenuation of the X-rays passing through the corresponding part of the body. The range of these intensities, expressed in Hounsfield Units, a dimensionless unit universally used in CT imaging, is [-1024, 3071] [21], hence 12 bits of information. Windowing the range of values represented in an image onto a smaller range, compressed to 8 bits, makes it easier to highlight a specific organ structure [22]. The windows used, expressed as width (WW) and level (WL), are summarised in table I. * **Normalisation**: values with a specific physical meaning, particularly in medical imaging, can be affected by the equipment or by different acquisition conditions, posing the risk of affecting the results, especially when data come from sources different from the training set. To avoid this, pixels are normalised to the fixed range [0, 1]. * **Center cropping**: used to speed up and lighten the training process, because the informative and relevant area of the slices is limited to the central portion, so analysing the whole 512x512 slice is unnecessary; the optimal crop size that keeps all the relevant parts has been set to 320x320 on both datasets. * **Geometric transformations**: grid distortion, random rotations, and elastic transformations have been used for data augmentation; these methods allow nondestructive manipulations of the original data and increase the sample size for training. They are not applied to the test set, and each transformation is applied to both the image and the mask so that the pixels still correspond. ### _Training_ Training has been performed in two steps: single models first, and ensembles later. One binary model with each of the architectures considered in II-A has been trained on each OAR in a supervised fashion, where the only considered mask is the one corresponding to that OAR. To train the ensembles, the parameters of the binary models have been kept frozen and only the ensemble parameters have been updated, as specified in section II. A composite loss function, comprising a Dice Score (DS) loss term [23] and a Cross-entropy (CE) loss term, equally weighted, has been used, as suggested in [24], to take advantage of the characteristics of both. Non-uniform weights for the different labels in the CE term, as used in the literature [10], have also been tested, but they did not yield any advantage, so uniform weights were kept. An adaptive optimiser (Adam, [25]) is the choice for the gradient descent algorithm.
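As an illustration of this objective, a minimal PyTorch sketch of the equally weighted Dice + cross-entropy loss for one binary model is given below; the smoothing constant and the 0.5/0.5 weighting are illustrative choices, not values taken from the original implementation.

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1e-6):
    # soft Dice loss on the sigmoid probabilities of a single-channel output (B, 1, H, W)
    probs = torch.sigmoid(logits)
    intersection = (probs * target).sum(dim=(2, 3))
    union = probs.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    dice = (2 * intersection + eps) / (union + eps)
    return 1.0 - dice.mean()

def composite_loss(logits, target):
    # equally weighted Dice + cross-entropy terms (binary cross-entropy
    # for the single-organ models considered here)
    ce = F.binary_cross_entropy_with_logits(logits, target)
    return 0.5 * dice_loss(logits, target) + 0.5 * ce
```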
The early stopping technique [26] helped improve the efficiency of the process, together with a progressive decrease of the learning rate (starting at \(10^{-3}\) and decreased exponentially as in [27]), with reductions on plateau phases encountered in the validation loss curve, which is computed through a moving average to smooth the plot and take past steps into consideration. Pytorch 1.10 [28] is the framework used to build and train the models in Python, while the computational duties have been carried out on a machine equipped with an NVIDIA Quadro P4000 GPU with 8GB of memory for the binary models, and with an NVIDIA GeForce GTX 1060 with 6GB of memory for the ensembles. ### _Evaluation metrics_ Two main metrics have been chosen to evaluate the performance of the models in this work: the _Dice Similarity Coefficient_ (DSC) and the _Hausdorff Distance 95_ (HD95). The former [29] evaluates how much the predicted mask overlaps with the GT; it assumes values in the range [0, 1], where 0 means no overlap and 1 identifies a perfect match. The HD95, in contrast, measures the distance between the two sets of points, highlighting how close the predicted mask is to the GT; the 95 indicates that the 95th percentile is considered for this metric, avoiding possible outliers. These two metrics are fairly common in segmentation works and provide a meaningful representation of the performance. ### _Experiments_ Five different experiments have been run in order to assess the capabilities of the different ensemble methods described in the paper. The ensembles have been assembled using only the best binary network for each organ, according to the validation loss during training; the choices have been: _SE-ResUNet_ for left and right lung, esophagus, and spinal cord, and _DeepLabv3_ for heart and trachea. _U-Net_ did not perform best in any of the binary tasks considered. #### Baseline The Argmax ensemble has been used as the ensemble baseline because it represents the most confident output of the binary networks and would be the same as always using the best binary network to segment each organ. In order to broaden the comparison pool, multiclass networks have also been trained, one for each architecture considered; the process described in III-C still applies to them. The results obtained are shown in table II; it is evident that the argmax ensemble has a clear edge over all the multiclass networks in both DSC and HD95. #### Experiment 1 - Full Dataset The first experiment consists of the straightforward training of the ensembles, followed by inference on the test dataset. The average results over the labels are shown in table III. Looking at the DSC, no significant differences are noticeable with respect to the baseline, with a performance that is actually worse for the Meta U-Net by one percentage point, despite it showing the best precision. All the ensembles, in particular Layers Fusion and Logits Convolution, are however able to improve the HD95 score, meaning that the predictions are closer, shape-wise, to the GT than the baseline. #### Experiment 2 - Redundant branches Considering the single-label scores (visible in the supplementary material for space reasons), it is evident that some organs are more difficult to segment than others: in particular, trachea and esophagus show the worst results both in terms of DSC and HD95.
This is possibly due to the fact that these two organs are rather small compared to the others, so the positive/negative pixel ratio is quite disadvantageous and the models struggle to correctly identify them. This second experiment tests the capability of the ensembles to improve on these targets by adding a second binary network specialised in the segmentation of these OARs: a supplementary DeepLabV3 for the esophagus and a supplementary U-Net for the trachea are added. A clear improvement (table IV) is shown in both DSC and HD95 for all the ensembles, which now consistently score better than the baseline on the most relevant scores. Among the ensembles, the meta-model is still the worst, possibly due to a more difficult training process. #### Experiment 3 - Differentiated sources In a real-world scenario it might happen that models are available but were trained on data different from the data at hand; transfer learning is of course an option, at least to lightly update the weights with some samples from the available data, but it is not always viable and it does not guarantee reliable results. Ensembles might help to solve this problem, and this experiment aims to simulate such a scenario: binary models for heart, lungs and spinal cord have been trained on the StructSeg dataset only, while those for trachea and esophagus on SegTHOR, using the most successful architectures of the previous experiments. The results obtained from the ensembles, then trained on both datasets together, are shown in table V and are largely underwhelming, with consistently and significantly worse scores in all the categories with respect to the other scenarios; however, the Layer Fusion actually scores significantly better than the argmax ensemble obtained with these same models, proving to be a more viable approach. The general performance drop was to be expected, first of all because of the reduced sample size, which cripples the performance of the binary networks, and because generalisation to new data is inherently worse for the same reason. #### Experiment 4 - More redundancy In order to study the behaviour of the different ensemble methods with a greater diversity of models, in the fourth experiment a multiclass network (a multiclass _DeepLabV3_) has been included together with the binary networks, expecting a better overall performance because of the richer pool of segmentators. Training and testing have been carried out in the same way as for experiments 1 and 2; this yields the best results yet in terms of HD95 with the Layers Fusion, while the DSC is on par with the one from experiment 2, meaning that the addition of a multi-purpose network to an ensemble of specialised ones leads to only a slightly improved performance. However, in a real-case scenario this approach might be more difficult because usually not enough data is available to train both a multiclass network and the ensemble, so the increased burden necessary to add it might not be worth the performance increment.
#### Experiment 5 - Data scarcity The last experiment tries to emulate a scenario, likely to occur in the real world, where the data available to train the ensemble is even scarcer than what is provided by these challenges; using the binary models of experiment 1, under the hypothesis that pre-trained models are available, the training set for the ensembles has been reduced to 20% of the initial one. The trend is the same as in the previous tests, with a surprisingly small drop in performance: still on par with the baseline in DSC but superior in HD95. The Logits Convolution is the best-performing ensemble, closely followed by the Meta U-Net, which in this scenario unexpectedly seems able to get the most out of the reduced sample size. In general, from the available results, the ensembles appear superior to the baseline, considering both the argmax ensemble and the multiclass networks, and provide a powerful and promising tool for combining highly specialised single models into ensembles for multi-label segmentation, so that the whole becomes more than the sum of its parts. The solutions shown also seem robust to variations in the sample size and consistently score better as more specialised models become available. The most obvious drawback of such an approach is that binary models specialised in the necessary tasks have to be available, which is not always the case; however, thanks to the increasing diffusion of public challenges and datasets and the efforts to share weights and models through public repositories, this is very likely to improve, to the point where it will be more beneficial to combine existing models than to develop new ones. ## IV Conclusion In this study, we introduced ensembles of various Deep Neural Networks, each specialized in the segmentation of a single organ, with the goal of producing multi-organ segmentation masks. We tested three different methods inspired by multimodal segmentation ensembles and conducted several experiments to evaluate their strengths and weaknesses in different scenarios, attempting to mimic realistic real-world situations. The data used for training and testing the models and ensembles were sourced from two popular public challenges. The results obtained are promising, as they suggest that the proposed solutions could be valuable systems for multi-organ segmentation tasks in situations where binary models specialized in single organs are available, and the data is not suitable for training high-performing networks from scratch. Nevertheless, more in-depth research is required, involving testing different compositions of the ensembles, introducing additional data sources, and considering other anatomical regions and organs, to determine whether these methods can genuinely aid real-world clinical practice. Furthermore, it is essential to test better-performing models, potentially already available within the community, to fully assess the realistic applicability of such an approach.
2309.12875
A second-order in time, BGN-based parametric finite element method for geometric flows of curves
Over the last two decades, the field of geometric curve evolutions has attracted significant attention from scientific computing. One of the most popular numerical methods for solving geometric flows is the so-called BGN scheme, which was proposed by Barrett, Garcke, and N\"urnberg (J. Comput. Phys., 222 (2007), pp.~441--467), due to its favorable properties (e.g., its computational efficiency and the good mesh property). However, the BGN scheme is limited to first-order accuracy in time, and how to develop a higher-order numerical scheme is challenging. In this paper, we propose a fully discrete, temporal second-order parametric finite element method, which integrates with two different mesh regularization techniques, for solving geometric flows of curves. The scheme is constructed based on the BGN formulation and a semi-implicit Crank-Nicolson leap-frog time stepping discretization as well as a linear finite element approximation in space. More importantly, we point out that the shape metrics, such as manifold distance and Hausdorff distance, instead of function norms, should be employed to measure numerical errors. Extensive numerical experiments demonstrate that the proposed BGN-based scheme is second-order accurate in time in terms of shape metrics. Moreover, by employing the classical BGN scheme as mesh regularization techniques, our proposed second-order schemes exhibit good properties with respect to the mesh distribution. In addition, an unconditional interlaced energy stability property is obtained for one of the mesh regularization techniques.
Wei Jiang, Chunmei Su, Ganghui Zhang
2023-09-22T14:00:40Z
http://arxiv.org/abs/2309.12875v2
# A second-order in time, BGN-based parametric finite element method for geometric flows of curves ###### Abstract Over the last two decades, the field of geometric curve evolutions has attracted significant attention from scientific computing. One of the most popular numerical methods for solving geometric flows is the so-called BGN scheme, which was proposed by Barrett, Garcke, and Nurnberg (J. Comput. Phys., 222 (2007), pp. 441-467), due to its favorable properties (e.g., its computational efficiency and the good mesh property). However, the BGN scheme is limited to first-order accuracy in time, and how to develop a higher-order numerical scheme is challenging. In this paper, we propose a fully discrete, temporal second-order parametric finite element method, which incorporates a mesh regularization technique when necessary, for solving geometric flows of curves. The scheme is constructed based on the BGN formulation and a semi-implicit Crank-Nicolson leap-frog time stepping discretization as well as a linear finite element approximation in space. More importantly, we point out that the shape metrics, such as manifold distance and Hausdorff distance, instead of function norms, should be employed to measure numerical errors. Extensive numerical experiments demonstrate that the proposed BGN-based scheme is second-order accurate in time in terms of shape metrics. Moreover, by employing the classical BGN scheme as a mesh regularization technique when necessary, our proposed second-order scheme exhibits good properties with respect to the mesh distribution. keywords: Parametric finite element method, geometric flow, shape metrics, BGN scheme, high-order in time. + Footnote †: journal: Computer Physics Communications ## 1 Introduction Geometric flows, which describe the evolution of curves or surfaces over time based on the principle that the shape changes according to its underlying geometric properties, such as the curvature, have been extensively studied in the fields of computational geometry and geometric analysis. In particular, second-order (e.g., mean curvature flow, which is also called as curve-shortening flow for curve evolution) and fourth-order (e.g., surface diffusion flow) geometric flows have attracted considerable interest due to their wide-ranging applications in materials science [6; 31], image processing [1], multiphase fluids [21] and cell biology [13]. For more in-depth information, readers can refer to the recent review articles [14; 17], and references provided therein. In this paper, we focus on three different types of geometric flows of curves: curve-shortening flow (CSF), area-preserving curve-shortening flow (AP-CSF) and surface diffusion flow (SDF). First, assume that \(\Gamma(t)\) is a family of simple closed curves in the two-dimensional plane. We consider that the curve is governed by the three geometric flows, i.e., its velocity is respectively given by \[\mathcal{V}=\left\{\begin{array}{ll}-\kappa\mathbf{n},&\text{CSF},\\ (-\kappa+\langle\kappa\rangle)\mathbf{n},&\text{AP-CSF},\\ (\partial_{ss}\kappa)\mathbf{n},&\text{SDF},\end{array}\right. \tag{1.1}\] where \(\kappa\) is the curvature of the curve, \(s\) is the arc-length, \(\langle\kappa\rangle:=\int_{\Gamma(t)}\kappa\mathrm{d}s/\int_{\Gamma(t)}1 \mathrm{d}s\) is the average curvature and \(\mathbf{n}\) is the outward unit normal to \(\Gamma\). Here, we use the sign convention that a unit circle has a positive constant curvature. 
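As a quick consistency check of this sign convention, consider a circle of radius \(R(t)\) centered at the origin, for which \(\kappa=1/R(t)\) and the outward normal velocity is \(\dot{R}(t)\). Then (1.1) gives \[\text{CSF:}\;\;\dot{R}=-\frac{1}{R}\;\Longrightarrow\;R(t)=\sqrt{R(0)^{2}-2t},\qquad\text{AP-CSF:}\;\;\dot{R}=-\kappa+\langle\kappa\rangle=0,\qquad\text{SDF:}\;\;\dot{R}=\partial_{ss}\kappa=0,\] so a circle shrinks and vanishes at \(t=R(0)^{2}/2\) under the CSF (this is exactly the true solution used for the convergence tests in Section 5.1), while circles are steady states of the AP-CSF and SDF.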
By representing the curves \(\Gamma(t)\) as a parametrization \(\mathbf{X}(\cdot,t):\mathbb{I}\to\mathbb{R}^{2}\), where \(\mathbb{I}:=\mathbb{R}/\mathbb{Z}\) is the "periodic" interval \([0,1]\), Barrett, Garcke and Nurnberg [10; 14] creatively reformulated the above equations (1.1) into the following coupled forms: \[\partial_{t}\mathbf{X}\cdot\mathbf{n} =\left\{\begin{array}{ll}-\kappa,&\text{CSF},\\ -\kappa+\left\langle\kappa\right\rangle,&\text{AP-CSF},\\ \partial_{ss}\kappa,&\text{SDF},\end{array}\right. \tag{1.2}\] \[\kappa\mathbf{n} =-\partial_{ss}\mathbf{X}.\] Based on the above equations and the corresponding weak formulations, a series of numerical schemes (the so-called BGN schemes) were proposed for solving different geometric flows, such as mean curvature flow and surface diffusion [10; 11], Willmore flow [13], anisotropic geometric flow [5], solid-state dewetting [6; 31] and geometric flow for surface evolution [12]. Recently, based on the BGN formulation (1.2), structure-preserving schemes have been proposed for axisymmetric geometric equations [4] and surface diffusion [5; 7], respectively. In practical simulations, ample numerical results have demonstrated the high performance of the BGN scheme, due to inheriting the variational structure of the original problem and introducing an appropriate tangential velocity to help mesh points maintain a good distribution. However, for the original BGN scheme, because its formal truncation error is \(\mathcal{O}(\tau)\), where \(\tau\) is the time step size, the temporal convergence order of the scheme is limited to the first-order. This has been confirmed by extensive numerical experiments [6; 7; 10; 11]. Therefore, how to design a temporal high-order scheme which is based on the BGN formulation (1.2) is challenging and still open. It is also worth noting that rigorous numerical analysis for BGN schemes remains an open problem [14]. In this paper, based on the BGN formulation (1.2), we propose a novel temporal second-order parametric finite element method for solving geometric flows of curves, i.e., CSF, AP-CSF and SDF. Specifically, to discretize the same continuous-in-time semi-discrete formulation as the classical BGN scheme [10], we begin by fixing the unit normal as that on the current curve \(\Gamma^{m}\) and then discretize other terms using the Crank-Nicolson leap-frog scheme [22]. The resulting scheme is a second-order semi-implicit scheme, which only requires solving a system of linear algebraic equations at each time step. Furthermore, the well-posedness and mild energy stability of the fully discrete scheme can be established under suitable assumption conditions. Numerical results have demonstrated that the proposed scheme achieves second-order accuracy in time, as measured by the shape metrics, outperforming the classical BGN scheme in terms of accuracy and efficiency. It is worth mentioning that there exist several temporal higher-order numerical schemes based on other formulations which have been proposed for simulating geometric flows. For the specific case of curve-shortening flow, a Crank-Nicolson-type scheme combined with tangential redistribution [3] and an adaptive moving mesh method [28] have been developed. Both of the schemes are convergent quadratically in time and fully implicit, requiring to solve a system of nonlinear equations at each time step. 
Recently, an evolving surface finite element method together with linearly implicit backward difference formulae for time integration for simulating the mean curvature flow has been proposed in [26, 27]. In comparison to these existing approaches, our newly proposed scheme is based on the BGN formulation (1.2), then it inherits the variational structure of the original geometric flows, and has very good property with respect to mesh distribution. The new scheme exhibits comparable computational cost to the classical BGN scheme while surpassing it in terms of accuracy. Furthermore, it can be extended easily to other geometric flows with applications to various fields. The main reason why we have successfully proposed a temporal high-order, BGN-based parametric finite element method for solving geometric flows lies in the following two key points: (1). we choose an appropriate metric (i.e., shape metrics) to measure numerical errors of the proposed schemes; (2). we use the classical first-order BGN scheme as "a good partner" of the proposed scheme to help mesh points maintain a good distribution without sacrificing the accuracy. How to measure the errors of numerical solutions for geometric flows is an important issue. A natural approach is to use classical Sobolev norms, such as \(L^{2}\)-norm, \(L^{\infty}\)-norm or \(H^{1}\)-norm, which are widely used in the numerical analysis for geometric flows [18, 19, 26, 27]. However, when it comes to numerical schemes that involve in tangential movements, these function norms may not be suitable for quantifying the differences between two curves/surfaces. To address this issue, we consider an alternative approach using shape metrics, such as manifold distance (as used in [7, 32]) and Hausdorff distance [2]. These metrics provide a measure of how similar or different two curves/surfaces are in terms of their shape characteristics. Extensive numerical experiments have been conducted, and the results demonstrate that our proposed scheme achieves second-order accuracy when measured using shape metrics. On the other hand, the quality of mesh distribution is always a major concern when simulating geometric flows using parametric finite element methods. It is important to note that the original flow (1.1) requires the curve to evolve only in the normal direction, thus the numerical methods based on (1.1) which prevent tangential movement of mesh points might lead to mesh distortion or clustering during the evolution. To address this issue, various approaches have been proposed in the literature to maintain good mesh quality, e.g., artificial mesh regularization method [15], reparametrization by introducing a tangential velocity [16, 20, 25, 29, 30]. On the contrary, the BGN formulation (1.2) does not enforce any condition on the tangential velocity, which allows for an intrinsic tangential motion of mesh points, as demonstrated by the standard BGN scheme [10, 11] constructed based on this formulation (1.2). Though the semi-discrete scheme of (1.2), where only spatial discretization is performed, results in precise equidistribution of mesh points, our proposed fully discrete second-order BGN-based scheme exhibits oscillations in terms of mesh ratio and other geometric quantities, which may lead to instability in certain situations. To overcome this problem, we employ the classical first-order BGN scheme as a mesh regularization procedure to improve mesh quality once poorly distributed polygonal approximations are observed. 
Extensive numerical experiments indicate that this mesh regularization remedy enhances the stability of the new scheme and improves mesh quality significantly. Fortunately, numerous numerical experiments have also demonstrated that this mesh regularization only occurs a few times during the evolution, thus not compromising the temporal second-order accuracy of the proposed scheme. The remainder of the paper is organized as follows. In Section 2, taking CSF as an example, we begin by recalling the standard BGN scheme, and then propose a second-order in time, BGN-based parametric finite element method for solving CSF. Section 3 is devoted to explaining the importance of using shape metrics, such as manifold distance and Hausdorff distance, to accurately measure the errors of two curves. We extend the proposed second-order scheme to other geometric flows such as AP-CSF and the fourth-order flow SDF in Section 4. Extensive numerical results are provided to demonstrate the accuracy and efficiency of the proposed schemes in Section 5. Finally, we draw some conclusions in Section 6. ## 2 For curve shortening flow (CSF) In this section, we propose a parametric finite element method with second-order temporal accuracy for numerically solving the CSF. The same idea can be easily extended to other geometric flows (cf. Section 4). To provide a comprehensive understanding, we first review the classical first-order BGN scheme proposed by Barrett, Garcke and Nurnberg [10; 11; 14]. ### Weak formulation and BGN scheme To begin with, we rewrite the CSF into the following formulation as presented in Eqs. (1.2): \[\begin{split}\partial_{t}\mathbf{X}\cdot\mathbf{n}&=-\kappa,\\ \kappa\mathbf{n}&=-\partial_{ss}\mathbf{X}.\end{split} \tag{2.1}\] We introduce the following finite element approximation. Let \(\mathbb{I}=[0,1]=\bigcup_{j=1}^{N}I_{j}\), \(N\geq 3\), be a decomposition of \(\mathbb{I}\) into intervals given by the nodes \(\rho_{j}\), \(I_{j}=[\rho_{j-1},\rho_{j}]\). Let \(h=\max\limits_{1\leq j\leq N}|\rho_{j}-\rho_{j-1}|\) be the maximal length of a grid element. Define the linear finite element space as \[V^{h}:=\{u\in C(\mathbb{I}):u|_{I_{j}}\;\;\text{is}\;\;\text{linear},\;\forall j=1,2,\ldots,N;\quad u(\rho_{0})=u(\rho_{N})\}\subseteq H^{1}(\mathbb{I}).\] The mass lumped inner product \((\cdot,\cdot)_{\Gamma^{h}}^{h}\) over the polygonal curve \(\Gamma^{h}\), which is an approximation of \((\cdot,\cdot)_{\Gamma^{h}}\) by using the composite trapezoidal rule, is defined as \[(u,v)_{\Gamma^{h}}^{h}:=\frac{1}{2}\sum_{j=1}^{N}|\mathbf{X}^{h}(\rho_{j},t)-\mathbf{X}^{h}(\rho_{j-1},t)|\left[(u\cdot v)(\rho_{j}^{-})+(u\cdot v)(\rho_{j-1}^{+})\right],\] where \(u,v\) are two scalar/vector piecewise continuous functions with possible jumps at the nodes \(\{\rho_{j}\}_{j=1}^{N}\), and \(u(\rho_{j}^{\pm})=\lim_{\rho\to\rho_{j}^{\pm}}u(\rho)\). 
Subsequently, the semi-discrete scheme of the formulation (2.1) is as follows: given initial polygon \(\Gamma^{h}(0)\) with vertices lying on the initial curve \(\Gamma(0)\) clockwise, parametrized by \(\mathbf{X}^{h}(\cdot,0)\in[V^{h}]^{2}\), find \((\mathbf{X}^{h}(\cdot,t),\kappa^{h}(\cdot,t))\in[V^{h}]^{2}\times V^{h}\) such that \[\left\{\begin{array}{l}\left(\partial_{t}\mathbf{X}^{h}\cdot\mathbf{n}^{h}, \varphi^{h}\right)_{\Gamma^{h}}^{h}+\left(\kappa^{h},\varphi^{h}\right)_{ \Gamma^{h}}^{h}=0,\quad\forall\ \varphi^{h}\in V^{h},\\ \left(\kappa^{h},\mathbf{n}^{h}\cdot\boldsymbol{\omega}^{h}\right)_{\Gamma^{h }}^{h}-\left(\partial_{s}\mathbf{X}^{h},\partial_{s}\boldsymbol{\omega}^{h} \right)_{\Gamma^{h}}=0,\quad\forall\ \boldsymbol{\omega}^{h}\in[V^{h}]^{2},\end{array}\right. \tag{2.2}\] where we always integrate over the current curve \(\Gamma^{h}\) described by \(\mathbf{X}^{h}\), the outward unit normal \(\mathbf{n}^{h}\) is a piecewise constant vector given by \[\mathbf{n}^{h}|_{I_{j}}=-\frac{\mathbf{h}_{j}^{\perp}}{|\mathbf{h}_{j}|}, \quad\mathbf{h}_{j}=\mathbf{X}^{h}(\rho_{j},t)-\mathbf{X}^{h}(\rho_{j-1},t), \quad j=1,\ldots,N,\] with \(\cdot^{\perp}\) denoting clockwise rotation by \(\frac{\pi}{2}\), and the partial derivative \(\partial_{s}\) is defined piecewisely over each side of the polygon \(\partial_{s}f|_{I_{j}}=\frac{\partial_{\rho}f}{|\partial_{\rho}\mathbf{X}^{h}| }|_{I_{j}}=\frac{(\rho_{j}-\rho_{j-1})\partial_{\rho}f|_{I_{j}}}{|\mathbf{h}_{ j}|}\). It was shown that the scheme (2.2) will always equidistribute the vertices along \(\Gamma^{h}\) for \(t>0\) if they are not locally parallel (see Remark 2.4 in [10]). For a full discretization, we fix \(\tau>0\) as a uniform time step size for simplicity, and let \(\mathbf{X}^{m}\in[V^{h}]^{2}\) and \(\Gamma^{m}\) be the approximations of \(\mathbf{X}(\cdot,t_{m})\) and \(\Gamma(t_{m})\), respectively, for \(m=0,1,2,\ldots\), where \(t_{m}:=m\tau\). We define \(\mathbf{h}_{j}^{m}:=\mathbf{X}^{m}(\rho_{j})-\mathbf{X}^{m}(\rho_{j-1})\) and assume \(|\mathbf{h}_{j}^{m}|>0\) for \(j=1,\ldots,N\), \(\forall\ m>0\). The discrete unit normal vector \(\mathbf{n}^{m}\), the discrete inner product \((\cdot,\cdot)_{\Gamma^{m}}^{h}\) and the discrete operator \(\partial_{s}\) are defined similarly as in the semi-discrete case. Barrett, Garcke and Nurnberg used a formal first-order approximation [10, 11] to replace the velocity \(\partial_{t}\mathbf{X}\), \(\kappa\) and \(\partial_{s}\mathbf{X}\) by \[\partial_{t}\mathbf{X}(\cdot,t_{m}) =\frac{\mathbf{X}(\cdot,t_{m+1})-\mathbf{X}(\cdot,t_{m})}{\tau}+ \mathcal{O}(\tau),\] \[\kappa(\cdot,t_{m}) =\kappa(\cdot,t_{m+1})+\mathcal{O}(\tau),\] \[\partial_{s}\mathbf{X}(\cdot,t_{m}) =\partial_{s}\mathbf{X}(\cdot,t_{m+1})+\mathcal{O}(\tau),\] and the fully discrete semi-implicit BGN scheme (denoted as BGN1 scheme) reads as: (**BGN1, First-order in time BGN scheme for CSF**): For \(m\geq 0\), find \(\mathbf{X}^{m+1}\in[V^{h}]^{2}\) and \(\kappa^{m+1}\in V^{h}\) such that \[\left\{\begin{aligned} &\left(\frac{\mathbf{X}^{m+1}-\mathbf{X}^{m}}{ \tau},\varphi^{h}\mathbf{n}^{m}\right)_{\Gamma^{m}}^{h}+\left(\kappa^{m+1}, \varphi^{h}\right)_{\Gamma^{m}}^{h}=0,\quad\forall\ \varphi^{h}\in V^{h},\\ &\left(\kappa^{m+1},\mathbf{n}^{m}\cdot\mathbf{\omega}^{h}\right)_{ \Gamma^{m}}^{h}-\left(\partial_{s}\mathbf{X}^{m+1},\partial_{s}\mathbf{\omega}^{ h}\right)_{\Gamma^{m}}=0,\quad\forall\ \mathbf{\omega}^{h}\in[V^{h}]^{2}.\end{aligned}\right. 
\tag{2.3}\] The well-posedness and energy stability were established under some mild conditions. In practice, numerous numerical results show that the BGN1 scheme (2.3) converges quadratically in space [11] and linearly in time (cf. Fig. 1 in Section 5.1). ### A second-order in time, BGN-based scheme Instead of using the first-order Euler method, we apply the Crank-Nicolson leap-frog time stepping discretization in (2.2) based on the following simple calculation \[\partial_{t}\mathbf{X}(\cdot,t_{m}) =\frac{\mathbf{X}(\cdot,t_{m+1})-\mathbf{X}(\cdot,t_{m-1})}{2\tau }+\mathcal{O}(\tau^{2}), \tag{2.4}\] \[\kappa(\cdot,t_{m}) =\frac{\kappa(\cdot,t_{m+1})+\kappa(\cdot,t_{m-1})}{2}+\mathcal{ O}(\tau^{2}),\] \[\partial_{s}\mathbf{X}(\cdot,t_{m}) =\frac{\partial_{s}\mathbf{X}(\cdot,t_{m+1})+\partial_{s}\mathbf{ X}(\cdot,t_{m-1})}{2}+\mathcal{O}(\tau^{2}),\] then the corresponding second-order scheme (denoted as BGN2 scheme) is as follows: (**BGN2, Second-order in time BGN-based scheme for CSF**): For \(\mathbf{X}^{0}\in[V^{h}]^{2}\), \(\kappa^{0}\in V^{h}\) and \((\mathbf{X}^{1},\kappa^{1})\in[V^{h}]^{2}\times V^{h}\) which are the appropriate approximations at the time levels \(t_{0}=0\) and \(t_{1}=\tau\), respectively, find \(\textbf{X}^{m+1}\in[V^{h}]^{2}\) and \(\kappa^{m+1}\in V^{h}\) for \(m\geq 1\) such that \[\left\{\begin{aligned} &\left(\frac{\textbf{X}^{m+1}-\textbf{X}^{m-1}} {2\tau},\varphi^{h}\textbf{n}^{m}\right)^{h}_{\Gamma^{m}}+\left(\frac{\kappa^ {m+1}+\kappa^{m-1}}{2},\varphi^{h}\right)^{h}_{\Gamma^{m}}=0,\\ &\left(\frac{\kappa^{m+1}+\kappa^{m-1}}{2},\textbf{n}^{m}\cdot \boldsymbol{\omega}^{h}\right)^{h}_{\Gamma^{m}}-\left(\frac{\partial_{s} \textbf{X}^{m+1}+\partial_{s}\textbf{X}^{m-1}}{2},\partial_{s}\boldsymbol{ \omega}^{h}\right)_{\Gamma^{m}}=0,\end{aligned}\right. \tag{2.5}\] for all \((\varphi^{h},\boldsymbol{\omega}^{h})\in V^{h}\times[V^{h}]^{2}\). The scheme (2.5) is semi-implicit and the computational cost is comparable to that of the BGN1 scheme (2.3). Moreover, as a temporal discretization of the semi-discrete version (2.2), it can be easily derived from (2.4) that the truncation error is of order \(\mathcal{O}(\tau^{2})\). **Remark 2.1**.: _To begin the BGN2 scheme (2.5), we need to first prepare the data \(\kappa^{0}\) and \((\textbf{X}^{1},\kappa^{1})\). 
In practical simulations, this can be easily achieved without sacrificing the accuracy of the scheme by utilizing the standard BGN1 scheme (2.3) to get \((\textbf{X}^{1},\kappa^{1})\), and the following formula of discrete curvature was proposed in [10, Page 461] to prepare \(\kappa^{0}\) (note the the sign convention of the curvature is opposite to [10])_ \[\kappa^{0}=(N_{0}^{\top}N_{0})^{-1}N_{0}^{\top}A_{0}\textbf{X}^{0}, \tag{2.6}\] _where \(N_{0}\) is a \(2N\times N\) matrix, \(\textbf{X}^{0}\) is a \(2N\times 1\) vector and \(A_{0}\) is a \(2N\times 2N\) matrix given by_ \[N_{0} =\begin{pmatrix}(\varphi_{i},(\textbf{n}^{0})^{[1]}\varphi_{j})^{ h}_{\Gamma^{0}}\\ (\varphi_{i},(\textbf{n}^{0})^{[2]}\varphi_{j})^{h}_{\Gamma^{0}}\end{pmatrix},\quad\textbf{X}^{0}=\begin{pmatrix}\textbf{x}^{0}\\ \textbf{y}^{0}\end{pmatrix},\] \[A_{0} =\begin{pmatrix}(\partial_{s}\varphi_{i},\partial_{s}\varphi_{j} )_{\Gamma^{0}}&0\\ 0&(\partial_{s}\varphi_{i},\partial_{s}\varphi_{j})_{\Gamma^{0}}\end{pmatrix},\] _where \(\varphi_{i},1\leq i\leq N\) are the standard Lagrange basis over \(\mathbb{I}\), and \(\textbf{a}^{[1]},\textbf{a}^{[2]}\) are the first and second component of vector \(\textbf{a}\in\mathbb{R}^{2}\), and \(\textbf{x}^{0}_{j}=(\textbf{X}^{0})^{[1]}(\rho_{j})\), \(\textbf{y}^{0}_{j}=(\textbf{X}^{0})^{[2]}(\rho_{j})\) for \(j=1,\ldots,N\). Note that this formula can be derived by solving the finite element approximation of the equation \(\kappa\textbf{n}=-\partial_{ss}\textbf{X}\) and using the least square method. We can summarize the process as Algorithm 2.1, which outlines the steps to prepare the required data \(\kappa^{0}\) and \((\textbf{X}^{1},\kappa^{1})\). Once we have obtained these data, we can directly apply the BGN2 scheme (2.5) to calculate \((\mathbf{X}^{m},\kappa^{m})\), for \(m\geq 2\)._ **Algorithm 2.1**.: _(Preparation for the initial data of BGN2 for CSF)_ _Step_ **0.** _Given the initial curve \(\Gamma(0)\), the number of grid points \(N\) and the time step size \(\tau\). We choose the polygon \(\Gamma^{0}\) with \(N\) vertices lying on \(\Gamma(0)\) such that \(\Gamma^{0}\) is (almost) equidistributed, i.e., each side of the polygon is (nearly) equal in length. We parameterize \(\Gamma^{0}\) with \(\mathbf{X}^{0}\in[V^{h}]^{2}\) and the grid points \(\rho_{j}\) can be determined correspondingly._ _Step_ **1.** _Using \(\mathbf{X}^{0}\) as the input, we compute \(\kappa^{0}\) using the discrete curvature formula (2.6)._ _Step_ **2.** _Using \(\mathbf{X}^{0}\) as the input, we obtain \((\mathbf{X}^{1},\kappa^{1})\) by solving the BGN1 scheme (2.3) for one time step._ **Remark 2.2**.: _When dealing with an initial curve which is not regular, an alternative approach for initialization is to solve the BGN1 scheme twice and start the BGN2 scheme from \(m=2\). Specifically, for given \(\mathbf{X}^{0}\), we can compute \((\mathbf{X}^{1},\kappa^{1})\) and \((\mathbf{X}^{2},\kappa^{2})\), which are the appropriate approximations at time levels \(t_{1}=\tau\) and \(t_{2}=2\tau\), by solving the BGN1 scheme (2.3) twice. These approximations can be used as initial values to implement the BGN2 scheme (2.3) for \(m\geq 2\). For the superiority of this approach, see Fig. 7 in Section 5.3._ Similar to the BGN1 scheme (2.3), we can show the well-posedness of the BGN2 scheme (2.5) under some mild conditions as follows. **Theorem 2.1** (Well-posedness).: _For \(m\geq 0\), we assume that the following two conditions are satisfied:_ 1. 
_There exist at least two vectors in_ \(\{\mathbf{h}_{j}^{m}\}_{j=1}^{N}\) _which are not parallel, i.e.,_ \[\dim\left(\operatorname{Span}\left\{\mathbf{h}_{j}^{m}\right\}_{j=1}^{N} \right)=2.\] 2. _No degenerate vertices exist on_ \(\Gamma^{m}\)_, i.e.,_ \[\min_{1\leq j\leq N}|\mathbf{h}_{j}^{m}|>0.\] _Then the full discretization (2.5) is well-posed, i.e., there exists a unique solution \((\mathbf{X}^{m+1},\kappa^{m+1})\in[V^{h}]^{2}\times V^{h}\) of (2.5)._ Proof.: It suffices to prove the following algebraic system for \((\mathbf{X},\kappa)\in[V^{h}]^{2}\times V^{h}\) has only zero solution, \[\left\{\begin{aligned} &\left(\frac{\mathbf{X}}{\tau},\varphi^{h} \mathbf{n}^{m}\right)_{\Gamma^{m}}^{h}+\left(\kappa,\varphi^{h}\right)_{ \Gamma^{m}}^{h}=0,\quad\forall\ \varphi^{h}\in V^{h},\\ &\left(\kappa,\mathbf{n}^{m}\cdot\mathbf{\omega}^{h}\right)_{\Gamma^ {m}}^{h}-\left(\partial_{s}\mathbf{X},\partial_{s}\mathbf{\omega}^{h}\right)_{ \Gamma^{m}}=0,\quad\forall\ \mathbf{\omega}^{h}\in[V^{h}]^{2}.\end{aligned}\right.\] Indeed, the stiffness matrix is exactly the same as the standard BGN1 scheme (2.3) and thus the same argument in [11, Theorem 2.9] yields the conclusion under the assumptions (1) and (2). Subsequently, we prove the following energy stability property, assuming a mild condition regarding the upper bound of the mesh ratio. Furthermore, the numerical results presented in Section 5 demonstrate that the proposed BGN2 scheme remains stable even for very large time step \(\tau\). **Theorem 2.2** (Mild energy stability).: _Assume that the mesh ratio of \(\mathbf{X}^{m}\) satisfies_ \[\Psi^{m}:=\frac{\max_{j}|\mathbf{h}_{j}^{m}|}{\min_{j}|\mathbf{h}_{j}^{m}|} \leq c, \tag{2.7}\] _where \(c\) is a constant independent of \(\tau\), \(h\) and \(m\). 
Then for any \(\tau>0\) and \(m\geq 1\), the energy stability holds in the following sense,_ \[E^{m+1}\leq cE^{m-1}, \tag{2.8}\] _where \(E^{m+1}:=\sum\limits_{j=1}^{N}|\mathbf{h}_{j}^{m+1}|^{2}\)._ Proof.: Taking \(\mathbf{\omega}^{h}=\frac{\mathbf{X}^{m+1}-\mathbf{X}^{m-1}}{2\tau}\) and \(\varphi^{h}=\frac{\kappa^{m+1}+\kappa^{m-1}}{2}\) in (2.5), we get \[\left(\frac{\kappa^{m+1}+\kappa^{m-1}}{2},\frac{\kappa^{m+1}+ \kappa^{m-1}}{2}\right)_{\Gamma^{m}}^{h}\] \[=-\left(\frac{\mathbf{X}^{m+1}-\mathbf{X}^{m-1}}{2\tau},\left( \frac{\kappa^{m+1}+\kappa^{m-1}}{2}\right)\mathbf{n}^{m}\right)_{\Gamma^{m}}^ {h}\] \[=-\left(\frac{\partial_{s}\mathbf{X}^{m+1}+\partial_{s}\mathbf{X} ^{m-1}}{2},\frac{\partial_{s}\mathbf{X}^{m+1}-\partial_{s}\mathbf{X}^{m-1}}{2 \tau}\right)_{\Gamma^{m}}\] \[=-\frac{1}{4\tau}\left(\left(\partial_{s}\mathbf{X}^{m+1}, \partial_{s}\mathbf{X}^{m+1}\right)_{\Gamma^{m}}-\left(\partial_{s}\mathbf{X} ^{m-1},\partial_{s}\mathbf{X}^{m-1}\right)_{\Gamma^{m}}\right).\] Noticing \[\left(\partial_{s}\mathbf{X}^{m+1},\partial_{s}\mathbf{X}^{m+1}\right)_{\Gamma^{m} }=\sum_{j=1}^{N}\frac{|\mathbf{h}_{j}^{m+1}|}{|\mathbf{h}_{j}^{m}|}\frac{| \mathbf{h}_{j}^{m+1}|}{|\mathbf{h}_{j}^{m}|}|\mathbf{h}_{j}^{m}|=\sum_{j=1}^{N} \frac{|\mathbf{h}_{j}^{m+1}|^{2}}{|\mathbf{h}_{j}^{m}|},\] we can estimate for any \(\tau>0\), \[E^{m+1}-cE^{m-1} =\sum_{j=1}^{N}|\mathbf{h}_{j}^{m+1}|^{2}-c\sum_{j=1}^{N}|\mathbf{ h}_{j}^{m-1}|^{2}\] \[\leq\left(\max_{j}|\mathbf{h}_{j}^{m}|\right)\sum_{j=1}^{N}\frac{ |\mathbf{h}_{j}^{m+1}|^{2}}{|\mathbf{h}_{j}^{m}|}-\left(\min_{j}|\mathbf{h}_{ j}^{m}|\right)c\sum_{j=1}^{N}\frac{|\mathbf{h}_{j}^{m-1}|^{2}}{|\mathbf{h}_{j} ^{m}|}\] \[\leq\left(\max_{j}|\mathbf{h}_{j}^{m}|\right)\left(\sum_{j=1}^{N }\frac{|\mathbf{h}_{j}^{m+1}|^{2}}{|\mathbf{h}_{j}^{m}|}-\sum_{j=1}^{N}\frac{ |\mathbf{h}_{j}^{m-1}|^{2}}{|\mathbf{h}_{j}^{m}|}\right)\] \[=-4\tau\max_{j}|\mathbf{h}_{j}^{m}|\left(\frac{\kappa^{m+1}+ \kappa^{m-1}}{2},\frac{\kappa^{m+1}+\kappa^{m-1}}{2}\right)_{\Gamma^{m}}^{h} \leq 0,\] and the proof is completed. ### Mesh regularization As was mentioned earlier, the semi-discrete scheme (2.2) possesses the mesh equidistribution property (14, Theorem 79). In practice, the fully-discrete BGN1 scheme (2.3) can maintain the asymptotic long-time mesh equidistribution property. However, the BGN2 scheme (2.5) may have oscillating mesh ratio due to the structure of two-step method, which can potentially amplify the mesh ratio and cause mesh distortion or clustering during the evolution, especially for some initial curves which are not so regular, e.g., a 'flower' curve (see the second row of Fig. 8). Therefore, a mesh regularization procedure is necessary in real simulations to help the mesh maintain a good distribution property during the evolution, when the mesh ratio exceeds a given threshold value. Inspired by the good mesh distribution property of the BGN1 scheme, we utilize the BGN1 scheme as the mesh regularization technique. In the following, we denote \(n_{\text{MR}}\) as the threshold value chosen initially. If the mesh ratio \(\Psi^{m}>n_{\text{MR}}\), then we use the mesh regularization procedure to improve the mesh distribution. We present a summary of the complete algorithm of BGN2 scheme for solving the CSF in Algorithm 2.2. 
**Algorithm 2.2**.: _(BGN2 scheme for CSF)_ _Step_ **0.** _Given the initial curve \(\Gamma(0)\), and \(N,T,n_{\rm MR}\), \(\tau\), compute \({\bf X}^{0}\) as in_ Step 0 _in Algorithm 2.1._ _Step_ **1.** _Using \({\bf X}^{0}\) as the input, we compute \(\kappa^{0}\) using the discrete curvature formula (2.6) and solve \(({\bf X}^{1},\kappa^{1})\) via the BGN1 scheme (2.3). Set \(m=1\)._ _Step_ **2.** _Calculate the mesh ratio \(\Psi^{m}\) of \({\bf X}^{m}\), \(m\geq 1\)._ _Step_ **3.** _If the mesh ratio \(\Psi^{m}>n_{\rm MR}\), then replace \(({\bf X}^{m},\kappa^{m})\) with the solution of the BGN1 scheme (2.3) with \({\bf X}^{m-1}\) as the input by one run; otherwise, skip this step._ _Step_ **4.** _Use the BGN2 scheme (2.5) to obtain \(({\bf X}^{m+1},\kappa^{m+1})\)._ _Step_ **5.** _Update \(m=m+1\). If \(m<T/\tau\), then go back to **Step 2**; otherwise, stop the algorithm and output the data._ As shown in _Step 3_ of Algorithm 2.2, if the mesh ratio \(\Psi^{m}>n_{\rm MR}\), we replace \(({\bf X}^{m},\kappa^{m})\) with the solution of the BGN1 scheme (2.3) with \({\bf X}^{m-1}\) as the input by one run, to help us realize the mesh regularization. Extensive numerical experiments suggest that the mesh regularization procedure is very effective, and the mesh ratio decreases immediately to a small value after this procedure (cf. Fig. 5(d) in Section 5). The BGN2 scheme with the aid of the BGN1 scheme as the mesh regularization is very efficient and stable in practical simulations. The reason comes from that the BGN1 scheme (2.3) can intrinsically lead to a good mesh distribution property, which was explained in [10, 14], but a more convincing explanation needs further rigorous numerical analysis for the scheme. One concern that may arise is whether the BGN2 scheme with necessary mesh regularization can still achieve second-order accuracy, considering that the BGN1 scheme is only first-order accurate. It is important to note that for certain smooth initial curves, such as elliptic curves, the mesh regularization procedure is never required during the evolution. In such cases, the numerical evolution remains remarkably stable and the mesh ratio remains bounded. While for certain special initial curves, like a 'flower' curve or a 'tube' curve, the mesh regularization procedure may be needed only a few times (cf. Section 5.3). Nevertheless, this does not compromise the temporal second-order accuracy of the BGN2 scheme (2.5). ## 3 Shape metric is a better choice As we are aware, it is an interesting and thought-provoking problem to determine how to quantify the difference between two curves in 2D or two surfaces in 3D. Given two closed curves \(\Gamma_{1}\) and \(\Gamma_{2}\), we assume that the two curves are parametrized by \(\mathbf{X}(\rho)\) and \(\mathbf{Y}(\rho)\), respectively, over the same interval \(\mathbb{I}\). 
Consequently, we can define the following four metrics for measurement: * (\(L^{2}\)**-error**) The \(L^{2}\)-norm between the parameterized functions \(\mathbf{X}(\rho)\) and \(\mathbf{Y}(\rho)\) is defined in a classical way \[A(\mathbf{X},\mathbf{Y}):=\|\mathbf{X}(\rho)-\mathbf{Y}(\rho)\|_{L^{2}( \mathbb{I})}.\] * (\(L^{\infty}\)**-error**) The \(L^{\infty}\)-norm between the parameterized functions \(\mathbf{X}(\rho)\) and \(\mathbf{Y}(\rho)\) is defined as \[B(\mathbf{X},\mathbf{Y}):=\|\mathbf{X}(\rho)-\mathbf{Y}(\rho)\|_{L^{\infty}( \mathbb{I})}.\] * (**Manifold distance**) The manifold distance between the curves \(\Gamma_{1}\) and \(\Gamma_{2}\) is defined as [32] \[\mathrm{M}\left(\Gamma_{1},\Gamma_{2}\right):=|(\Omega_{1}\setminus\Omega_{2 })\cup(\Omega_{2}\setminus\Omega_{1})|=|\Omega_{1}|+|\Omega_{2}|-2|\Omega_{1} \cup\Omega_{2}|,\] where \(\Omega_{1}\) and \(\Omega_{2}\) represent the regions enclosed by \(\Gamma_{1}\) and \(\Gamma_{2}\), respectively, and \(|\Omega|\) denotes the area of \(\Omega\). * (**Hausdorff distance**) The Hausdorff distance between the curves \(\Gamma_{1}\) and \(\Gamma_{2}\) is defined as [2] \[H(\Gamma_{1},\Gamma_{2})=\max\{\widetilde{H}(\Gamma_{1},\Gamma_{2}), \widetilde{H}(\Gamma_{2},\Gamma_{1})\},\] where \(\widetilde{H}(\Gamma_{1},\Gamma_{2})=\max\limits_{a\in\Gamma_{1}}\min \limits_{b\in\Gamma_{2}}d(a,b)\), and \(d\) is the Euclidean distance. **Remark 3.1**.: _The \(L^{2}\)-error and \(L^{\infty}\)-error fall within the domain of function metrics, which rely on the parametrization of curves. On the other hand, as demonstrated in [32, Proposition 5.1] and [2], it has been easily proven that both manifold distance and Hausdorff distance fulfill the properties of symmetry, positivity and the triangle inequality. Therefore, they belong to the category of shape metrics and not influenced by the specific parametrization._ **Remark 3.2**.: _It should be noted that the aforementioned shape metrics can be easily calculated using simple algorithms. As the numerical solutions are represented as polygons, it is very easy to calculate the area of the symmetric difference region, i.e., the manifold distance, between two polygonal curves. Additionally, a polygon-based approach proposed in the literature [2] can be employed to calculate the Hausdorff distance between planar curves._ In order to test the convergence rate of numerical schemes, for example, we consider the evolution of the CSF with an initial ellipse defined by \[\{(x,y)\in\mathbb{R}^{2}:\quad x^{2}+4y^{2}=4\}.\] This initial ellipse is approximated using an equidistributed polygon \(\mathbf{X}^{0}\) with \(N\) vertices. Here, we simulate the CSF by using three different numerical schemes: Dziuk's scheme [18, Section 6], BGN1 scheme (2.3) and BGN2 scheme (2.5). Since the exact solution of the CSF for an elliptical curve is unknown, we first compute a reference solution \(\mathbf{X}_{\mathrm{ref}}\) by Dziuk's scheme (to test the convergence of Dziuk's scheme) or the BGN2 scheme (to test the convergence of BGN-type schemes) with a fine mesh and a tiny time step size, e.g., \(N=10000\) and \(\tau=10^{-1}*2^{-11}\). To test the temporal error, we still take a large number of grid points, e.g., \(N=10000\), such that the spatial error is ignorable. 
The numerical error and the corresponding convergence order are then determined as follows \[\mathcal{E}_{\mathcal{M}}:=\mathcal{E}_{\tau}(T)=\mathcal{M}(\mathbf{X}_{\tau}^{k},\mathbf{X}_{\mathrm{ref}}),\quad\mathrm{Order}=\log\Big(\frac{\mathcal{E}_{\tau}(T)}{\mathcal{E}_{\tau/2}(T)}\Big)\Big/\log 2, \tag{3.1}\] where \(k=T/\tau\), and \(\mathcal{M}\) represents any one of the four metrics defined above. Tables 1-3 display the numerical errors at time \(T=0.25\) measured by the four different metrics for Dziuk's scheme [18], the BGN1 scheme (2.3) and the BGN2 scheme (2.5), respectively. As anticipated, we easily observe linear convergence in time for Dziuk's scheme across all four different metrics. Meanwhile, linear and quadratic convergence in both shape metrics (i.e., the manifold distance and Hausdorff distance) are observed for the BGN1 scheme in Table 2 and the BGN2 scheme in Table 3, respectively. It is worth noting that unlike Dziuk's scheme, the convergence of the BGN1 scheme and BGN2 scheme under function metrics (the \(L^{2}\)-norm and \(L^{\infty}\)-norm) is not as satisfactory. This is not surprising since the error in classical Sobolev space depends on the specific parametrization of the curve. In contrast, the BGN formulation (2.1) allows tangential motion to make the mesh points equidistribute, which indeed affects the parametrization while preserving the shape of the curve. Thus it is not appropriate to use the classical function metrics to quantify the errors of the BGN-type schemes which are based on the BGN formulation. Instead, as observed from Tables 2 and 3, the shape metrics are much more suitable for quantifying the numerical errors of the schemes that allow intrinsic tangential velocity. In the remainder of the article, we will employ the manifold distance or the Hausdorff distance when measuring the difference between two curves. \begin{table} \begin{tabular}{l l l l l} \hline \hline Errors & \(\tau=\tau_{0}\) & \(\tau_{0}/2\) & \(\tau_{0}/2^{2}\) & \(\tau_{0}/2^{3}\) \\ \hline \(L^{2}\)-norm & 1.17E-2 & 6.31E-3 & 3.26E-3 & 1.62E-3 \\ Order & – & 0.89 & 0.95 & 1.01 \\ \hline \(L^{\infty}\)-norm & 3.05E-2 & 1.63E-2 & 8.41E-3 & 4.19E-3 \\ Order & – & 0.90 & 0.96 & 1.00 \\ \hline Manifold distance & 6.89E-2 & 3.65E-2 & 1.86E-2 & 9.17E-3 \\ Order & – & 0.92 & 0.97 & 1.02 \\ \hline Hausdorff distance & 3.04E-2 & 1.62E-2 & 8.29E-3 & 4.09E-3 \\ Order & – & 0.91 & 0.97 & 1.02 \\ \hline \hline \end{tabular} \end{table} Table 1: Numerical errors quantified by various metrics for Dziuk’s scheme [18, Section 6], with the parameters \(N=10000,\tau_{0}=1/40\), and \(T=0.25\). \begin{table} \begin{tabular}{l l l l l} \hline \hline Errors & \(\tau=\tau_{0}\) & \(\tau_{0}/2\) & \(\tau_{0}/2^{2}\) & \(\tau_{0}/2^{3}\) \\ \hline \(L^{2}\)-norm & 4.25E-3 & 3.98E-3 & 4.05E-3 & 4.15E-3 \\ Order & – & 0.10 & \(-0.03\) & \(-0.03\) \\ \hline \(L^{\infty}\)-norm & 1.00E-2 & 9.17E-3 & 9.47E-3 & 9.79E-3 \\ Order & – & 0.12 & \(-0.05\) & \(-0.05\) \\ \hline Manifold distance & 3.11E-2 & 1.58E-2 & 7.96E-3 & 4.00E-3 \\ Order & – & 0.98 & 0.99 & 0.99 \\ \hline Hausdorff distance & 8.23E-3 & 4.18E-3 & 2.11E-3 & 1.06E-3 \\ Order & – & 0.98 & 0.99 & 0.99 \\ \hline \hline \end{tabular} \end{table} Table 2: Numerical errors quantified by various metrics for the BGN1 scheme (2.3), with the parameters \(N=10000,\tau_{0}=1/40\), \(T=0.25\). 
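To make the two shape metrics concrete, the following is a minimal Python sketch (an illustration, not the MATLAB code used for the experiments) that evaluates them for closed polygonal curves given as \((N,2)\) vertex arrays. The manifold distance uses the symmetric-difference area via Shapely, while the Hausdorff distance is approximated from the polygon vertices with SciPy, rather than with the exact polygon-based algorithm of [2].

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from shapely.geometry import Polygon

def manifold_distance(curve1, curve2):
    # area of the symmetric difference of the regions enclosed by the two polygons
    p1, p2 = Polygon(curve1), Polygon(curve2)
    return p1.symmetric_difference(p2).area

def hausdorff_distance(curve1, curve2):
    # symmetric Hausdorff distance between the two vertex sets
    # (a vertex-sampled approximation of the curve-to-curve distance)
    d12 = directed_hausdorff(curve1, curve2)[0]
    d21 = directed_hausdorff(curve2, curve1)[0]
    return max(d12, d21)

# example: a polygonal unit circle against the exact CSF solution at t = 0.05
rho = np.linspace(0.0, 1.0, 200, endpoint=False)
circle = np.stack([np.cos(2 * np.pi * rho), np.sin(2 * np.pi * rho)], axis=1)
shrunk = np.sqrt(1.0 - 2.0 * 0.05) * circle
print(manifold_distance(circle, shrunk), hausdorff_distance(circle, shrunk))
```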
\begin{table} \begin{tabular}{l l l l l} \hline \hline Errors & \(\tau=\tau_{0}\) & \(\tau_{0}/2\) & \(\tau_{0}/2^{2}\) & \(\tau_{0}/2^{3}\) \\ \hline \(L^{2}\)-norm & 1.49E-2 & 1.45E-2 & 1.45E-2 & 1.43E-2 \\ Order & – & 0.04 & 0.00 & 0.02 \\ \hline \(L^{\infty}\)-norm & 3.32E-2 & 3.30E-2 & 3.29E-2 & 3.29E-2 \\ Order & – & 0.01 & 0.00 & 0.00 \\ \hline Manifold distance & 8.44E-4 & 2.11E-4 & 5.27E-5 & 1.32E-5 \\ Order & – & 2.00 & 2.00 & 1.99 \\ \hline Hausdorff distance & 2.00E-4 & 4.98E-5 & 1.26E-5 & 3.29E-6 \\ Order & – & 2.01 & 1.98 & 1.94 \\ \hline \hline \end{tabular} \end{table} Table 3: Numerical errors quantified by various metrics for the BGN2 scheme (2.5), with the parameters \(N=10000,\tau_{0}=1/40\), \(T=0.25\). ## 4 Applications to other geometric flows In this section, we extend the above proposed BGN2 scheme to other geometric flows. ### For area-preserving curve-shortening flow (AP-CSF) As is known, the AP-CSF can be viewed as the \(L^{2}\)-gradient flow with respect to the length functional under the constraint of total area preservation [14; 24]. Similar to (2.1), we rewrite the AP-CSF as the following coupled equations \[\begin{split}\partial_{t}\mathbf{X}\cdot\mathbf{n}& =-\kappa+\left\langle\kappa\right\rangle,\\ \kappa\mathbf{n}&=-\partial_{ss}\mathbf{X},\end{split} \tag{4.1}\] where the average of curvature is defined as \(\left\langle\kappa\right\rangle:=\int_{\Gamma(t)}\kappa\mathrm{d}s/\int_{ \Gamma(t)}1\mathrm{d}s\). The fully-discrete, first-order in time semi-implicit BGN scheme for AP-CSF reads as [14]: (**BGN1 scheme for AP-CSF**): For \(m\geq 0\), find \(\mathbf{X}^{m+1}\in[V^{h}]^{2}\) and \(\kappa^{m+1}\in V^{h}\) such that \[\left\{\begin{array}{l}\left(\frac{\mathbf{X}^{m+1}-\mathbf{X}^{m}}{\tau}, \varphi^{h}\mathbf{n}^{m}\right)_{\Gamma^{m}}^{h}+\left(\kappa^{m+1}-\left\langle \kappa^{m+1}\right\rangle_{\Gamma^{m}},\varphi^{h}\right)_{\Gamma^{m}}^{h}=0, \\ \left(\kappa^{m+1},\mathbf{n}^{m}\cdot\mathbf{\omega}^{h}\right)_{\Gamma^{m}}^{h }-\left(\partial_{s}\mathbf{X}^{m+1},\partial_{s}\mathbf{\omega}^{h}\right)_{ \Gamma^{m}}=0,\end{array}\right. \tag{4.2}\] for all \((\varphi^{h},\mathbf{\omega}^{h})\in V^{h}\times[V^{h}]^{2}\), where \(\left\langle\kappa^{m+1}\right\rangle_{\Gamma^{m}}:=\left(\kappa^{m+1},1 \right)_{\Gamma^{m}}^{h}/\left(1,1\right)_{\Gamma^{m}}^{h}\). Based on the same spirit, we can propose the following second-order BGN2 scheme. (**BGN2 scheme for AP-CSF**): For \(m\geq 1\), find \((\mathbf{X}^{m+1},\kappa^{m+1})\in[V^{h}]^{2}\times V^{h}\) such that \[\left\{\begin{array}{l}\left(\frac{\mathbf{X}^{m+1}-\mathbf{X}^{m-1}}{2\tau},\varphi^{h}\mathbf{n}^{m}\right)_{\Gamma^{m}}^{h}=-\left(\frac{\kappa^{m+1} +\kappa^{m-1}}{2}-\left\langle\frac{\kappa^{m+1}+\kappa^{m-1}}{2}\right\rangle _{\Gamma^{m}},\varphi^{h}\right)_{\Gamma^{m}}^{h},\\ \left(\frac{\kappa^{m+1}+\kappa^{m-1}}{2},\mathbf{n}^{m}\cdot\mathbf{\omega}^{h} \right)_{\Gamma^{m}}^{h}-\left(\frac{\partial_{s}\mathbf{X}^{m+1}+\partial_{s }\mathbf{X}^{m-1}}{2},\partial_{s}\mathbf{\omega}^{h}\right)_{\Gamma^{m}}=0,\end{array}\right. \tag{4.3}\] for all \((\varphi^{h},\mathbf{\omega}^{h})\in V^{h}\times[V^{h}]^{2}\). Similarly, the stiffness matrix of the linear system to be solved in (4.3) is exactly the same as the BGN1 scheme (4.2), whose well-posedness has been established in [14; Theorem 90]. Additionally, a mild energy stability can be easily obtained under the same conditions as stated in Theorem 2.2. 
### For surface diffusion flow (SDF) We consider the fourth-order flow--SDF, which can be viewed as the \(H^{-1}\)-gradient flow with respect to the length functional [7; 14]. In a similar fashion, we rephrase the SDF as the subsequent system of equations \[\begin{split}\partial_{t}\mathbf{X}\cdot\mathbf{n}& =\partial_{ss}\kappa,\\ \kappa\mathbf{n}&=-\partial_{ss}\mathbf{X}.\end{split} \tag{4.4}\] The fully discrete, first-order in time semi-implicit BGN scheme for SDF reads as [10]: (**BGN1 scheme for SDF**): For \(m\geq 0\), find \(\mathbf{X}^{m+1}\in[V^{h}]^{2}\) and \(\kappa^{m+1}\in V^{h}\) such that \[\left\{\begin{split}&\left(\frac{\mathbf{X}^{m+1}-\mathbf{X}^{m}}{ \tau},\varphi^{h}\mathbf{n}^{m}\right)_{\Gamma^{m}}^{h}+\left(\partial_{s} \kappa^{m+1},\partial_{s}\varphi^{h}\right)_{\Gamma^{m}}=0,\quad\forall\ \varphi^{h}\in V^{h},\\ &\left(\kappa^{m+1},\mathbf{n}^{m}\cdot\mathbf{\omega}^{h}\right)_{ \Gamma^{m}}^{h}-\left(\partial_{s}\mathbf{X}^{m+1},\partial_{s}\mathbf{\omega}^{h }\right)_{\Gamma^{m}}=0,\quad\forall\ \mathbf{\omega}^{h}\in[V^{h}]^{2}.\end{split}\right. \tag{4.5}\] In line with the same approach, we can put forward the subsequent second-order BGN2 scheme: (**BGN2 scheme for SDF**): For \(m\geq 1\), find \((\mathbf{X}^{m+1},\kappa^{m+1})\in[V^{h}]^{2}\times V^{h}\) such that \[\left\{\begin{split}&\left(\frac{\mathbf{X}^{m+1}-\mathbf{X}^{m-1}}{2 \tau},\varphi^{h}\mathbf{n}^{m}\right)_{\Gamma^{m}}^{h}+\left(\frac{\partial_ {s}\kappa^{m+1}+\partial_{s}\kappa^{m-1}}{2},\partial_{s}\varphi^{h}\right)_ {\Gamma^{m}}=0,\\ &\left(\frac{\kappa^{m+1}+\kappa^{m-1}}{2},\mathbf{n}^{m}\cdot \mathbf{\omega}^{h}\right)_{\Gamma^{m}}^{h}-\left(\frac{\partial_{s}\mathbf{X}^{m +1}+\partial_{s}\mathbf{X}^{m-1}}{2},\partial_{s}\mathbf{\omega}^{h}\right)_{ \Gamma^{m}}=0,\end{split}\right. \tag{4.6}\] for all \((\varphi^{h},\mathbf{\omega}^{h})\in V^{h}\times[V^{h}]^{2}\). The well-posedness and energy stability of the above scheme can be shown similarly under certain mild conditions. For the schemes (4.3) and (4.6), we consistently set \(\mathbf{X}^{0}\in[V^{h}]^{2}\) as specified in Algorithm 2.1, that is, \(\mathbf{X}^{0}\) is a parametrization of an (almost) equidistributed interpolation polygon with \(N\) vertices for the initial curve \(\Gamma(0)\). Similar as the case of CSF, to start the BGN2 schemes, we need to prepare the initial data and \((\mathbf{X}^{1},\kappa^{1})\), which can be achieved by using the similar approach as Algorithm 2.1 by using the corresponding BGN1 scheme. A complete second-order scheme can be obtained as in Algorithm 2.2 with the corresponding BGN1 scheme as a mesh regularization when necessary. ## 5 Numerical results ### Convergence tests In this subsection, we test the temporal convergence of the second-order schemes (2.5), (4.3) and (4.6) for solving the three geometric flows: CSF, APCSF and SDF, respectively. As previously discussed in Section 3, we quantify the numerical errors of the curves using the shape metrics, such as the manifold distance and Hausdorff distance. 
For the following simulations, we select four distinct types of initial shapes:

* (**Shape 1**): a unit circle;
* (**Shape 2**): an ellipse with semi-major axis 2 and semi-minor axis 1;
* (**Shape 3**): a 'tube' shape, which is a curve comprising a \(4\times 1\) rectangle with two semicircles on its left and right sides;
* (**Shape 4**): a 'flower' shape, which is parameterized by \(\mathbf{X}(\rho)=((2+\cos(12\pi\rho))\cos(2\pi\rho),(2+\cos(12\pi\rho))\sin(2\pi\rho)),\quad\rho\in\mathbb{I}=[0,1]\).

We note that the CSF with Shape 1 as its initial shape has the following true solution, i.e., \[\mathbf{X}_{\text{true}}(\rho,t)=\sqrt{1-2t}(\cos(2\pi\rho),\sin(2\pi\rho)),\quad\rho\in\mathbb{I},\quad t\in[0,0.5).\] For this particular case, we compute the numerical error by comparing the numerical solution with the true solution. For all other cases, we utilize reference solutions obtained by the BGN2 scheme with a large \(N\) and a tiny time step size \(\tau\). In addition, the mesh regularization threshold is consistently set to \(n_{\rm MR}=10\).

We begin our test by calculating the convergence of the BGN2 scheme (2.5) for the CSF with either Shape 1 or Shape 2 as initial data. Fig. 1 presents a log-log plot of the numerical errors at time \(T=0.25\), measured by the manifold distance. The errors for the Hausdorff distance, which are similar, are not included here for brevity. To ensure a fair comparison, we also include the numerical results of the BGN1 scheme (2.3) under the same computational parameters, with a fixed number of grid points \(N=10000\). As clearly shown in Fig. 1, the numerical error of the BGN2 scheme reduces very rapidly with second-order accuracy in time, while the BGN1 scheme only achieves first-order convergence. Fig. 2 shows the temporal errors of the BGN2 scheme (4.3) for solving the AP-CSF and SDF with Shape 2 as initial data. It is clear that the numerical error of the BGN2 scheme converges quadratically, whereas the BGN1 scheme (4.2) converges only linearly. Moreover, since both the AP-CSF and SDF eventually evolve into a circle, we also investigate the convergence of the BGN2 scheme over long-time simulations. As illustrated in Fig. 3, the numerical errors of the BGN2 scheme at three different times \(T=0.25,0.5,2\) all display quadratic convergence, and they are much smaller than those of the BGN1 scheme.

Figure 1: Log-log plot of the numerical errors at time \(T=0.25\) measured by the manifold distance for BGN1 (2.3) and BGN2 (2.5) schemes for solving the CSF with two different initial curves: (a) Shape 1 and (b) Shape 2, respectively, where the number of nodes is fixed as \(N=10000\).

### Comparison of computational costs

In order to show that the computational cost of the proposed BGN2 scheme is comparable to that of the BGN1 scheme, we present two examples of solving the CSF and SDF, respectively. The numerical codes were written using MATLAB 2021b and run on a MacBook Pro with a 1.4GHz quad-core Intel Core i5 and 8GB RAM.

Figure 2: Log-log plot of the numerical errors at time \(T=0.25\), measured by the manifold distance, for solving two different flows with Shape 2 as the initial curve: (a) AP-CSF and (b) SDF, respectively.

Figure 3: Log-log plot of the numerical errors measured by the manifold distance, at three different times (i.e., \(T=0.25,0.5,2\)) for solving two different flows with Shape 2 as the initial curve: (a) AP-CSF and (b) SDF, respectively.
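The convergence orders reported above (and in Table 3) are obtained by halving the time step and comparing successive errors. A minimal Python sketch of this bookkeeping, assuming a list of errors \(E(\tau_0), E(\tau_0/2), E(\tau_0/4), \dots\) measured in any of the shape metrics:

```python
import numpy as np

def observed_orders(errors):
    """Observed convergence orders from errors at time steps tau0, tau0/2, tau0/4, ..."""
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

# Example with the manifold-distance errors listed in Table 3:
print(observed_orders([8.44e-4, 2.11e-4, 5.27e-5, 1.32e-5]))
# prints orders close to 2, consistent with the 'Order' rows of Table 3
```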
Table 4 displays a comparison of CPU times in seconds and numerical errors at time \(T=0.05\), as measured by the manifold distance \(\mathcal{E}_{M}(T)\) and Hausdorff distance \(\mathcal{E}_{H}(T)\), using the BGN2 scheme (2.5) and the BGN1 scheme (2.3) for solving the CSF, where the initial shape is chosen as Shape 1. Table 5 provides similar results for solving the SDF with Shape 3 as its initial shape. Based on the findings presented in Tables 4 and 5, the following conclusions can be drawn: (i) On the same mesh, the computational cost of the BGN2 scheme is slightly higher compared to the BGN1 scheme, as it involves additional calculations for the initial values and the right-hand side of the linear system at each time level. However, the numerical solution obtained using the BGN2 scheme is significantly more accurate than the BGN1 scheme. (ii) Achieving the same level of accuracy requires a much higher computational cost for the BGN1 scheme. For instance, as demonstrated in Table 4, when comparing the results of the BGN1 scheme with \(N=5120\) and the BGN2 scheme with \(N=1280\), it is evident that the BGN2 scheme is not only more accurate but also more than 100 times faster than the BGN1 scheme. Similar trends can be observed in Table 5 for the SDF. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{BGN1 scheme (2.3)} & \multicolumn{6}{|c|}{BGN2 scheme (2.5)} \\ \hline \(N\) & \(\mathcal{E}_{M}(T)\) & \(\mathcal{E}_{H}(T)\) & Time & \(N\) & \(\mathcal{E}_{M}(T)\) & \(\mathcal{E}_{H}(T)\) & Time \\ \hline 320 & 5.61E-4 & 1.25E-4 & 0.350s & 160 & 8.35E-4 & 2.02E-4 & 0.200s \\ \hline 640 & 3.34E-4 & 6.37E-5 & 1.70s & 320 & 2.09E-4 & 5.04E-5 & 0.430s \\ \hline 1280 & 1.81E-4 & 3.22E-5 & 9.85s & 640 & 5.20E-5 & 1.27E-5 & 2.30s \\ \hline 2560 & 9.38E-5 & 1.62E-5 & 110s & 1280 & 1.29E-5 & 3.20E-6 & 12.9s \\ \hline 5120 & 4.78E-5 & 8.16E-5 & 1893s & 2560 & 3.08E-6 & 8.16E-7 & 130s \\ \hline \end{tabular} \end{table} Table 4: Comparisons of the CPU times (seconds) and the numerical errors measured from the manifold distance \(\mathcal{E}_{M}(T)\) and Hausdorff distance \(\mathcal{E}_{H}(T)\) for the BGN2 scheme (2.5) and the BGN1 scheme (2.3) applied to CSF, where the initial shape is chosen as Shape 1, with \(\tau=0.5/N\) and \(T=0.05\). ### Applications to the curve evolution As is well-known, the AP-CSF and SDF possess some structure-preserving properties, such as the perimeter decreasing and area conserving properties [7; 23; 24]. In this subsection, we investigate the structure-preserving properties of the proposed BGN2 schemes (4.3) and (4.6) applied to AP-CSF and SDF, respectively. As an example, we mainly focus on the SDF here. Moreover, we will discuss the importance of the mesh regularization procedure. Fig. 4 (a) illustrates the evolution of an initially elliptic curve, referred to as Shape 2, driven by SDF towards its equilibrium state. Fig. 4(b)-(d) show the evolution of various geometric quantities during the process: the relative area loss \(\Delta A(t)\), the normalized perimeter \(L(t)/L(0)\), and the mesh distribution function \(\Psi(t)\), which are defined respectively as \[\Delta A(t)|_{t=t_{m}}=\frac{A^{m}-A^{0}}{A^{0}},\quad\left.\frac{L(t)}{L(0)} \right|_{t=t_{m}}=\frac{L^{m}}{L^{0}},\quad\Psi(t)|_{t=t_{m}}=\Psi^{m},\quad m \geq 0,\] where \(A^{m}\) is the area enclosed by the polygon determined by \(\mathbf{X}^{m}\), \(L^{m}\) represents the perimeter of the polygon, and the mesh ratio \(\Psi^{m}\) is defined in (2.7). As depicted in Fig. 
4(b), the area loss exhibits a weakly oscillating behavior, which may result from the two-step structure of the BGN2 scheme. It is worth noting that despite the oscillations, the normalized area loss remains very low, consistently below \(0.01\%\). By employing a smaller grid size, the area loss can be further reduced, and it is significantly lower than that of the BGN1 scheme under the same computational parameters. Furthermore, Fig. 4(c) shows that the BGN2 scheme preserves the perimeter-decreasing property of the SDF numerically. In addition, in Fig. 4(d), it can be observed that the mesh distribution function \(\Psi(t)\) remains lower than \(1.2\) during the evolution. This indicates that the mesh distribution remains well-maintained and almost equidistributed during the process. Therefore, in this scenario, there is no need to perform the mesh regularization procedure because \(\Psi(t)\) is always smaller than the chosen threshold \(n_{\mathrm{MR}}\) (here we choose it as \(10\)) in the simulations.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{BGN1 scheme (4.5)} & \multicolumn{6}{|c|}{BGN2 scheme (4.6)} \\ \hline \(N\) & \(\mathcal{E}_{M}(T)\) & \(\mathcal{E}_{H}(T)\) & Time & \(N\) & \(\mathcal{E}_{M}(T)\) & \(\mathcal{E}_{H}(T)\) & Time \\ \hline 320 & 4.73E-3 & 6.91E-4 & 0.470s & 160 & 7.51E-3 & 2.62E-3 & 0.260s \\ \hline 640 & 2.24E-3 & 3.38E-4 & 2.03s & 320 & 2.53E-3 & 1.14E-3 & 0.610s \\ \hline 1280 & 1.10E-3 & 1.67E-4 & 12.6s & 640 & 8.28E-4 & 4.17E-4 & 2.270s \\ \hline 2560 & 5.53E-4 & 8.34E-5 & 132.6s & 1280 & 2.30E-4 & 1.12E-4 & 15.1s \\ \hline 5120 & 2.80E-4 & 4.16E-5 & 2180s & 2560 & 5.42E-5 & 2.82E-5 & 153s \\ \hline \end{tabular} \end{table} Table 5: Comparisons of the CPU times (seconds) and the numerical errors measured by the manifold distance \(\mathcal{E}_{M}(T)\) and Hausdorff distance \(\mathcal{E}_{H}(T)\) using the BGN2 scheme (4.6) and the BGN1 scheme (4.5) for SDF, where the initial shape is chosen as Shape 3, with \(\tau=0.5/N\), and \(T=0.05\).

Figure 4: (a) Several snapshots of the curve evolution controlled by the SDF, starting with Shape 2 as its initial shape. (b) The normalized area loss as a function of time. (c) The normalized perimeter as a function of time. (d) The mesh ratio function \(\Psi(t)\) (in blue line) and the number of mesh regularizations (in red line). For (a)-(b), we used \(N=80\) and \(\tau=1/160\) while for (c)-(d), \(N=640\) and \(\tau=1/1280\).
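For reference, the monitored quantities above are elementary functions of the polygon vertices. A minimal Python sketch follows, assuming an ordered vertex array of a closed polygon and assuming the usual definition of the mesh ratio as the ratio of the longest to the shortest edge; this is an illustration rather than the MATLAB code used for the reported simulations.

```python
import numpy as np

def polygon_diagnostics(X):
    """Enclosed area (shoelace formula), perimeter, and mesh ratio
    Psi = max_j h_j / min_j h_j of a closed polygon with vertices X of shape (N, 2)."""
    x, y = X[:, 0], X[:, 1]
    area = 0.5 * np.abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    h = np.linalg.norm(np.roll(X, -1, axis=0) - X, axis=1)  # edge lengths
    return area, h.sum(), h.max() / h.min()

# Relative area loss at step m with respect to the initial polygon:
# delta_A = (area_m - area_0) / area_0
```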
To provide a more comprehensive comparison, we conduct simulations of the evolution of the Shape 3 curve driven by the SDF. Fig. 5(b)-(c) demonstrates that the BGN2 scheme effectively preserves two crucial geometric properties of the SDF: the area-conservation and perimeter-decreasing properties [7; 23]. It should be noted that Fig. 5(d) reveals that without the implementation of mesh regularization, the mesh distribution function \(\Psi(t)\) can become very large. Therefore, in our algorithm, when \(\Psi(t)\) exceeds a threshold \(n_{\text{MR}}\), we employ the BGN1 scheme (4.5) for a single run to perform mesh regularization, similar to Step 3 of Algorithm 2.2. As clearly shown in Fig. 5(d), following this step, the mesh ratio rapidly decreases to a low value, which makes the method more stable. Importantly, this mesh regularization procedure is only required a limited number of times throughout the entire evolution, without sacrificing the accuracy of the BGN2 scheme (cf. Table 5).

Figure 5: (a) Several snapshots of the curve evolution controlled by the SDF, starting with Shape 3 as its initial shape. (b) The normalized area loss as a function of time. (c) The normalized perimeter as a function of time. (d) The mesh distribution function \(\Psi(t)\) (in blue line) and the number of mesh regularizations (in red line). For (a)-(b) we used \(N=80\) and \(\tau=1/160\) while \(N=640\) and \(\tau=1/1280\) for (c)-(d).

Next, we proceed to simulate the evolution of a nonconvex curve, referred to as Shape 4. Fig. 6 and Fig. 7 (top row) show the evolution of the geometric quantities based on two different initial data preparations: Algorithm 2.1 and Remark 2.2, respectively. A comparison of the results reveals the superiority of the latter approach for several reasons: (i) the magnitude of the area loss is significantly lower when using the approach in Remark 2.2; (ii) the perimeter-decreasing property is preserved, while the perimeter oscillates at the beginning when using Algorithm 2.1; (iii) the number of mesh regularization implementations is smaller with the approach in Remark 2.2. Thus we recommend preparing the data for a nonconvex initial curve following the approach outlined in Remark 2.2.

Figure 6: Evolution of the three geometrical quantities when the initial data is prepared as in Algorithm 2.1: (a) the normalized area loss, (b) the normalized perimeter, (c) the mesh distribution function \(\Psi(t)\), with mesh regularization procedure.

Figure 7: Evolution of the three geometrical quantities when the initial data is prepared as in Remark 2.2: (a) the normalized area loss, (b) the normalized perimeter, (c) the mesh distribution function \(\Psi(t)\), with mesh regularization procedure (shown in the top row) and without mesh regularization procedure (shown in the bottom row).

Fig. 7 (bottom row) illustrates the evolution of the same quantities without the implementation of mesh regularization. In this case, all three quantities exhibit significant oscillations after a certain time period, and the area loss and mesh ratio of the polygon become excessively large, resulting in the breakdown of the BGN2 scheme. Notably, mesh clustering has happened at \(t=1\) (see Fig. 8 (c2)), eventually leading to mesh distortion at \(t=2\) (see Fig. 8 (d2)). These issues can be avoided by implementing mesh regularization (see Fig. 7 (a1)-(c1) and Fig. 8 (a1)-(d1)). This demonstrates the essential role of mesh regularization in the effectiveness of the BGN2 scheme: a single step of the BGN1 scheme can greatly improve the mesh distribution.

We close this section by simulating the evolution of a nonconvex initial curve [3, 28, 30] driven by CSF, AP-CSF and SDF using the BGN2 schemes. The initial curve can be parametrized as \[\mathbf{X}_{0}(\rho)=(\cos(2\pi\rho),\sin(\cos(2\pi\rho))+\sin(2\pi\rho)(0.7+\sin(2\pi\rho)\sin^{2}(6\pi\rho))),\] for \(\rho\in\mathbb{I}=[0,1]\). The numerical results are depicted in Fig. 9. As shown in this figure, the CSF initially transforms the intricate curve into a circle before it disappears. Both the AP-CSF and SDF drive the curve to evolve into a perfect circle as its equilibrium shape.

## 6 Conclusions

We proposed a novel temporal second-order, BGN-based parametric finite element method (BGN2 scheme) for solving geometric flows of curves such as CSF, AP-CSF and SDF.
Based on the BGN formulation and the corresponding semi-discrete FEM approximation [10; 11; 14], our numerical method employs a Crank-Nicolson leap-frog method to discretize in time, and the key innovation lies in choosing a discrete inner product over the curve \(\Gamma^{m}\) at the time level \(t_{m}\), such that all quantities are approximated with an error of \(\mathcal{O}(\tau^{2})\). We established the well-posedness and mild energy stability of the fully-discrete scheme, subject to suitable assumptions. We emphasized the use of shape metrics (manifold distance and Hausdorff distance) rather than function norms (e.g., \(L^{2}\)-norm, \(L^{\infty}\)-norm) to measure the numerical errors of the BGN-based schemes. In the case of certain initial curves, such as a 'flower' shape, we found that the BGN2 scheme, in conjunction with the BGN1 scheme for mesh regularization, exhibited remarkable efficiency and stability in practical simulations. Extensive numerical experiments demonstrated that the proposed BGN2 scheme achieves second-order accuracy in time, as measured by the shape metrics, outperforming the BGN1 scheme in terms of accuracy.

Figure 8: Evolution of the curve driven by SDF starting with Shape 4 as initial data by using the BGN2 scheme (4.6) with mesh regularization procedure (shown in the top row), and without mesh regularization procedure (shown in the bottom row). The simulations are conducted with a grid number of \(N=80\) and a time step size \(\tau=1/160\).

Furthermore, it is worth mentioning that the approach we have presented for constructing a temporal high-order BGN-based scheme can be readily extended to address various other problems, such as anisotropic geometric flows [5], Willmore flow [13], two-phase flow [21], solid-state dewetting [32] and geometric flows in 3D [31]. In our future research, we will further investigate the development of structure-preserving temporal high-order BGN-based schemes [7; 23] and conduct the numerical analysis of the BGN-based schemes with respect to the shape metric. These investigations will contribute to enhancing the overall understanding and applicability of the BGN-type scheme in different contexts.

Figure 9: Snapshots of the curve evolution using the proposed BGN2 schemes for three distinct geometric flows: CSF (first row), AP-CSF (second row) and SDF (third row). The simulations are conducted with \(N=80\) and \(\tau=1/640\).

**CRediT authorship contribution statement**

**Wei Jiang**: Conceptualization, Methodology, Supervision, Writing. **Chumei Su**: Conceptualization, Methodology, Supervision, Writing. **Ganghui Zhang**: Methodology, Numerical experiments, Visualization and Writing.

**Declaration of competing interest**

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

**Data availability**

No data was used for the research described in the article.

**Acknowledgement**

This work was partially supported by the NSFC 12271414 and 11871384 (W. J.), the Natural Science Foundation of Hubei Province Grant No. 2022CFB245 (W. J.), and NSFC 12201342 (C. S. and G. Z.). The numerical calculations in this paper have been done on the supercomputing system in the Supercomputing Center of Wuhan University.

**References**
2303.17942
Benchmarking FedAvg and FedCurv for Image Classification Tasks
Classic Machine Learning techniques require training on data available in a single data lake. However, aggregating data from different owners is not always convenient for different reasons, including security, privacy and secrecy. Data carry a value that might vanish when shared with others; the ability to avoid sharing the data enables industrial applications where security and privacy are of paramount importance, making it possible to train global models by implementing only local policies which can be run independently and even on air-gapped data centres. Federated Learning (FL) is a distributed machine learning approach which has emerged as an effective way to address privacy concerns by only sharing local AI models while keeping the data decentralized. Two critical challenges of Federated Learning are managing the heterogeneous systems in the same federated network and dealing with real data, which are often not independently and identically distributed (non-IID) among the clients. In this paper, we focus on the second problem, i.e., the problem of statistical heterogeneity of the data in the same federated network. In this setting, local models might be strayed far from the local optimum of the complete dataset, thus possibly hindering the convergence of the federated model. Several Federated Learning algorithms, such as FedAvg, FedProx and Federated Curvature (FedCurv), aiming at tackling the non-IID setting, have already been proposed. This work provides an empirical assessment of the behaviour of FedAvg and FedCurv in common non-IID scenarios. Results show that the number of epochs per round is an important hyper-parameter that, when tuned appropriately, can lead to significant performance gains while reducing the communication cost. As a side product of this work, we release the non-IID version of the datasets we used so to facilitate further comparisons from the FL community.
Bruno Casella, Roberto Esposito, Carlo Cavazzoni, Marco Aldinucci
2023-03-31T10:13:01Z
http://arxiv.org/abs/2303.17942v1
# Benchmarking FedAvg and FedCurv for Image Classification Tasks ###### Abstract Classic Machine Learning (ML) techniques require training on data available in a single data lake (either centralized or distributed). However, aggregating data from different owners is not always convenient for different reasons, including security, privacy and secrecy. Data carry a value that might vanish when shared with others; the ability to avoid sharing the data enables industrial applications where security and privacy are of paramount importance, making it possible to train global models by implementing only local policies which can be run independently and even on air-gapped data centres. Federated Learning (FL) is a distributed machine learning approach which has emerged as an effective way to address privacy concerns by only sharing local AI models while keeping the data decentralized. Two critical challenges of Federated Learning are managing the heterogeneous systems in the same federated network and dealing with real data, which are often not independently and identically distributed (non-IID) among the clients. In this paper, we focus on the second problem, i.e., the problem of statistical heterogeneity of the data in the same federated network. In this setting, local models might be strayed far from the local optimum of the complete dataset, thus possibly hindering the convergence of the federated model. Several Federated Learning algorithms, such as FedAvg, FedProx and Federated Curvature (FedCurv), aiming at tackling the non-IID setting, have already been proposed. This work provides an empirical assessment of the behaviour of FedAvg and FedCurv in common non-IID scenarios. Results show that the number of epochs per round is an important hyper-parameter that, when tuned appropriately, can lead to significant performance gains while reducing the communication cost. As a side product of this work, we release the non-IID version of the datasets we used so to facilitate further comparisons from the FL community. Federated Learning, Federated Curvature, FedCurv, non-IID 2022 The \(1^{st}\) Italian Conference on Big Data and Data Science, September 20-21, 2022, Milan, Italy [email protected] (B. Casella) [email protected] (R. Esposito) [email protected] (C. Cavazzoni) [email protected] (M. Aldinucci) [https://alpha.di.unito.it/bruno-casella/](https://alpha.di.unito.it/bruno-casella/) (B. Casella) [https://www.unito.it/persone/esposito](https://www.unito.it/persone/esposito) (R. Esposito) [https://alpha.di.unito.it/marco-aldinucci/](https://alpha.di.unito.it/marco-aldinucci/) (M. Aldinucci) 0000-0002-9513-6087 (B. Casella) 0000-0001-5366-292X (R. Esposito) 0000-0002-9589-4785 (C. Cavazzoni) 0000-0001-8788-0829 (M. Aldinucci) ## 1 Introduction The increasing availability of big data and computational resources supports the growth of accurate and reliable Machine Learning (ML) and Deep Learning (DL) models. More and more sectors, from the public to companies, follow data-driven approaches in their everyday decisions. One of the difficulties in building systems that support these decisions is the necessity to obtain large datasets for training the models. In fact, in many situations, data is scattered between many parties, and it would be beneficial to merge it into a single place in order to gather the information needed to build high-quality models. However, due to increasing privacy as well as security concerns merging the data might not be a feasible approach. 
For instance, the European GDPR [1] regulation adds strong constraints on the possibility of sharing data between parties when sensible pieces of information are at stake. Moreover, industrial data are often not shared even when allowed by the law because they represent an essential advantage over competitors or because of the security concerns deriving from the need to expose part of the data to other parties. Consequently, many players resort to train their models on local datasets, and the resulting AI models often suffer from low reliability, leading to generalizability and overfitting issues [2]. Federated Learning (FL) [3] is a machine learning setting that emerged as an effective way to train on distributed datasets without requiring any exchange of data-related information between the parties. This not only solves the privacy concerns, but also allows the parties to implement local policies that can be deployed more securely even in extreme settings such those required by air gapped systems. In a typical FL scenario, at each round, multiple clients take one or more steps of gradient descent on a shared model using their local data, and then a central node acts as an aggregator performing a weighted average (Federated Averaging [3]) of the resulting models. This approach generally performs well when the training data distributions are independent and identically distributed (IID). However, in a real-world scenario, due to the natural differences in the collection of data, FL faces difficulties due to data being non-independent and identically distributed (non-IID) among the involved parties. In these cases, the data of a single client does not represent the overall distribution of data among the federation, and this poses a key challenge for FL [4]. For example, in image recognition, models trained on sunny days may not be accurate on cloudy days due to an unrecognizable feature distribution. The first and most used FL algorithm is FedAvg, proposed by Google in 2017, which can actually work on non-IID data when strong assumptions hold on to the loss function. Specifically, a recent work [5] shows that if the optimization problem is strongly convex and smooth, then FedAvg would converge even non-IID data. FedAvg averages local gradients of the nodes of the federation. Recent years have seen the deployment of several FL algorithms for addressing the non-IID setting, like FedProx [6], FedNova [7] and SCAFFOLD [8]. All these algorithms have already been tested in [9], which provides valuable information about their behaviour in non-IID settings. In this paper, we compare the performance of FedAvg with FedCurv [10] on several common non-IID settings. FedCurv is an algorithm that addresses the problem of catastrophic forgetting [11] in FL by adding a penalty term to the loss function in order to compel the local models to a shared optimum. We tested both algorithms on three public image datasets, i.e. MNIST [12], CIFAR10 [13], and MedMNIST [14]. We manipulate the dataset to simulate common non-IID settings and show that performance is often related to the number of epochs performed locally in each round of the FL algorithm. The main contributions of this work are: * we provide a benchmark datasets for five different non-IID settings (see Section 4): quantity skew, three versions of prior shift and covariate shift; * we provide results of extensive experiments on FedAvg and FedCurv on the considered non-IID settings as well as on the IID case. 
To the best of our knowledge, this is the first work providing empirical evidence on the behaviour of FedCurv in common non-IID cases; * we discuss the results under several points of view, thus providing a faceted understanding of the behaviour of the algorithms; * as a result of the experimentation, we show that the number of epochs per round can be pivotal in obtaining good performances while limiting the communication costs. The rest of the paper is organized as follows. In Section 2, we present the recent related works. Section 3 reviews the FL algorithms tackling non-iidness. In Section 5 are given some of the most typical non-IID partition strategies, and experimental results are shown and discussed. Finally, in Section 6, conclusions are drawn. ## 2 Related Work Dealing with non-IID data represents one of the fundamental challenges in FL. [4] surveys this issue providing several non-IID data cases. To the best of our knowledge, there are only a few benchmarks for FL in the non-IID scenario. One work [15] tests only FedAvg on MNIST and CIFAR10 in the quantity skew and labels quantity skew settings. Another work [9] considers quantity skew, labels quantity skew and three types of feature distribution skew, i.e. noise-based, synthetic and real-world feature imbalance. It tests FedAvg, FedProx [6], FedNova [7] and SCAFFOLD [8] on 9 public image datasets, including MNIST and CIFAR10. This work points out how non-iidness represents a challenge for FL algorithms in terms of accuracy, showing how none of those algorithms outperforms others in all the tested cases. A more recent work [16] proposes three FL algorithms based on distributed boosting strategies rather than gradient-descent based methods: _DistBoost.F_ and _PreWeak.F_, that are the adaptation to the FL setting of the distributed boosting algorithms DistBoost [17] and PreWeak [18], and _AdaBoost.F_, that is a novel algorithm developed for FL. The authors performed experiments on ten tabular UCI datasets, comparing their proposed algorithms in the same non-iid settings of this paper. This work shows that in most cases, the aggregated model outperforms the models that could have been learned on local data, but that non-iidness can hurt the efficiency of the federation. Apart from these works, there are some benchmarks for FL that do not focus on non-IID data issues. LEAF [19] is a modular benchmarking framework for FL, which includes a set of both image and tabular datasets, an evaluation framework and a set of reference implementations. Some open-source datasets they provided are _FEMNIST_ (Federated Extended MNIST), which is built by partitioning the data in Extended MNIST [20] based on the writer of the digit/character, and _Shakespeare_, a dataset built from _The Complete Works of William Shakespeare_[21], where each speaking role in each play is considered a different device. The Open Application Repository for Federated Learning (OARF) [22] is a benchmark for FL mimicking realistic scenarios with publicly available datasets as different data silos in image, text and structured data. FedML [23] is an open research library and benchmark developed for fair algorithm performance comparison. It provides algorithm development of FedAvg, FedNova and FedOpt [24]. ## 3 FL Algorithms on non-IID data In a non-IID setting, the local data of a single client can not represent the overall distribution of the federation. In such a situation, the local models drift apart because local optima may be far from the global optima. 
This results in a reduction of accuracy of both local models, which are driven towards the local optima, and of the aggregated model because local updates may be large, in particular when each round has a lot of local epochs. In such conditions, the aggregated model can have a worse accuracy than models learnt using only local data. Several approaches have already been proposed to tackle the problem of non-iidness. The most popular algorithm for FL in the non-IID settings are FedAvg [3], FedCurv [10], FedProx [6], FedNova [7] and SCAFFOLD [8]. The main features of these algorithms are described below. **FedAvg:** Federated Averaging [3] has been the first algorithm to be proposed for the FL setting. Basically, the shared model parameters are initialized by the aggregator at the beginning of the training. Afterwards, each client trains a local copy of the model on its local data and sends the result to the server, which sets the shared model to be the (weighted) average of the received local models. Then the aggregator sends the shared model again to each client, and the process repeats. Typically in cross-device settings, the server sends the global model to a random subset of clients in order to cope with the high number of parties in the federation, which can be in the order of thousands or even more. Two parameters that can be controlled and might have a large impact on the results are the number of local training epochs per round (E) and the local batch size (B). If \(B=\infty\) and \(E=1\) the algorithm is called FedSGD [3]. With \(E>1\), the number of communication rounds can decrease; however, local models can be driven towards the local optima, leading to bad accuracy. In [5] it is shown that if the loss is strongly convex and smooth, the rate of convergence of FedAvg is \(O(1/T)\) (where \(T\) is the number of rounds) and that weight decay is a necessary condition for optimal convergence. **FedCurv:** Federated Curvature (FedCurv) [10] builds on the idea of Lifelong Learning [25] to prevent catastrophic forgetting [11] in FL. It is an adaptation for FL of Elastic Weight Consolidation (EWC) [26], an algorithm for sequentially training a model on new tasks without forgetting old ones. The basic assumption of EWC is that neural networks are sufficiently over-parameterized that there are good chances of finding an optimal solution \(B^{*}\) to task B in the neighbourhood of a solution \(A^{*}\) learned on a previous task \(A\). EWC uses the diagonal of the Fisher information matrix to choose the most important weights of the previous task. EWC adds a penalty term to the optimization function in order to force the model parameters with the higher Fisher information for task A to maintain their actual value while learning task B. FedCurv adds this EWC penalty term for minimizing the model disparity across the clients of a federation. During each round, FedCurv works just like FedAvg, but each client sends its local model together with the diagonal of Fisher's information matrix. Since this method allows for less frequent communication fewer steps are required in order to reach the desired accuracy. However, in each round, the number of parameters to be transmitted is about three times that in FedAvg. **FedProx:** FedProx [6] is a re-parametrization of FedAvg aiming to tackle two key challenges in FL: systems heterogeneity (variability of the devices) and statistical heterogeneity (non-iidness). 
FedProx alleviates systems heterogeneity starting from the idea of allowing a variable amount of work across devices based on their resource constraints. In the case of statistical heterogeneity, local models may drift apart. FedProx restricts the size of local updates by adding an L2 regularization term in the cost function compelling the local model to stay close to the global model. L2 regularization is controlled by an additional term \(\mu\) that needs to be tuned carefully. FedProx does not introduce communication overhead, but it increases the computation overhead of the devices. **FedNova:** Federated Normalized Averaging (FedNova) [7] proposes a slightly modified version of FedAvg to overcome the problem of objective inconsistency while preserving fast error convergence. The paradigm is the same as FedAvg, but FedNova normalizes and scales local updates of the parties based on their number of local steps (mini-batches of local training). **SCAFFOLD:** Stochastic Controlled Averaging for federated learning (SCAFFOLD) [8] proposes a solution for client-drift. SCAFFOLD requires significantly fewer communication rounds, however, it doubles the communication size per round due to the additional control variates. ## 4 Non-IID data An existing study [4] identifies five different non-IID scenarios: 1) prior shift (label distribution skew), 2) covariate shift (feature distribution skew), 3) same label but different features, 4) same features but different labels, and 5) quantity skew. Many of these cases (1, 2 and 5) have already been tested in [6]. In our work, we focus on the iid case (uniform data distribution) and on five types of non-iidness: quantity skew, three versions of prior shift and covariate shift (feature distribution skew). These scenarios have already been tested in [16] with tabular datasets and non-gradient-descent methods. We briefly describe the adopted settings. **Quantity Skew:** a collection of datasets exhibits a quantity skew if different clients (datasets) can hold vastly different amounts of data. In this case, the sampling distribution may still be consistent among the parties; it is, however, interesting to see the effect of the quantity imbalance on the convergence of the FL algorithm. In this case the proportion of example to be assigned to client \(x\in\{0\dots N-1\}\) is determined by the Power distribution \(P(x;\alpha)=\alpha x^{\alpha-1}\). Here \(\alpha>0\) is a parameter affecting the distribution as it follows: if \(\alpha=1\) then examples are uniformly distributed across clients; if \(\alpha=2\) examples are "linearly" distributed across users (our case); \(if\alpha\geq 3\) the examples are power law distributed; in general, as \(\alpha\rightarrow\infty\), it is increasingly true that a single user obtains most of the examples, and the remaining users only get very few examples. **Prior shift:** let us consider the conjunct distribution of data and labels \(P(x_{i},y_{i})\ =P(x_{i}|y_{i})P(y_{i})\), a collection of datasets exhibit a prior shift if the labels' prior \(P(y_{i})\) varies across clients. This is a common scenario in many real-world FL applications. For instance, labels' prior can vary when devices are spread across different world regions, and certain elements are present only in some countries. The types of prior shift implemented are the following: **Labels quantity skew:** this is the simplest version of prior shift. 
Labels are partitioned between clients and each client receives samples that belongs to only a fixed number of classes. In our experiments, we fixed the number of classes per client to 2. **Dirichlet labels skew:**: in this case the number of examples of a given class that each client receives is distributed according to samples extracted from the Dirichlet distribution. More specifically, for each class \(k\), we extract an \(N\) dimensional vector \(p_{k}\) from \(\text{Dir}_{N}([\beta,\dots,\beta])\) and assign a proportion of \(p_{k,j}\) samples of class \(k\) to client \(j\). The N-dimensional vector of \(\beta\)s is the _concentration parameter_. The lower \(\beta\), the greater the imbalance. In our experiments we fixed \(\beta=0.5\) as in [16, 6]. **Pathological labels skew:**: this skewness was designed in [3]. First of all, data are sorted by label and divided into shards of size \(N\cdot shards\_per\_client\). Each client owns shards_per_client shards. We fixed \(shards\_per\_client=2\). Basically, in this case, most parties will only have a limited number of labels. **Covariate shift:**: in covariate shift, also known as distribution skew, the marginal distributions \(P(x_{i})\) may vary across clients, even if \(P(y|x)\) is shared. It is common to encounter covariate shift in several fields of machine learning. For example, in speech recognition, a model trained in a specific language can have trouble when it encounters particular accents or dialects of that language; in image recognition, a model trained on sunny days may not be accurate on cloudy days due to an unrecognizable feature distribution. To simulate covariate shift on our datasets, we followed the procedure outlined in [16], where examples are assigned to clients according to a distribution based on the results of a Principal Component Analysis (PCA) performed on the data. ## 5 Experiments To conduct our experiments, we adopted Open Federated Learning (OpenFL) [27], the new framework for FL developed by the Intel Internet of Things Group (IOTG) and Intel Labs. OpenFL is a Python 3 library for FL that is Deep Learning framework-agnostic. Training of statistical models may be done with any deep learning framework, such as TensorFlow or PyTorch, via a plugin mechanism. OpenFL employs an Aggregator-Collaborator workflow: a Collaborator is a client in the federation that trains a global model on a local dataset, while the Aggregator aggregates the model updates received from Collaborators. All the experiments were computed in a distributed environment encompassing \(N=10\) collaborators. Each collaborator is run on an Intel Xeon CPU (8 cores per CPU). **Dataset:** We compared FedAvg and FedCurv on MNIST [12], CIFAR10 [13] and MedMNIST [14]. CIFAR10 and MNIST are default benchmarks in NN literature; MedMNIST has been selected due to the increasing interest of DL and FL in medical domains. In particular, we used OrganAMNIST, a 2D dataset on Abdominal CT contained in MedMNIST. The details of the datasets are summarized in Table I. **Preprocessing:** 2D datasets MNIST and MedMNIST were kept at 28x28, while CIFAR10 was rescaled to 64x64. As for data augmentation, we performed random horizontal flips and angle rotation of 10deg with a probability of 80%. All the datasets were normalized according to their mean and standard deviation. 
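To make the partitioning strategies of Section 4 concrete, the following Python sketch shows one possible implementation of the quantity skew (client sizes driven by the Power distribution with \(\alpha=2\)) and of the Dirichlet labels skew (per-class proportions drawn from \(\text{Dir}(\beta=0.5)\)). It is an illustration of the strategies described above, not the exact code used to generate the released datasets; the function names and the handling of rounding leftovers are our own choices.

```python
import numpy as np

def quantity_skew(num_samples, num_clients=10, alpha=2.0, seed=0):
    """Split sample indices so that client sizes follow a Power(alpha) profile."""
    rng = np.random.default_rng(seed)
    weights = rng.power(alpha, num_clients)        # draws from P(x; alpha) = alpha * x**(alpha-1)
    weights = weights / weights.sum()
    idx = rng.permutation(num_samples)
    sizes = (weights * num_samples).astype(int)
    splits = np.cumsum(sizes)[:-1]                 # last client absorbs rounding leftovers
    return np.split(idx, splits)

def dirichlet_label_skew(labels, num_clients=10, beta=0.5, seed=0):
    """For each class k, distribute its samples to clients with proportions ~ Dir(beta)."""
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(num_clients)]
    for k in np.unique(labels):
        idx_k = rng.permutation(np.where(labels == k)[0])
        p_k = rng.dirichlet([beta] * num_clients)  # concentration parameter beta
        splits = (np.cumsum(p_k) * len(idx_k)).astype(int)[:-1]
        for j, part in enumerate(np.split(idx_k, splits)):
            client_idx[j].extend(part.tolist())
    return [np.array(c) for c in client_idx]
```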
**Model:** We employed ResNet-18 [28] as the classification model, trained by minimizing the cross-entropy loss with mini-batch gradient descent using the Adam optimizer with learning rate \(10^{-4}\). The local batch size was 64. The algorithms have been compared using the standard classification metric of top-1 accuracy. Results are reported in Tables 2 to 6, with Table 2 reporting a non-federated baseline, i.e., the typical AI scenario where the data are aggregated in a single device. The remaining tables report the results of the two tested federated algorithms in the different non-IID settings.

\begin{table} \begin{tabular}{l c c c} \hline \hline **Dataset** & **Train samples** & **Test samples** & **\# labels** \\ \hline MNIST & 60.000 & 10.000 & 10 \\ \hline CIFAR10 & 50.000 & 10.000 & 10 \\ \hline MedMNIST & 34.581 & 17.778 & 11 \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of the datasets.

\begin{table} \begin{tabular}{l c c} \hline \hline **Datasets** & **10 epochs** & **100 epochs** \\ \hline MNIST & 98.67\(\%\) & 99.12\(\%\) \\ \hline CIFAR10 & 64.03\(\%\) & 71.25\(\%\) \\ \hline MedMNIST & 84.85\(\%\) & 89.13\(\%\) \\ \hline \hline \end{tabular} \end{table} Table 2: Classification accuracy in the non-federated setting.

### Discussion

The results can be analyzed from different points of view, showing interesting insights:

**Epochs per round perspective**: It can be noted that the accuracy increases as the number of epochs per round \(E\) increases. This means that local optima may be close to the global optima, and so, with the same amount of rounds, training for more epochs can be beneficial. This pattern is clearly shown for each setting and algorithm, except for FedAvg on MedMNIST in the uniform setting (Table 3) and in the case of the Labels Quantity Skew (Table 5).

**Distribution perspective**: prior shift (see Table 5) is the most challenging non-IID setting. In particular, among the different types of prior shift analyzed, the labels quantity skew is the most detrimental. Both FedAvg and FedCurv perform worse on the labels quantity skew. By design, the pathological labels skew is the most similar to the labels quantity skew because, in both cases, each client has examples that belong to only a small subset of the possible classes. Indeed, the pathological labels skew is the second hardest scenario after the labels quantity skew. The Dirichlet labels skew is the least challenging prior shift case. FedAvg and FedCurv perform well on the quantity skew setting (see Table 4). This is reasonable because both algorithms adopt a weighted averaging of the parameters, and the distribution of the examples (except for the quantity of the examples) is uniform among parties, which is the easiest setting and the one that is most similar to the non-federated scenario. The covariate shift (see Table 6) presents only a low loss of accuracy.

**Algorithm perspective**: although FedCurv was designed to tackle non-IID data in FL, it obtains better results than FedAvg in the uniform (Table 3) and covariate shift (Table 6) settings. FedAvg performs well on quantity skew and prior shift scenarios, confirming that it can work on non-IID data [5]. It is interesting to note that most of the time, FedCurv wins after 100 rounds, showing that it may require more training steps to converge.
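As a rough PyTorch-style sketch of the two aggregation strategies compared here (not the OpenFL implementation used in the experiments): FedAvg averages the collaborators' weights in proportion to their local sample counts, while FedCurv additionally penalizes deviation from the other clients' previous parameters, weighted by their diagonal Fisher information. The penalty coefficient `lam` and the bookkeeping of the Fisher terms are simplified assumptions.

```python
import torch

def fedavg_aggregate(state_dicts, num_samples):
    """Weighted average of client models (FedAvg)."""
    total = float(sum(num_samples))
    # Buffers such as BatchNorm counters are treated as floats here for simplicity.
    agg = {k: torch.zeros_like(v, dtype=torch.float32) for k, v in state_dicts[0].items()}
    for sd, n in zip(state_dicts, num_samples):
        for k, v in sd.items():
            agg[k] += (n / total) * v
    return agg

def fedcurv_penalty(model, others_params, others_fisher, lam=1.0):
    """Simplified EWC-style FedCurv term added to the local loss: for every other
    client j, sum_k F_jk * (theta_k - theta_jk)^2, with F_j the diagonal Fisher information."""
    penalty = 0.0
    for p_j, f_j in zip(others_params, others_fisher):
        for name, p in model.named_parameters():
            penalty = penalty + (f_j[name] * (p - p_j[name]) ** 2).sum()
    return lam * penalty
```

In the experiments above, each collaborator would minimize the cross-entropy loss (plus this penalty in the FedCurv case) with Adam, learning rate \(10^{-4}\) and batch size 64, before sending its updated weights, and its diagonal Fisher estimate, back to the aggregator.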
\begin{table} \begin{tabular}{c c c c c c} \hline \hline & & \multicolumn{2}{c}{**10 rounds**} & \multicolumn{2}{c}{**100 rounds**} \\ \cline{3-6} **Datasets** & **Epochs** & **FedAvg** & **FedCurv** & **FedAvg** & **FedCurv** \\ \hline \multirow{4}{*}{MNIST} & 1 & 96.33\(\%\) & **96.72\(\%\)** & 96.84\(\%\) & **97.24\(\%\)** \\ & 10 & 99.07\(\%\) & **99.15\(\%\)** & 99.37\(\%\) & **99.40\(\%\)** \\ & 30 & **99.48\(\%\)** & 99.38\(\%\) & **99.49\(\%\)** & 99.42\(\%\) \\ \hline \multirow{4}{*}{CIFAR10} & 1 & **56.24\(\%\)** & 55.13\(\%\) & **56.10\(\%\)** & 54.73\(\%\) \\ & 10 & **70.49\(\%\)** & 70.35\(\%\) & **74.67\(\%\)** & 73.46\(\%\) \\ & 30 & **74.67\(\%\)** & 73.42\(\%\) & **76.91\(\%\)** & 76.24\(\%\) \\ \hline \multirow{4}{*}{MedMNIST} & 1 & **80.42\(\%\)** & 79.44\(\%\) & 83.24\(\%\) & **84.28\(\%\)** \\ & 10 & **87.91\(\%\)** & 87.23\(\%\) & **90.24\(\%\)** & 88.96\(\%\) \\ \cline{1-1} & 30 & 89.08\(\%\) & **89.19\(\%\)** & 89.88\(\%\) & **90.31\(\%\)** \\ \hline \# best performance & & 6 & 3 & 5 & 4 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison between FedAvg and FedCurv in the quantity skew setting. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & & \multicolumn{2}{c}{**10 rounds**} & \multicolumn{2}{c}{**100 rounds**} \\ \cline{3-6} **Datasets** & **Epochs** & **FedAvg** & **FedCurv** & **FedAvg** & **FedCurv** \\ \hline \multirow{4}{*}{MNIST} & 1 & 97.16\(\%\) & **97.17\(\%\)** & **97.66\(\%\)** & 97.53\(\%\) \\ & 10 & 99.05\(\%\) & **99.07\(\%\)** & **99.35\(\%\)** & 99.33\(\%\) \\ & 30 & 99.28\(\%\) & **99.33\(\%\)** & 99.47\(\%\) & **99.55\(\%\)** \\ \hline \multirow{4}{*}{CIFAR10} & 1 & **56.34\(\%\)** & 56.13\(\%\) & **54.89\(\%\)** & 54.78\(\%\) \\ & 10 & **70.95\(\%\)** & 70.40\(\%\) & 74.03\(\%\) & **75.57\(\%\)** \\ & 30 & 74.05\(\%\) & **74.07\(\%\)** & **78.89\(\%\)** & 78.57\(\%\) \\ \hline \multirow{4}{*}{MedMNIST} & 1 & 68.98\(\%\) & **79.70\(\%\)** & 72.42\(\%\) & **83.50\(\%\)** \\ & 10 & 68.19\(\%\) & **86.90\(\%\)** & 72.96\(\%\) & **89.49\(\%\)** \\ \cline{1-1} & 30 & 46.00\(\%\) & **88.77\(\%\)** & 71.16\(\%\) & **90.23\(\%\)** \\ \hline \# best performance & & 2 & 7 & 4 & 5 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison between FedAvg and FedCurv in the uniform setting. **Communication perspective**: it seems that, with the same amount of epochs, less communication achieves better results. For example, in each setting (apart from the labels quantity skew case), after ten rounds of ten epochs, i.e. 100 epochs, both FedAvg and FedCurv have better accuracy than 100 rounds of one epoch. This means that, perhaps counter-intuitively, training locally before performing aggregation can boost the model's accuracy. This seems to indicate that pursuing local optimizations can lead to better approximations of the local optima. Why this is the case is an interesting avenue for future investigation. ## 6 Conclusions In this paper, we experimented with two federated Learning algorithms in five different non-IID settings. In our experiments, neither of the two algorithms outperforms the other in all the partitioning strategies. However, somewhat unexpectedly, FedAvg produced better models in a majority of non-IID settings despite competing with an algorithm that was explicitly developed to improve in this scenario. Interestingly, both algorithms seem to perform better when the number of epochs per round is increased (which also has the benefit of reducing the communication cost). 
This is, to the best of our knowledge, a new observation, and we aim to \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & & & \multicolumn{2}{c}{**10 rounds**} & \multicolumn{2}{c}{**100 rounds**} \\ \cline{3-6} **Category** & **Datasets** & **Epochs** & **FedAvg** & **FedCurv** & **FedAvg** & **FedCurv** \\ \hline \multirow{6}{*}{Labels Quantity Skew} & \multirow{3}{*}{CIFAR10} & 1 & **41.36\(\%\)** & 39.93\(\%\) & **47.60\(\%\)** & 37.06\(\%\) \\ & & 10 & **45.76\(\%\)** & 39.77\(\%\) & **57.65\(\%\)** & 46.08\(\%\) \\ & & 30 & **41.48\(\%\)** & 26.91\(\%\) & **62.52\(\%\)** & 39.65\(\%\) \\ \cline{2-6} & \multirow{3}{*}{MedMNIST} & 1 & **53.64\(\%\)** & 48.75\(\%\) & **61.82\(\%\)** & 54.01\(\%\) \\ & & 10 & **46.03\(\%\)** & 38.98\(\%\) & 60.03\(\%\) & **65.68\(\%\)** \\ & & 30 & 37.50\(\%\) & **53.90\(\%\)** & **58.12\(\%\)** & 56.65\(\%\) \\ \hline \multirow{6}{*}{Dirichlet Labels Skew} & \multirow{3}{*}{CIFAR10} & 1 & **48.68\(\%\)** & 47.70\(\%\) & **48.50\(\%\)** & 48.37\(\%\) \\ & & 10 & 62.39\(\%\) & **62.77\(\%\)** & 70.44\(\%\) & **71.50\(\%\)** \\ \cline{1-1} & & 30 & **67.10\(\%\)** & 67.01\(\%\) & **75.54\(\%\)** & 74.83\(\%\) \\ \cline{1-1} \cline{2-6} & \multirow{3}{*}{MedMNIST} & 1 & **76.27\(\%\)** & 76.25\(\%\) & **81.62\(\%\)** & 81.27\(\%\) \\ & & 10 & **85.54\(\%\)** & 84.92\(\%\) & 88.74\(\%\) & **89.40\(\%\)** \\ & & 30 & 86.92\(\%\) & **86.93\(\%\)** & 90.49\(\%\) & **90.64\(\%\)** \\ \hline \multirow{6}{*}{Pathological Labels Skew} & \multirow{3}{*}{CIFAR10} & 1 & **40.52\(\%\)** & 35.20\(\%\) & **47.42\(\%\)** & 45.60\(\%\) \\ & & 10 & 53.52\(\%\) & **60.57\(\%\)** & 64.05\(\%\) & **64.33\(\%\)** \\ \cline{1-1} & & 30 & 48.09\(\%\) & **59.62\(\%\)** & 61.58\(\%\) & **67.09\(\%\)** \\ \cline{1-1} \cline{2-6} & \multirow{3}{*}{MedMNIST} & 1 & **64.42\(\%\)** & 60.93\(\%\) & **72.27\(\%\)** & 70.52\(\%\) \\ \cline{1-1} & & 10 & **65.83\(\%\)** & 57.13\(\%\) & **78.81\(\%\)** & 71.23\(\%\) \\ \cline{1-1} & & 30 & **80.36\(\%\)** & 74.65\(\%\) & 83.41\(\%\) & **84.95\(\%\)** \\ \hline \multicolumn{6}{c}{\# best performance} & \multicolumn{2}{c}{13} & \multicolumn{2}{c}{5} & \multicolumn{2}{c}{11} & \multicolumn{2}{c}{7} \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison between FedAvg and FedCurv in the prior shift setting. investigate it in the future. Among the datasets we tested, the ones implementing the quantity and pathological labels skews are those posing the hardest challenges to the algorithms. Also, as expected, the quantity skew appears to be the less challenging type of skew. In future work, we aim to collect further datasets and expand the number of FL algorithms we test to provide a comprehensive picture of the state of the art of FL in non-IID settings. ## Acknowledgements This was has been partially supported by the _European PILOT_ project, which has received funding from the EuroHPC JU under grant agreement No.101034126. The JU receives support from the European Union's Horizon 2020 research and innovation programme and Spain, Italy, Switzerland, Germany, France, Greece, Sweden, Croatia and Turkey.
2309.06814
Comparative Analysis of Contextual Relation Extraction based on Deep Learning Models
Contextual Relation Extraction (CRE) is mainly used for constructing a knowledge graph with a help of ontology. It performs various tasks such as semantic search, query answering, and textual entailment. Relation extraction identifies the entities from raw texts and the relations among them. An efficient and accurate CRE system is essential for creating domain knowledge in the biomedical industry. Existing Machine Learning and Natural Language Processing (NLP) techniques are not suitable to predict complex relations from sentences that consist of more than two relations and unspecified entities efficiently. In this work, deep learning techniques have been used to identify the appropriate semantic relation based on the context from multiple sentences. Even though various machine learning models have been used for relation extraction, they provide better results only for binary relations, i.e., relations occurred exactly between the two entities in a sentence. Machine learning models are not suited for complex sentences that consist of the words that have various meanings. To address these issues, hybrid deep learning models have been used to extract the relations from complex sentence effectively. This paper explores the analysis of various deep learning models that are used for relation extraction.
R. Priyadharshini, G. Jeyakodi, P. Shanthi Bala
2023-09-13T09:05:09Z
http://arxiv.org/abs/2309.06814v1
# Comparative Analysis of Contextual Relation Extraction based on Deep Learning Models ###### Abstract Contextual Relation Extraction (CRE) is mainly used for constructing a knowledge graph with a help of ontology. It performs various tasks such as semantic search, query answering, and textual entailment. Relation extraction identifies the entities from raw texts and the relations among them. An efficient and accurate CRE system is essential for creating domain knowledge in the biomedical industry. Existing Machine Learning and Natural Language Processing (NLP) techniques are not suitable to predict complex relations from sentences that consist of more than two relations and unspecified entities efficiently. In this work, deep learning techniques have been used to identify the appropriate semantic relation based on the context from multiple sentences. Even though various machine learning models have been used for relation extraction, they provide better results only for binary relations, i.e., relations occurred exactly between the two entities in a sentence. Machine learning models are not suited for complex sentences that consist of the words that have various meanings. To address these issues, hybrid deep learning models have been used to extract the relations from complex sentence effectively. This paper explores the analysis of various deep learning models that are used for relation extraction. Contextual Relation Extraction Word Embeddings BERT Deep Learning Model ## 1 Introduction Contextual Relation Extraction (CRE) helps to understand the meaning of the entities and their relationship in a sentence. It can improve the performance of Natural Language Processing tasks such as information retrieval, question answering, and semantic search [1]. Named Entity Recognition aims to automatically identify and classify objects like people, products, organizations, locations, etc. The process of identifying the terms in a text and arranging in an appropriate group is a source for named entity recognition and a key component for text analysis. The analysis of common syntactic patterns is an important factor of NER. Many deep learning models solve entity recognition applications such as indexing documents, finding relationship among entities, and building an ontology [2-4]. The combination of NER and CRE can provide a rich understanding of the text by identifying both the entities and their relationships based on the context. The joint modeling of entity recognition and relation classification attained more focus recently [5]. Additionally, these end-to-end models have generated massively to improve the results. Information Extraction (IE) begins with the creation of knowledge graphs that transforms unformatted text into formatted data. Entity extraction and Relation extraction are the two subtasks of IE. Relation extraction is ongoing research for the recent years. Neural networks enabled technology is used to efficiently classify entities and relation. Natural Language Understanding (NLU) represents the associated relationship among the existing objects and a distinct relationship between two or more entities. Entity relationship is the basis for automatically creating a knowledge graph. Relation extraction instantly detect and categorizes the entities from the text during semantic relationship extraction. Example of binary and n-ary relation are shown in Figure.1. 
Binary relation consists of two entities and one relation and n-ary relation consists of more that two entities and many relations. Binary relation extraction models may have trouble in handling larger sentence and take lot of time for processing. Some of the common issues in binary relation extraction are ambiguity, incomplete data and noise in the text. To find and understand the connections between different established categories, RE makes use of a range of technologies. Recent joint extraction models work on fixed word vector format for word embedding that are unsuitable for a word that has multiple semantic meanings. To address this problem, Bo Qiao et al. developed a dynamic fine-tuning method to overcome the issues in static word embedding using the LSTM-LSTM-Bias method proposed by Zheng et al [6]. Bidirectional Encoder Representations from Transformers (BERT), is a machine language pre- training model to represent language. BERT uses joint conditions to compare each word context in forward and backward directions. The BERT model can be improved by adding a single additional output layer for tasks such as question answering and language inference. It does not require major changes in the architecture. Devlin et al. proposed the significance of bidirectional pre-training for language representations to eliminate the requirement of multiple task-specific architectures. The BERT model is based on the fine tune representation that outperforms multiple task-specific architectures and reaches cutting-edge performance on a variety of task levels, including token and sentence levels. The pre-training and fine-tuning steps in BERT architecture help to understand the semantic meaning of the words effectively. Before solving the joint extraction task, it pre-trains the BERT model using another corpus. BERT can be used for a wide range of linguistic activities and primarily adds a thin layer to the basic model [7]. Figure 2 shows the categorization of various BERT (Fine Tuning) based applications. ## 2 Related Work In this section, various models for Relation Extraction (RE) are explored. Relation extraction is used to understand the relationships among the various entities in an unlabeled text. There are various methods to perform relation extraction, from a simple string extraction to automated models. Figure 1: Example of binary and n-ary relation. Figure 2: BERT (Fine Tuning) based applications. ### Models for Relation Extraction Recently, many works such as document-level, pipelined and joint model is proposed to solve the Relation Extraction tasks. * Pipelined Method: The pipeline method treats NER and Relation Categorization as a distinct operation. Zexuan et al. suggested the new state-of-the-art for entity and RE using a straightforward pipelined strategy, and they obtain a relative improvement over the earlier joint models using a similar pre-trained encoder [8]. * Joint model: Joint extraction model recognizes entities and relations simultaneously and these models extract entities and relations using a single task. Feature-based structured systems compose the majority of joint techniques. Zheng et al suggested a tagging scheme to convert joint extraction of entities and relations [9]. * Document-level Relation Extraction Models: When compared to sentence-level Relation Extraction, document level Relation Extraction is a complex process. Because document may contain entity pairs with multiple relationships. 
The Sentence Importance Estimation and Focusing (SIEF) framework was presented for document-level relation extraction; across various disciplines, the SIEF framework enhances the performance of basic models [10]. Zeng et al. proposed an architecture for document-level relation extraction based on intra- and inter-sequential reasoning techniques [11]. ### Contextual Word Embeddings for Relation Extraction Word embeddings are a method for finding similarities between words in a corpus by predicting the co-occurrence of words in a text using a model. When it was shown that word embeddings could be used to find analogies, they became well known in the field of automated text analysis. Table 1 illustrates various word embedding techniques. Contextual embeddings represent each word based on its context, capture word usage across a range of situations, and encode cross-linguistic knowledge. Contextual embeddings, such as ELMo and BERT, perform significantly better than generic word representations. ELMo, built on a bidirectional language model, combines the representations from its intermediate layers according to the task at hand. However, when ELMo combines the representations of the forward and backward LSTMs, the interactions between the left and right contexts are not taken into consideration [12]. BERT is pre-trained with two objectives: Masked Language Modeling (MLM), which randomly masks some of the tokens in the input sequence and uses a Transformer encoder to attend to bi-directional context, and Next Sentence Prediction (NSP). RE with Distant Supervision and Transformers, suggested by Despina et al., predicts better embeddings by fine-tuning BERT [13]. ELMo and BERT perform better than Word2Vec, and offer ground-breaking performance in a range of NLP applications. Given two input sentences, Next Sentence Prediction determines whether the second sentence follows the first one, which helps BERT facilitate tasks that require sentence-pair analysis. \begin{table} \begin{tabular}{|p{42.7pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline \hline S. No & Word Embeddings & Explanation & Feature \\ \hline 1 & TF-IDF & A statistical technique for determining a word’s relevance to the corpus of text. It does not capture word associations with semantic meaning. & Performs well on retrieving information and extracting keywords from documents. \\ 2 & Word2Vec & CBOW and Skip-gram architectures based on neural networks are superior at capturing semantic information. & Better in tasks that involve word analogies and named-entity recognition; commonly used in semantic analysis tasks. \\ 3 & GloVe & Global word-word co-occurrence-based matrix factorization. It resolves Word2Vec’s local context issues. & Suitable for smaller and larger datasets. \\ 4 & BERT & High-quality contextual information can be captured via a transformer-based attention method. & Used in translation services and question answering; the Google Search engine uses it to interpret search keywords. \\ \hline \hline \end{tabular} \end{table} Table 1: Word Embedding Techniques
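To make the contrast between static and contextual embeddings concrete, the short sketch below extracts BERT token vectors for the same word in two different sentences and compares them. It relies on the HuggingFace `transformers` library, which is an assumption on our part rather than a toolkit prescribed by the surveyed works; the model name and example sentences are likewise illustrative.

```python
# Minimal sketch: contextual behaviour of BERT embeddings vs. static vectors.
# Assumes the `transformers` and `torch` packages are installed.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the BERT hidden state of the given (single-piece) word."""
    encoded = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**encoded).last_hidden_state[0]          # (seq_len, 768)
    tokens = tokenizer.convert_ids_to_tokens(encoded["input_ids"][0].tolist())
    return hidden[tokens.index(word)]

v1 = word_vector("the bank approved the loan", "bank")
v2 = word_vector("they walked along the river bank", "bank")
similarity = torch.cosine_similarity(v1, v2, dim=0)
print(f"cosine similarity of 'bank' across contexts: {similarity.item():.3f}")
```

With a static embedding such as Word2Vec the two vectors would be identical; BERT typically reports a similarity well below 1, reflecting that the same surface form receives different vectors depending on its context.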
### Datasets for Relation Extraction Several datasets for relation extraction have been developed recently to enhance relation extraction systems. Two examples of RE datasets created through human annotation with relation types are SemEval-2010 Task 8 and ACE05. The TACRED dataset was built through crowdsourcing to meet the demand for a large-scale dataset. DocRED was developed to advance document-level RE research. FewRel includes ten thousand annotated examples and more than one hundred relations. The issues of few-shot relation extraction have been addressed with the development of FewRel and FewRel 2.0. HacRED consists of 65,225 relational facts identified from 9,231 documents [14-18]. ## 3 Analysis of Deep Learning Models Deep learning uses artificial neural networks with representation learning, which can be supervised, semi-supervised, or unsupervised. The rapid growth and use of Artificial Intelligence based systems have raised concerns regarding understandability [19]. Rahman et al. constructed an artificial neural network model to effectively forecast solar radiation [20]. Representation learning helps to reduce the data dimensionality, which simplifies the identification of patterns and anomalies. A neural network instructs computers to scrutinize data in a manner similar to the human brain. The term "deep" refers to the number of hidden layers: deep neural networks can have as many as 150 hidden layers, compared to the two or three layers that traditional neural networks normally have. The structure of a deep neural network is depicted in Figure 3. Deep learning models learn classification tasks that take input from various sources such as images, text, and sounds. They can also attain high accuracy, occasionally even surpassing human performance. Large labeled datasets and multi-layered architectures are used to train the models to learn data characteristics automatically. Deep learning can achieve high levels of accuracy when trained on huge amounts of data. There are many complex problems to solve in natural language, and for some specific natural language problems, deep learning achieves the best results. Table 2 illustrates some of the deep learning techniques that are widely used for the task of RE. A survey of existing RE models based on deep learning techniques and various datasets is presented in Table 3. ## 4 Discussion The comparison of existing relation extraction models across various techniques shows that BERT based relation extraction models provide significantly better performance than other models such as CNN, RNN, and KNN. BERT reads text input in both the left-to-right and right-to-left directions at once. Using this bidirectional capability, BERT is pretrained on two NLP tasks: Masked Language Modeling and Next Sentence Prediction. It is observed that the model can be used in various domains such as clinical, tourism, and agriculture. Table 4 shows the performance evaluation of existing relation extraction models based on deep learning techniques. Table 5 lists the performance accuracy (F1 score) of the BERT, CNN, and RNN based models on the SemEval 2010 dataset. The F1 metric is employed to estimate the deep learning model's accuracy. From the literature survey, it has been identified that the BERT-BiLSTM-CRF model achieves better results for extracting breast cancer concepts and their attributes. Even though several BERT based relation extraction models for different fields have been developed, handling overlapping relations and partially overlapping entities is still under development. Figure 3: Structure of Deep Neural Network.
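As a concrete illustration of the fine-tuning setup that the BERT-based systems in this comparison rely on, the sketch below frames sentence-level relation extraction as sequence classification with entity markers. It is a minimal example using the HuggingFace `transformers` API; the marker scheme, label set, and model name are illustrative assumptions rather than details taken from any of the surveyed papers.

```python
# Minimal sketch: BERT fine-tuning for sentence-level relation classification.
# Assumes `transformers` and `torch` are installed; labels and the entity-marker
# scheme are illustrative, not taken from a specific surveyed system.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

RELATIONS = ["no_relation", "cause-effect", "component-whole"]  # toy label set

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(RELATIONS)
)

# Entities are wrapped in marker strings so the classifier knows which pair the
# relation label refers to. A real system would typically register the markers
# via tokenizer.add_special_tokens and model.resize_token_embeddings.
sentence = "The [E1] earthquake [/E1] caused severe [E2] damage [/E2] downtown."
gold_label = RELATIONS.index("cause-effect")

inputs = tokenizer(sentence, return_tensors="pt")
labels = torch.tensor([gold_label])

# One supervised step (in practice this runs over mini-batches for several epochs).
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs = model(**inputs, labels=labels)   # cross-entropy loss on the [CLS] head
outputs.loss.backward()
optimizer.step()

predicted = outputs.logits.argmax(dim=-1).item()
print("predicted relation:", RELATIONS[predicted])
```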
\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline \hline S. No & Models & Working Principle & Benefits & Issues \\ \hline 1 & CNN [21-23] & Convolutional Neural Network has multiple layers to process and extract features. & Human supervision is not required for feature recognition. & Overfitting, exploding gradients, and class imbalance. \\ 2 & Bi-GRU [24] & Model that combines the Gated Recurrent Unit (GRU) and the bidirectional Recurrent Neural Network. & Simpler than LSTM. & Only input and forget gates. \\ 3 & LSTM [25] & Long Short-Term Memory learns and retains past knowledge over long periods. & Offers parameters such as learning rates, and input and output biases. & Overfitting. \\ 4 & CRF [26] & A discriminative model to predict contextual information. & Performs well on NLP tasks such as part-of-speech tagging and NER. & More accurate but difficult to train. \\ 5 & BiLSTM [27] & A combination of two separate RNNs. The networks access both forward and backward information. & Better predictions compared to Auto Regressive Integrated Moving Average (ARIMA). & Slower and requires more time. \\ 6 & RNN [28] & RNN has connections that form directed cycles, which allow the current phase to accept the LSTM outputs as inputs. & Remembers every piece of information through time. & Exploding gradient problem, and long-term dependency of words. \\ 7 & MLPs [29] & Made up of many layers of perceptrons with activation functions. The input and output layers are interconnected and of equal size. & Used to solve complex nonlinear problems. & Feature scaling, and computational complexity. \\ 8 & DBNs [30] & Made up of many latent, stochastic layers. Latent variables, often called hidden units, are characterized by binary values. Boltzmann machines have connections between their layers. & Powerful and learn complex patterns. Process large amounts of data very quickly. & Hardware requirements, expensive to train. \\ 9 & RBM [31] & Consists of both visible and hidden units. All hidden units are linked to all visible units. & Computationally efficient and faster than a typical Boltzmann Machine. & Hard to evaluate or simulate. \\ \hline \hline \end{tabular} \end{table} Table 2: Deep Learning Techniques \begin{table} \begin{tabular}{|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|} \hline S. No & Authors and Year & Objective & Techniques & Issues & Dataset \\ \hline 1 & Chen Gao et al 2022 [32] & Extracts the semantic mutuality between entity and relation extraction. & HBT, WDec and CasREL & Overlapping entities in the sentence cannot be resolved by this technique. & New York Times (NYT), WebNLG \\ 2 & O. A. Tarasova et al 2022 [33] & Method to extract clinical named entities from texts, combining the naive Bayes classifier with specially built filters. & Naive Bayes classifier & The result of CNER using the naive Bayes method is slightly worse. & CHEMDNER \\ 3 & T. Bai et al 2022 [34] & Segment method based on CNN to extract local semantic properties through word embedding. & SVM, KNN, CNN, and SEGATT-CNN & This model applies only to supervised methods. & Herbal-Disease and Herbal-Chemistry (HD-HC) \\ 4 & Qingbang W et al 2022 [35] & This model efficiently predicts the information and semantic context of the current text. & BERT-BiLSTM, BiLSTM-ATT & The BERT-BiLSTM network does not function well when dealing with the issue of partial entity overlap. & Food public opinion field data \\ 5 & Hailin Wang et al 2022 [36] & Supervised and distant supervision methods for Relation Extraction. & DNN, RNN and PCNN & Error propagation in supervised methods. & SemEval 2010-Task 8, ACE series and NYT+Freebase \\ \hline \end{tabular} \end{table} Table 3: Survey of existing RE models based on deep learning techniques and datasets
Relation extraction offers a wide range of applications, including information retrieval, question answering, and knowledge base construction. Creating models that can extract relationships in multilingual and cross-lingual settings is another important area of focus. Additionally, the combination of relation extraction with other NLP tasks such as named entity recognition and event extraction is expected to lead to more wide-ranging and sophisticated NLP systems. Additional context, such as syntax and pragmatics, can be considered for improving relation prediction accuracy. BERT variants such as RoBERTa, DistilBERT, and XLNet can be incorporated to enhance contextual relation prediction. ## 5 Conclusion This paper provides information on contextual relation extraction and the various techniques used for it. It presents various deep learning models that are used in different tasks such as building classification models, developing recommendation systems, and learning behavior prediction. It has been identified that BERT-based models can provide better accuracy in identifying relations based on their context from multiple sentences. Compared to other models, BERT-BiLSTM-CRF achieved 97% accuracy with limited information. In future work, the problem of overlapping relations can be addressed to improve prediction accuracy.
2307.00077
DECOR: Degree-Corrected Social Graph Refinement for Fake News Detection
Recent efforts in fake news detection have witnessed a surge of interest in using graph neural networks (GNNs) to exploit rich social context. Existing studies generally leverage fixed graph structures, assuming that the graphs accurately represent the related social engagements. However, edge noise remains a critical challenge in real-world graphs, as training on suboptimal structures can severely limit the expressiveness of GNNs. Despite initial efforts in graph structure learning (GSL), prior works often leverage node features to update edge weights, resulting in heavy computational costs that hinder the methods' applicability to large-scale social graphs. In this work, we approach the fake news detection problem with a novel aspect of social graph refinement. We find that the degrees of news article nodes exhibit distinctive patterns, which are indicative of news veracity. Guided by this, we propose DECOR, a novel application of Degree-Corrected Stochastic Blockmodels to the fake news detection problem. Specifically, we encapsulate our empirical observations into a lightweight social graph refinement component that iteratively updates the edge weights via a learnable degree correction mask, which allows for joint optimization with a GNN-based detector. Extensive experiments on two real-world benchmarks validate the effectiveness and efficiency of DECOR.
Jiaying Wu, Bryan Hooi
2023-06-30T18:31:48Z
http://arxiv.org/abs/2307.00077v1
# DECOR: Degree-Corrected Social Graph Refinement for Fake News Detection ###### Abstract. Recent efforts in fake news detection have witnessed a surge of interest in using graph neural networks (GNNs) to exploit rich social context. Existing studies generally leverage fixed graph structures, assuming that the graphs accurately represent the related social engagements. However, _edge noise_ remains a critical challenge in real-world graphs, as training on suboptimal structures can severely limit the expressiveness of GNNs. Despite initial efforts in graph structure learning (GSL), prior works often leverage node features to update edge weights, resulting in heavy computational costs that hinder the methods' applicability to large-scale social graphs. In this work, we approach the fake news detection problem with a novel aspect of _social graph refinement_. We find that the _degrees_ of news article nodes exhibit distinctive patterns, which are indicative of news veracity. Guided by this, we propose DECOR, a novel application of Degree-Corrected Stochastic Blockmodels to the fake news detection problem. Specifically, we encapsulate our empirical observations into a lightweight social graph refinement component that iteratively updates the edge weights via a learnable degree correction mask, which allows for joint optimization with a GNN-based detector. Extensive experiments on two real-world benchmarks validate the effectiveness and efficiency of DECOR. 1 Footnote 1: Data and code are available at: [https://github.com/jiayingwu19/DECOR](https://github.com/jiayingwu19/DECOR). Fake News; Graph Neural Networks; Social Network
Existing approaches generally treat the constructed graph as a high-fidelity representation of the social context, the structure of which is kept unchanged throughout model training. However, we find that _noisy edges_ remain an inevitable challenge for learning on social graphs. A prominent cause is that influential news articles tend to share a large number of common readers (i.e. social users), regardless of their veracity labels. In Figure 1, we illustrate one such case via visualizing the adjacency matrix of a graph constructed with news articles, termed the news engagement graph (detailed formulation relegated to Section 3.2). Here, the edge weights are positively correlated with the number of common readers, and larger edge weights imply closer connections between news articles. Figure 1 shows that the largest weights are assigned to the diagonal, and areas representing real news pairs and fake news pairs are also darker. This is expected, given the self-loops and frequent interactions within groups of real news and fake news spreaders. However, the figure also shows scattered dark spots representing noisy edges between real news and fake news. We observe that **edge noise is degree-related**, in that large edge weights are often distributed along certain rows and columns. Noisy edges severely undermine the effectiveness of GNNs, as the message passing mechanism (Krizhevsky et al., 2014) propagates noise and contaminates node representations. However, little effort has been made to mitigate this issue in the fake news detection scenario. Despite preliminary efforts of graph structure learning (GSL) methods in denoising edges for real-world graphs (e.g. citation networks) (Hinton et al., 2015; He et al., 2017), existing GSL methods cannot be readily applied to fake news detection, as they generally leverage pairwise node feature similarity to guide edge weight updates. Given the large scale of social graphs, similarity-guided GSL becomes less feasible and raises critical deployment challenges. In this work, we investigate the fake news detection problem from a novel aspect of **social graph refinement**. Given a set of news articles, we construct and refine a news engagement graph that connects the articles with common readers.
Guided by our observation of degree-related edge noise, we explore veracity-related degree patterns on the news engagement graph, and make two key findings: _(1)_ nodes representing fake news and real news exhibit distinctive degree distributions; and _(2)_ grouping edges by the veracity labels of the articles they connect, different edge groups demonstrate a clear difference regarding the relationship between degrees and the number of common readers. Motivated by our empirical findings on veracity-related degree and co-engagement patterns, we present **Degree-Corrected** Social Graph Refinement (DECOR), a novel social graph refinement framework for fake news detection. DECOR is based on a flexible extension of the Degree-Corrected Stochastic Blockmodel (DCSBM) (Hinton et al., 2015), a graph generative model that allows us to simultaneously consider the effects of degree and node labels, in a tractable probabilistic manner. DECOR suppresses noise in the news engagement graph by downweighting the noisy edges, specifically via learning a social _degree correction mask_ based on a theoretically motivated likelihood ratio-based statistic under the DCSBM model, with a nonlinear relaxation to improve the flexibility of the model. DECOR utilizes the degree correction mask to adjust the edge weights of the news engagement graph, which is then jointly optimized with a GNN-based classifier to predict news veracity. In summary, our contributions are as follows: * **Empirical Findings**: We present two novel findings, on how both _degree_ and _co-engagement_ closely relate to news veracity. * **Principled DCSBM-based GSL**: Motivated by our empirical findings, we propose DECOR, a GSL approach for reducing edge noise, based on a theoretically motivated _likelihood ratio_-based statistic under the DCSBM model, combined with a nonlinear relaxation. * **Efficiency**: DECOR is lightweight, running up to 34.1 times faster than existing GSL approaches. * **Effectiveness**: DECOR improves F1 score by 4.55% and 2.51% compared to the best baseline on two real-world fake news detection benchmarks, consistently improves the performance of multiple GNN baselines in a plug-and-play manner, and outperforms baselines under label scarcity. ## 2. Related Work ### Fake News Detection Fake news detection is commonly considered as a binary classification problem, with the goal of accurately predicting a given news article as real or fake. Among existing studies, **content-based methods** extract semantic patterns from the news content using a wide range of deep learning architectures that include RNNs (Hinton et al., 2015) and pre-trained language models (PLMs) (Hinton et al., 2015; He et al., 2017). Some methods also guide model prediction with auxiliary information including knowledge bases (Hinton et al., 2015; He et al., 2017; He et al., 2018; He et al., 2019), evidence from external sources (Hinton et al., 2015; He et al., 2019), visual information (Hinton et al., 2015; He et al., 2019), and signals from the news environment (Hinton et al., 2019). As fake news detection is often deeply rooted in the social context, **propagation-based methods** incorporate various social features including user responses and opinions (Hinton et al., 2015; He et al., 2019), user-user following relations (Hinton et al., 2015), news sources (Hinton et al., 2015), and user history posts (Hinton et al., 2015) to guide model prediction.
Despite the rich social information incorporated, little effort has been made to explore direct relations between news articles and the properties of veracity-related news-news connections. Moreover, many methods are vulnerable to structural noise in social graphs, as they typically adopt fixed graph structures during training. ### Structure Learning for Robust GNNs Graph Neural Networks (GNNs) have demonstrated impressive potential in learning node and graph representations (He et al., 2017; He et al., 2017; He et al., 2019). Despite the prior success, extensive studies have demonstrated that GNNs are highly vulnerable to adversarial attacks in terms of structural noise (Hinton et al., 2015; He et al., 2019; He et al., 2019). To alleviate this issue, numerous works have focused on learning optimized structures for real-world graphs, specifically via edge denoising (Hinton et al., 2015; He et al., 2017; He et al., 2019; He et al., 2019). Motivated by the observation that noisy edges connect nodes with dissimilar features (He et al., 2019), existing methods are generally guided by feature similarity measures. For instance, (He et al., 2019) conducts edge pruning based on the Jaccard similarity between paired node features, Pro-GNN (He et al., 2019) employs the feature smoothness regularization alongside low-rank constraints, and RS-GNN (Hinton et al., 2015) utilizes node feature similarity to guide the link prediction process. Nevertheless, graph structure learning (GSL) remains underexplored under the social context of fake news detection. Existing GSL methods are not readily applicable to this task, given the high computational costs incurred in computing pairwise similarity measures between high-dimensional news article representations on large-scale social graphs. While initial efforts have been made in conditioning the edge metric with node degrees for coordination detection (Wang et al., 2017), the fixed adjustment formula adopted by existing work cannot fully capture the complex relations between degree-related properties, which may vary greatly across datasets. To the best of our knowledge, we propose the first learnable framework for social graph refinement, which leverages low-dimensional degree-related properties to flexibly adjust the edge weights of a news engagement graph for enhanced fake news detection. ## 3. Preliminary Analysis In this section, we formally define the fake news detection problem, establish a social context graph that encodes user engagements in disseminating news articles, and conduct preliminary analysis to explore the veracity-related structural patterns. ### Problem Formulation Let \(\mathcal{D}\) be a fake news detection dataset containing \(N\) samples. In the social media setting, we define the dataset as \[\mathcal{D}=\{\mathcal{P},\mathcal{U},\mathcal{R}\},\] where \(\mathcal{P}=\{p_{1},p_{2},\dots,p_{N}\}\) is a set of questionable **news articles**, \(\mathcal{U}=\{u_{1},u_{2},\dots\}\) is a set of related **social users** who have spread at least one article in \(\mathcal{P}\) via reposting on social media. \(\mathcal{R}\) represents the set of **social user engagements**, in which \(r\in\mathcal{R}\) is defined as a triple \(\{(u,p,k)|u\in\mathcal{U},p\in\mathcal{P}\}\) (i.e. user \(u\) has given \(k\) responses to the news article \(p\) in terms of _repos_). In line with most existing studies, we treat fake news detection on social media as a binary classification problem. 
Specifically, \(\mathcal{P}\) is split into training set \(\mathcal{P}_{train}\) and test set \(\mathcal{P}_{test}\). Article \(p\in\mathcal{P}_{train}\) is associated with a ground-truth label \(y\) of \(1\) if \(p\) is fake, and \(0\) otherwise. We formulate the problem as follows: Problem 1 (Fake News Detection on Social Media).: _Given a news dataset \(\mathcal{D}=\{\mathcal{P},\mathcal{U},\mathcal{R}\}\) and ground-truth training labels \(\mathcal{Y}_{train}\), the goal is to learn a classifier \(f\) that, given test articles \(\mathcal{P}_{test}\), is able to predict the corresponding veracity labels \(\mathcal{Y}_{test}\)._ ### News Engagement Graph The positive correlation between social user preferences and the user's news consumption habits has been acknowledged by prior research (Bang et al., 2017). Specifically, social media creates an _echo chamber_, where individual beliefs can be continuously reinforced by communication and repetition within like-minded social groups (Kendal et al., 2018). Motivated by this, we propose to capture the news veracity signals embedded in social user engagements. To distill a comprehensive representation of user preferences, we set a threshold to filter the users with less than 3 engagements with news articles, and focus on a subset \(\mathcal{U}_{A}\subset\mathcal{U}\) containing active users. Specifically, we construct a _user engagement matrix_\(\mathbf{E}\in\mathbb{R}^{|\mathcal{U}_{A}|\times N}\). Element \(\mathbf{E}_{ij}\) represents the number of interactions between user \(u_{i}\) and news article \(p_{j}\), the value of which is retrieved from the corresponding entry \((u_{i},p_{j},k_{ij})\in\mathcal{R}\). Given the news consumption patterns of active social users, we further propose to link the news articles that attract similar user groups via constructing an weighted undirected _news engagement graph_\(\mathcal{G}=\{\mathcal{P},\mathcal{E}\}\). The adjacency matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\) of \(\mathcal{G}\) is formulated based on overlapping user engagement patterns in \(\mathbf{E}\), specifically as: \[\mathbf{A}=\mathbf{E}^{\top}\mathbf{E}. \tag{1}\] Intuitively, element \(\mathbf{A}_{nk}\) in \(\mathbf{A}\) can be interpreted as the number of 2-hop paths (i.e., news - user - news) between two news articles \(p_{n}\) and \(p_{k}\). Hence, a larger \(\mathbf{A}_{nk}\) value represents stronger common interest between the reader groups of news article, implying shared opinions or beliefs in the users' news consumption preferences. ### Empirical Observations In this subsection, we conduct preliminary analysis on real-world news to explore the veracity-related structural properties on the news engagement graph. We observe that fake and real news exhibit distinctive patterns in terms of weighted node **degrees**, motivated by which we design a degree-based social graph refinement framework to mitigate the edge noise issue in Section 4. Our analysis is based on the FakeNewsNet (Fak et al., 2017) benchmark, which consists of the PolitiFact and GossipCop datasets. #### 3.3.1. Degree-Veracity Correlations We first explore how the degree of a news article node is related to its veracity label. In other Figure 3. News co-engagement patterns of news article pairs. Edges in \(\mathcal{G}\) represent shared readership between articles, and are grouped based on the articles’ veracity labels. Figure 2. KDE plot of node degree distributions on the news engagement graph. 
words, **do fake news articles attract more or less user engagements than real news?** Recall that we have a news engagement graph \(\mathcal{G}=\{\mathcal{P},\mathcal{E}\}\) with adjacency matrix \(\mathbf{A}\). The weighted node degrees in \(\mathbf{A}\) can be used to measure the intensity of user engagements for each news article. In Figure 2, we visualize the degree distributions of fake and real news with a kernel distribution estimation (KDE) plot, which depicts the node degrees with a continuous probability density curve. We make the following observation: Observation 1 ().: _On the news engagement graph, the degree distributions of nodes representing fake and real news articles show a clear difference. Note that different datasets can exhibit varying domain-specific patterns; for instance, in the GossipCop dataset containing celebrity news, real news tend to attract more engagements from active social users. However, this pattern does not apply to the politics-related PolitiFact dataset._ #### 3.3.2. News Co-Engagement Next, we explore the degree-related properties of news article pairs connected by common readers (i.e. active social users in \(\mathcal{U}_{A}\)). Intuitively, given a pair of news articles \(p_{i}\) and \(p_{j}\) that share at least 1 reader, the corresponding edge \(e_{ij}\in\mathcal{E}\) in the news engagement graph can be divided into three groups according to the veracity labels of \(p_{i}\) and \(p_{j}\): (1) real news pairs; (2) real-fake pairs; and (3) fake news pairs. To quantify the shared user engagements between news article nodes w.r.t. the corresponding degrees, we compute a "_co-engagement_" score \(C_{ij}\) for news articles \(p_{i}\) and \(p_{j}\), formulated as: Definition 1 (News Co-Engagement).: \[C_{ij}=|\mathcal{U}_{i}\cap\mathcal{U}_{j}|,\] _where \(\mathcal{U}_{i}\subset\mathcal{U}\) and \(\mathcal{U}_{j}\subset\mathcal{U}\) are the sets of social users that engage with \(p_{i}\) and \(p_{j}\), respectively._ We investigate the following question: **given an edge, are there any associations between its group, and the news engagement of the two nodes it connects?** In Figure 3, we bucketize the edges by the value of \(d_{i}\times d_{j}\), and plot the news co-engagement scores w.r.t. the edge groups. Note that here we adopt the product of degrees to distinguish edges with high values for both \(d_{i}\) and \(d_{j}\), and also motivated by our theoretical results in Section 4.1. Across the buckets, we observe the following pattern on news co-engagement: Observation 2 ().: _Given the degrees, fake news pairs tend to have higher \(C_{ij}\) (i.e. more common users than expected given the degrees), while real-fake pairs tend to have lower \(C_{ij}\) than both real news pairs and fake news pairs._ Our two empirical observations provide distinctive degree-related cues pertaining to nodes (i.e. news articles) and edges (i.e. user engagements) on the news engagement graph (extended analysis and discussion are relegated to Appendix A). These patterns can guide a model in suppressing the noisy edges, as they can be leveraged to identify which edges are more likely to connect news articles of the same veracity. Meanwhile, we find that differences in the degree distributions can be complex (e.g., as shown in Figure 2, fake news attract more user engagements than real news in PolitiFact, but less in GossipCop). This motivates our following degree-based innovations for a learnable social graph refinement approach. ## 4. 
Proposed Framework - Decor Motivated by our empirical findings on veracity-related degree patterns, we propose the DECOR framework for degree-corrected social graph refinement (overviewed in Figure 4). DECOR can be considered as a novel extension of the Degree-Corrected Stochastic Blockmodel (DCSBM) (Dong et al., 2019) to the fake news detection scenario, which empowers fake news detectors with effective denoising of user engagements. Given a pair of news articles connected by common users, we propose a social degree correction module to adjust the corresponding edge weight using degrees and the news co-engagement. This module is jointly optimized with the GNN classifier, which leverages the corrected edge weights and news article features to predict the news veracity labels. ### Connection with the DCSBM Model In Section 3.3, we observed that degree patterns are closely related to news veracity labels. Next, we formally demonstrate these connections from a theoretical perspective based on the DCSBM model (Dong et al., 2019), a generative model for graphs that derives edge placement likelihoods in a degree-based manner. The benefit of DCSBM is that it allows us to simultaneously model the effect of _degree patterns_ and _class labels_, which are of key interest, in a tractable probabilistic way. Based on the DCSBM model, we will then theoretically derive a principled likelihood ratio-based approach for graph structure learning for the fake news detection application. _Framework._ We first formulate the standard DCSBM under our fake news detection scenario. Recall the news engagement graph \(\mathcal{G}=\{\mathcal{P},\mathcal{E}\}\) formulated in Section 3.2, where \(|\mathcal{P}|=N\). Each news article node in \(\mathcal{G}\) is associated with a class label from the label space \(\mathcal{Z}=\{0,1\}\). Consider a pair of news article nodes \(p_{i}\in\mathcal{P}\) and \(p_{j}\in\mathcal{P}\) with co-engagement \(C_{ij}\). The nodes have class labels \(z_{i}\in\mathcal{Z}\) and \(z_{j}\in\mathcal{Z}\), respectively. Recall that \(C_{ij}\) is defined as the number of common users between \(p_{i}\) and \(p_{j}\). Figure 4. Overview of the proposed Degree-Corrected Social Graph Refinement (DECOR) framework. Next, to formulate structure learning under the DCSBM model, our basic intuition is that _same-class edges_ (i.e., edges \(e_{ij}\) where \(z_{i}=z_{j}\)) are more likely to be useful and informative than _cross-class edges_ (i.e., edges where \(z_{i}\neq z_{j}\)), and hence, structure learning should aim to give a higher weight to same-class edges. Intuitively, cross-class edges tend to indicate noisy edges, as in the example in Figure 1, where the co-engagement between them arises just by chance. Moreover, since our main goal is to classify \(p_{i}\), identifying edges where \(z_{i}=z_{j}\) clearly provides highly useful information for this task. Hence, our key idea is to perform structure learning by deriving the _same-class likelihood ratio_: **Definition 2** (Same-class likelihood ratio).: _The same-class likelihood ratio, i.e. the likelihood ratio for \(z_{i}=z_{j}\) over \(z_{i}\neq z_{j}\) when observing \(C_{ij}\) edges between \(p_{i}\) and \(p_{j}\), is_ \[LR_{ij}:=\frac{\mathbb{P}(C_{ij}|z_{i}=z_{j})}{\mathbb{P}(C_{ij}|z_{i}\neq z_{ j})}. \tag{2}\] The higher this likelihood ratio, the more evidence the data (specifically, \(C_{ij}\)) gives in favor of \(z_{i}=z_{j}\) over \(z_{i}\neq z_{j}\); and hence, structure learning should give a higher weight to such edges. 
_Derivation._ Under the DCSBM model, the \(C_{ij}\) edges between \(p_{i}\) and \(p_{j}\) are independently Poisson distributed, i.e., \(C_{ij}\sim\text{Poi}(\lambda_{ij})\), where \(\lambda_{ij}\) denotes the expected number of edges: \[\lambda_{ij}=\begin{cases}\beta_{i}\beta_{j}p&\text{if }z_{i}=z_{j}\\ \beta_{i}\beta_{j}q&\text{if }z_{i}\neq z_{j}\end{cases}, \tag{3}\] where \(\beta_{i}\) and \(\beta_{j}\) are the "degree correction parameters" that allow us to generate nodes with different degrees. \(p\) and \(q\) are parameters controlling the rate at which edges are generated under the same-class and cross-class cases, respectively. Generally, we have \(p>q\), i.e., same-class edges have a higher tendency to be generated. The corresponding maximum likelihood values \(\hat{\beta_{i}}\) and \(\hat{\beta_{j}}\) for \(\beta_{i}\) and \(\beta_{j}\) are given as \[\hat{\beta_{i}}=\frac{d_{i}}{m},\quad\hat{\beta_{j}}=\frac{d_{j}}{m}, \tag{4}\] in the DCSBM model (Kang et al., 2017), where \(m=|\mathcal{E}|\) denotes the number of edges. \(d_{i}\) and \(d_{j}\) respectively refer to the weighted degrees of nodes \(p_{i}\) and \(p_{j}\). Since \(C_{ij}\sim\text{Poi}(\lambda_{ij})\), the likelihood ratio \(LR_{ij}\) for \(z_{i}=z_{j}\) over \(z_{i}\neq z_{j}\) can be derived as: \[LR_{ij} =\frac{\mathbb{P}(C_{ij}|z_{i}=z_{j})}{\mathbb{P}(C_{ij}|z_{i}\neq z _{j})} \tag{5}\] \[=\frac{e^{-\hat{\beta_{i}}\beta_{j}p}(p_{i}\beta_{j}p)C_{ij}}{e^{ -\hat{\beta_{i}}\beta_{j}q}(\beta_{i}\beta_{j}q)C_{ij}}\] \[=e^{-\hat{\beta_{i}}\beta_{j}(p-q)}(\frac{p}{q})C_{ij}.\] Substituting the \(\hat{\beta_{i}}\) and \(\hat{\beta_{j}}\) given in Eq.4 into Eq.5, we derive the maximum likelihood estimate for \(LR_{ij}\): \[\text{MLE}(LR_{ij})=e^{-\frac{d_{i}d_{j}}{m^{2}}(p-q)}(\frac{p}{q})C_{ij}. \tag{6}\] Treating \(m,p,q\) as fixed (since they are shared by all nodes), we thus see that the MLE is a function of \(C_{ij}\), \(d_{i}\) and \(d_{j}\): in particular, it is a _log-linear_ function of \(C_{ij}\) and \(d_{i}d_{j}\): \[\text{MLE}(LR_{ij}) =\Phi(C_{ij},d_{i},d_{j}):=e^{-\frac{d_{i}d_{j}}{m^{2}}(p-q)}( \frac{p}{q})C_{ij} \tag{7}\] \[=\exp\left[\begin{pmatrix}C_{ij}\\ d_{i}d_{j}\end{pmatrix}\cdot\begin{pmatrix}\log(p)-\log(q)\\ -\frac{p-q}{m^{2}}\end{pmatrix}\right] \tag{8}\] _Implications._ We first note that Eq. 8 agrees with our empirical finding in Observation 2: if we fix \(d_{i}d_{j}\) in Eq. 8, then as long as \(\log(p)-\log(q)>0\), we observe that higher \(C_{ij}\) is associated with a higher \(LR_{ij}\), and thus a higher probability of same-class edges (\(z_{i}=z_{j}\)), agreeing with Figure 3 where the Real-Fake edges have lowest \(C_{ij}\) for a given \(d_{i}d_{j}\). For structure learning purposes, we could simply use \(\Phi(C_{ij},d_{i},d_{j})\), which we recall is an estimator for \(LR_{ij}=\frac{\mathbb{P}(C_{ij}|z_{i}=z_{j})}{\mathbb{P}(C_{ij}|z_{i}\neq z_{j})}\). However, the standard DCSBM model is built upon relatively strong assumptions (e.g. pre-defined \(p\) and \(q\) values); for fitting real data, we would like to relax these assumptions and allow the model to be flexibly learned from data. The DCSBM model contains very few learnable parameters, which is a fundamental limitation in adapting to the complex degree-based patterns in the news engagement graph. 
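To make the statistic concrete, the short sketch below evaluates the maximum likelihood estimate of Eq. 6 for a few hypothetical edges; the values of \(p\), \(q\), \(m\), the degrees, and the co-engagement counts are purely illustrative and not fitted to any dataset.

```python
import math

def same_class_likelihood_ratio(c_ij: float, d_i: float, d_j: float,
                                m: float, p: float, q: float) -> float:
    """MLE of the same-class likelihood ratio under the DCSBM (Eq. 6):
    LR = exp(-(d_i * d_j / m**2) * (p - q)) * (p / q) ** c_ij
    """
    return math.exp(-(d_i * d_j / m ** 2) * (p - q)) * (p / q) ** c_ij

# Illustrative parameters: same-class edges are generated at rate p,
# cross-class edges at rate q < p.
m, p, q = 10_000, 2.0, 0.5

# Two edges with the same degree product d_i * d_j but different co-engagement.
print(same_class_likelihood_ratio(c_ij=1, d_i=50, d_j=40, m=m, p=p, q=q))
print(same_class_likelihood_ratio(c_ij=8, d_i=50, d_j=40, m=m, p=p, q=q))
# The second edge has a much larger ratio: given the degrees, its higher
# co-engagement is stronger evidence that the two articles share a veracity label.
```

Holding \(d_{i}d_{j}\) fixed, a larger \(C_{ij}\) yields a larger likelihood ratio, i.e., more evidence for a same-class edge, consistent with Observation 2.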
These limitations motivate us to develop DECOR, a degree-based learnable social graph refinement framework, which we will next describe in detail, by _relaxing the assumption of log-linearity_: that is, instead of treating \(\Phi(C_{ij},d_{i},d_{j})\) as a fixed and log-linear function defined in Eq. 8, we treat it as a _learnable non-linear function_ \(\tilde{\Phi}(C_{ij},d_{i},d_{j})\) to be updated jointly with the rest of the model during the structure learning process. ### Social Degree Correction As illustrated in Figure 1, the news engagement graph contains structural noise. In light of our empirical findings on degree-veracity relationships and the DCSBM framework, we propose to learn a degree-corrected social graph that downweights the noisy edges to eliminate their negative impacts and facilitate fake news detection via GNN-based classifiers. Recall that the type of an edge in the news engagement graph (i.e. connecting news articles of the same or different veracity) is characterized by the co-engagement and degrees of the connected articles. Motivated by the DCSBM model's degree-based probabilistic derivation of edge placement likelihood, we propose to adjust edge weights in the news engagement graph via learning a _social degree correction mask_ \(\mathbf{M}\in\mathbb{R}^{N\times N}\), where \(\mathbf{M}_{ij}\) in the interval \((0,1)\) represents the degree correction score for edge \(e_{ij}\) between news article nodes \(p_{i}\) and \(p_{j}\). The value of \(\mathbf{M}_{ij}\) is predicted given the co-engagement \(C_{ij}\) of articles \(p_{i}\) and \(p_{j}\), and the articles' weighted node degrees \(d_{i}\) and \(d_{j}\) from the news engagement graph. Specifically, we adopt a neural predictor to obtain \(\mathbf{s}_{ij}\in\mathbb{R}^{2}\), which contains two scores for edge preservation and elimination, respectively: \[\mathbf{s}_{ij}=\tilde{\Phi}(C_{ij},d_{i},d_{j}). \tag{9}\] \(\tilde{\Phi}(\cdot)\) is an MLP-based architecture, and can be considered a learnable extension of Eq. 8 in the DCSBM model. The scores in \(\mathbf{s}_{ij}\) are normalized via the softmax function. To preserve computational efficiency, we design the social degree correction process as _pruning_. In other words, we apply Eq. 9 to all the news pairs connected by common users to obtain the corresponding degree correction scores: \[\mathbf{M}_{ij}=\begin{cases}u_{ij}&\text{if }C_{ij}\neq 0\\ 0&\text{else}\end{cases}, \tag{10}\] where \(u_{ij}\) denotes the softmax-normalized score in \(\mathbf{s}_{ij}\) that correlates with edge preservation. Given the co-engagement matrix \(\mathbf{C}\) of the news engagement graph \(\mathcal{G}\), we utilize \(\mathbf{M}\) to obtain a degree-corrected adjacency matrix \(\mathbf{A}_{c}\): \[\hat{\mathbf{A}}=\mathbf{C}\odot\mathbf{M}+\mathbf{I} \tag{11}\] \[\mathbf{A}_{c}=\mathbf{D}^{-\frac{1}{2}}\hat{\mathbf{A}}\mathbf{D}^{-\frac{1}{2}}, \tag{12}\] where \(\odot\) denotes element-wise multiplication, \(\mathbf{I}\) represents an identity matrix of size \(N\), and \(\mathbf{D}\) is the diagonal matrix of degrees of \(\hat{\mathbf{A}}\). Through the above operations, noisy edges in the news engagement graph are assigned smaller weights, as \(\tilde{\Phi}(\cdot)\) in Eq. 9 leverages degree-based properties to predict a low degree correction score. ### Prediction on Degree-Corrected Graph With the degree-corrected adjacency matrix \(\mathbf{A}_{c}\), we can leverage the powerful GNN architectures (e.g.
GCN (Garfathi et al., 2017), GIN (Yang et al., 2018) and GraphConv (Golovolov et al., 2017)) to predict the veracity labels of article nodes in the degree-corrected news engagement graph. Central to GNNs is the message-passing mechanism (Kipf and Welling, 2017), which follows an iterative scheme of updating node representations based on information aggregation among the node's neighborhood. For a news article \(p\in\mathcal{P}\), the initial news article feature \(\mathbf{h}_{p}^{(0)}\) is set as the news content representation \(\mathbf{x}_{p}\): \[\mathbf{h}_{p}^{(0)}=\mathbf{x}_{p}, \tag{13}\] where \(\mathbf{x}_{p}\) is extracted from news article \(p\) via a pre-trained language model \(\mathcal{M}\) with frozen parameters. At the \(k\)-th layer of a GNN, the news article representation \(\mathbf{h}_{p}^{(k)}\) is obtained via: \[\mathbf{m}_{p}^{(k)}=\textsc{AGGREGATE}^{(k)}\left(\left\{\mathbf{h}_{u}^{(k- 1)},\forall u\in\mathcal{N}(p)\right\}\right) \tag{14}\] \[\mathbf{h}_{p}^{(k)}=\textsc{COMBINE}^{(k)}\left(\mathbf{h}_{p}^{(k-1)}, \mathbf{m}_{p}^{(k)}\right), \tag{15}\] where \(\mathcal{N}(p)\) denotes the neighbors of \(p\) on the news engagement graph, and \(\mathbf{m}_{p}^{(k)}\) is the aggregated information from \(\mathcal{N}(p)\). Let \(\mathbf{h}_{p}\in\mathbb{R}^{2}\) be the output of the GNN-based classifier for node \(p\). Then, the news veracity label of \(p\) is predicted as \(\tilde{\mathbf{y}}_{p}=\text{softmax}(\mathbf{h}_{p})\). During training, we minimize the following cross entropy loss: \[\mathcal{L}=\sum_{p\in\mathcal{P}_{train}}\textsc{CELoss}\left(\tilde{ \mathbf{y}}_{p},\mathbf{y}_{p}\right). \tag{16}\] The degree correction mask predictor \(\tilde{\Phi}(\cdot)\) is jointly optimized with the GNN-based classifier. DECOR utilizes low-dimensional degree-related properties to guide the social degree correction operations, which facilitates edge denoising on \(\mathcal{G}\) without loss of computational efficiency. ## 5. Experiments In this section, we empirically evaluate DECOR to answer the following five research questions: * **Fake News Detection Performance** (Section 5.2): How well does DECOR perform compared with competitive baselines? * **Ablation Study** (Section 5.3): How effective are co-engagement and degree patterns, respectively, in improving the fake news detection performance of DECOR? * **Limited Training Data** (Section 5.4): Does DECOR perform well under label sparsity? * **Computational Efficiency** (Section 5.5): How efficient is DECOR compared with existing GSL methods? * **Case Study** (Section 5.6): Does DECOR downweight the noisy edges connecting influential real and fake news articles? ### Experimental Setup #### 5.1.1. Datasets We evaluate DECOR on the public benchmark FakeNewsNet (Yang et al., 2018), which consists of two real-world datasets: PolitiFact and GossipCop. Both datasets contain news articles annotated by leading fact-checking websites and the articles' related social user engagements from Twitter. The descriptive statistics of the datasets are summarized in Table 1. To simulate the real-world scenarios, we split the news samples following a _temporal_ order. Specifically, the most recent 20% real and fake news instances constitute the test set, and the remaining 80% instances posted earlier serve as the training set. #### 5.1.2. 
Baselines We benchmark DECOR against twelve representative baseline methods, which can be categorized into the following three groups by model architecture: **News content based methods (G1)** leverage the semantic features in the news articles. Specifically, **dEFEND**e is a content-based variant of dEFEND (Yang et al., 2018) without incorporating user comment texts, which utilizes a hierarchical network with the co-attention mechanism. **SAFE**w is a content-based variant of SAFE (Zhu et al., 2019) without incorporating visual information from images, which leverages a CNN-based fake news detector. **SentGCN**(Yang et al., 2018) models each news article as a graph of sentences, and utilizes the GCN (Kipf and Welling, 2017) architecture for news veracity prediction. **BERT**(Devlin et al., 2019) and **DistilBERT**(Sanh et al., 2019) (with model names BERT-base and DistilBERT-base, respectively) are large pre-trained bidirectional Transformers, which we fine-tune to the downstream task of fake news detection. **Social graph based methods (G2)** encode the social context into graph structures, and leverage GNNs to learn news article representations. Specifically, **GCNFN**(Yang et al., 2018) leverages user responses and user following relations to construct a propagation tree for each news article. \begin{table} \begin{tabular}{l c c} \hline \hline **Dataset** & **PolitiFact** & **GossipCop** \\ \hline \# News Articles & 497 & 16,599 \\ \# Real News & 225 & 12,641 \\ \# Fake News & 272 & 3,958 \\ \# User-News Engagements & 227,184 & 963,009 \\ \# Distinct Users & 143,481 & 202,907 \\ \hline \hline \end{tabular} \end{table} Table 1. Dataset statistics. **FANG**(Wang et al., 2017) establishes a comprehensive social graph with users, news and sources, and learns the representations with GraphSAGE (Hamilton et al., 2017). We also apply three representative GNN architectures on our proposed news engagement graph, namely **GCN**(Kipf and Welling, 2017), **GIN**(Xu et al., 2019), and **GraphConv**(Morris et al., 2019). For a fair comparison, we only implement the model components involving news articles, social user identities, and user-news relations. **Graph Structure Learning (GSL) methods (G3)** aim to enhance representation learning via learning an optimized graph structure. We implement two GSL methods that focus on edge denoising. **Pro-GNN**(Kang et al., 2017) applies low-rank and sparsity properties to learn a clean graph structure that is similar to the original graph. **RS-GNN**(Kang et al., 2017) simultaneously learns a denoised graph and a robust GNN via constructing a link predictor guided by node feature similarity. #### 5.1.3. Evaluation Metrics Following prior works (Wang et al., 2017; Wang et al., 2017), we adopt four widely-used metrics to evaluate the performance of fake news detection methods: Accuracy (**Acc.**), Precision (**Prec.**), Recall (**Rec.**) and F1 Score (**F1**). In all experiments, we report the average metrics across 20 different runs of each method. #### 5.1.4. Implementation Details We implement our proposed DECOR model and its variants based on PyTorch 1.10.0 with CUDA 11.1, and train them on a server running Ubuntu 18.04 with an NVIDIA RTX 3090 GPU and an Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz. To construct DECOR's news engagement graph, we select active social users with at least 3 reposts, and threshold a user's maximum number of interactions with each news article at 1% of the total number of news articles.
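The sketch below illustrates how such a co-engagement matrix could be assembled from raw user-news engagement records under the filtering rules just described. It is an illustrative approximation only: the function name, the weighted (rather than binary) definition of co-engagement, and the exact capping rule are assumptions, not the authors' preprocessing code.

```python
import numpy as np

def build_co_engagement(engagements, num_articles, min_reposts=3, cap_frac=0.01):
    """Illustrative construction of a co-engagement matrix C (not the authors' code).

    `engagements` is an iterable of (user_id, article_id) pairs. Users with fewer
    than `min_reposts` engagements are dropped, each user's per-article interaction
    count is capped at `cap_frac * num_articles`, and C_ij aggregates the (capped)
    engagements of users who interacted with both articles i and j.
    """
    by_user = {}
    for user, article in engagements:
        by_user.setdefault(user, []).append(article)

    cap = max(1, int(cap_frac * num_articles))
    B = np.zeros((len(by_user), num_articles))            # user-article engagement counts
    for row, (user, articles) in enumerate(by_user.items()):
        if len(articles) < min_reposts:
            continue                                      # skip inactive users
        for a in articles:
            B[row, a] = min(B[row, a] + 1, cap)           # cap per-article interactions

    C = B.T @ B                                           # co-engagement via shared readers
    np.fill_diagonal(C, 0)                                # no self-loops; Eq. 11 adds identity later
    return C
```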
We extract 768-dimensional news article features via a pre-trained BERT model with frozen parameters; specifically, we utilize pre-trained weights from HuggingFace Transformers 4.13.0 (model name: bert-base-uncased). The predictor \(\tilde{\Phi}(\cdot)\) for social degree correction is a 2-layer MLP with hidden size 16 for PolitiFact and 8 for GossipCop. The GNN architecture is set to 2 layers with 64 hidden dimensions. The model is trained for 800 epochs, and model parameters are updated via the Adam optimizer (Kingma and Ba, 2014) with learning rate 0.0005. Technically, our framework is model-agnostic and can be combined with various GNN models on the news engagement graph. Here, we select three representative GNN architectures as backbones: GCN (Kipf and Welling, 2017), GIN (Xu et al., 2019) and GraphConv (Morris et al., 2019). [...] and avoids the potential task-irrelevant signals from user profiles and related tweets. _(4)_ Existing GSL methods for edge denoising (G3) are not suited to fake news detection. One possible reason is that these methods are similarity-guided, i.e., links between nodes of dissimilar features are strongly suppressed. However, in our fake news detection scenario, two news articles on different topics can be closely connected in terms of co-engagement and veracity type. _(5)_ Compared with competitive fake news detectors, DECOR substantially enhances the performance of three representative GNN backbones. This validates the effectiveness of using degrees and co-engagement to learn a refined news engagement graph. ### Ablation Study We conduct an ablation study to assess the contribution of DECOR's major components in detecting fake news, and summarize the results in Figure 5. We compare DECOR with two variants, namely **DECOR-COE** without co-engagement, and **DECOR-Deg** without degrees (definitions given in Section 3.3). As shown in Figure 5, comparing DECOR with either DECOR-COE or DECOR-Deg, the superior fake news detection performance of DECOR illustrates that both co-engagement and degrees play a significant role in achieving the final improvements. Note that in numerous cases, DECOR-COE, which is guided by degrees, outperforms the corresponding GNN backbones that utilize the raw news engagement graph, which is consistent with our first empirical finding (Observation 1) on the distinctive connections between node degrees and news veracity. This further highlights the effectiveness of incorporating degree-related properties for social graph refinement. ### Performance under Label Scarcity Label scarcity poses a pressing challenge for real-world applications of fake news detection. Due to the timely nature of news articles, high-quality annotations are usually scarce. We evaluate the performance of DECOR under limited training samples, and summarize the results in Figure 6. We observe that DECOR consistently outperforms the competitive GNN baselines on the news engagement graph for all training sizes: 20%, 40%, 60% and 80% of the data. DECOR learns an optimized graph by explicitly leveraging the structural signals embedded in degrees and news co-engagement, which serve as informative news veracity indicators and thereby complement the limited ground-truth knowledge from fact-checked annotations. ### Computational Efficiency We evaluate the computational cost of DECOR in terms of parameter count and model runtime. Specifically, we train all models on the same GPU device for 800 epochs, and compare the time elapsed.
Note that both Pro-GNN and RS-GNN adopt the same 2-layer GCN architecture as the "GCN" method reported in Table 3. Results in Table 3 validate that DECOR is able to achieve impressive performance gains while maintaining low computational cost. Compared with existing GSL methods, three innovations account for DECOR's efficiency in fake news detection: _(1)_ DECOR leverages low-dimensional features (i.e. degrees and co-engagement) to predict an adjustment score for each edge, whereas existing GSL methods rely on high-dimensional node features, i.e. the news article representations. _(2)_ DECOR utilizes a lightweight degree correction component, which facilitates joint optimization of the social degree correction module and the GNN detector. In contrast, existing GSL methods adopt alternating optimization of the GNN and the link predictor, resulting in slower model training. _(3)_ DECOR operates as pruning on the existing edges in the news engagement graph, whereas existing GSL methods conduct pairwise computations (e.g. feature similarity) among all nodes. Hence, the complexity of DECOR is linear in the number of edges, whereas existing GSL methods incur up to quadratic complexity. These results suggest that DECOR is suitable for deployment in resource-limited scenarios, e.g., online fact-checking services. Figure 5. Ablation study of DECOR. Figure 6. Comparison of DECOR against baselines (F1 Score) under varying training data sizes. \begin{table} \begin{tabular}{l c c c} \hline \hline Method & \# Params & Runtime (s) & Avg. F1 \\ \hline GCN & 49,280 & 2.47 & 0.8964 \\ GIN & 49,347 & 3.19 & 0.9024 \\ GraphConv & 98,627 & 3.72 & 0.8989 \\ \hline Pro-GNN & 296,289 & 158.03 & 0.7691 \\ RS-GNN & 102,722 & 39.51 & 0.7210 \\ \hline GCN w/ DECOR & 49,410 & 4.63 & 0.9479 \\ GIN w/ DECOR & 49,477 & 5.09 & 0.9398 \\ GraphConv w/ DECOR & 98,757 & 5.17 & 0.9377 \\ \hline \hline \end{tabular} \end{table} Table 3. Model efficiency comparison on the PolitiFact dataset. ### Case Study To further illustrate why DECOR outperforms existing social graph based models and GSL methods, we conduct a case study demonstrating DECOR's capability of downweighting the noisy edges between fake and real news articles. In Figure 7, we visualize exemplar cases in the neighborhood of \(p\), an influential fake news article published by a hoax news site. From the subgraph on the left hand side, we observe that \(p\) is involved in two types of edges: _(1)_ Noisy edges with large edge weights. \(p\) is closely connected with three influential real news pieces. As these articles all focus on trending political topics, they attract a large number of common readers. _(2)_ Clean edges with small edge weights. \(p\) is also connected with several fake news pieces; however, these articles attract fewer social users, which results in small groups of common readers with \(p\). These structural patterns are problematic, as propagating information along noisy edges can contaminate the neighborhood, leading to suboptimal article representations. Existing social graph based models generally assume a fixed graph structure and are thereby heavily limited in suppressing edge noise. Prior works on similarity-guided edge denoising also cannot address this issue, as the articles cover similar topics but differ in veracity. In contrast, DECOR leverages the structural degree-based properties in a flexible manner. This facilitates the elimination of degree-related edge noise.
From the subgraph on the right hand side of Figure 7, we find that DECOR effectively suppresses the noisy edges, and recognizes the clean edges by assigning them larger weights. These cases provide strong empirical evidence that DECOR effectively refines the news engagement graph for enhanced fake news detection. ## 6. Conclusion and Future Work In this paper, we investigate the fake news detection problem from the novel perspective of social graph refinement. We observe that edge noise in the news engagement graph is degree-related, and find that news veracity labels closely correlate with two structural properties: degrees and news co-engagement. Motivated by the DCSBM model's degree-based probabilistic framework for edge placement, we develop DECOR, a degree-based learnable social graph refinement framework. DECOR facilitates effective suppression of noisy edges through a learnable social degree correction mask, which predicts an adjustment score for each edge based on the aforementioned degree-related properties. Experiments on two real-world benchmarks demonstrate that DECOR can be easily plugged into various powerful GNN backbones as an enhancement. Furthermore, DECOR's structural corrections are guided by low-dimensional degree-related features, allowing for computationally efficient applications. We believe our empirical and theoretical findings will provide insights for future research in designing and refining more complex multi-relational social graphs for fake news detection. ## 7. Acknowledgements This work was supported by the NUS-NCS Joint Laboratory (A-0008542-00-00). The authors would like to thank the anonymous reviewers for their valuable feedback.
2306.17622
Tidal dissipation due to the elliptical instability and turbulent viscosity in convection zones in rotating giant planets and stars
Tidal dissipation in star-planet systems can occur through various mechanisms, among which is the elliptical instability. This acts on elliptically deformed equilibrium tidal flows in rotating fluid planets and stars, and excites inertial waves in convective regions if the dimensionless tidal amplitude ($\epsilon$) is sufficiently large. We study its interaction with turbulent convection, and attempt to constrain the contributions of both elliptical instability and convection to tidal dissipation. For this, we perform an extensive suite of Cartesian hydrodynamical simulations of rotating Rayleigh-B\'{e}nard convection in a small patch of a planet. We find that tidal dissipation resulting from the elliptical instability, when it operates, is consistent with $\epsilon^3$, as in prior simulations without convection. Convective motions also act as an effective viscosity on large-scale tidal flows, resulting in continuous tidal dissipation (scaling as $\epsilon^2$). We derive scaling laws for the effective viscosity using (rotating) mixing-length theory, and find that they predict the turbulent quantities found in our simulations very well. In addition, we examine the reduction of the effective viscosity for fast tides, which we observe to scale with tidal frequency ($\omega$) as $\omega^{-2}$. We evaluate our scaling laws using interior models of Hot Jupiters computed with MESA. We conclude that rotation reduces convective length scales, velocities and effective viscosities (though not in the fast tides regime). We estimate that elliptical instability is efficient for the shortest-period Hot Jupiters, and that effective viscosity of turbulent convection is negligible in giant planets compared with inertial waves.
Nils B. de Vries, Adrian J. Barker, Rainer Hollerbach
2023-06-30T12:52:01Z
http://arxiv.org/abs/2306.17622v1
Tidal dissipation due to the elliptical instability and turbulent viscosity in convection zones in rotating giant planets and stars ###### Abstract Tidal dissipation in star-planet systems can occur through various mechanisms, among which is the elliptical instability. This acts on elliptically deformed equilibrium tidal flows in rotating fluid planets and stars, and excites inertial waves in convective regions if the dimensionless tidal amplitude (\(\epsilon\)) is sufficiently large. We study its interaction with turbulent convection, and attempt to constrain the contributions of both elliptical instability and convection to tidal dissipation. For this, we perform an extensive suite of Cartesian hydrodynamical simulations of rotating Rayleigh-Benard convection in a small patch of a planet. We find that tidal dissipation resulting from the elliptical instability, when it operates, is consistent with \(\epsilon^{3}\), as in prior simulations without convection. Convective motions also act as an effective viscosity on large-scale tidal flows, resulting in continuous tidal dissipation (scaling as \(\epsilon^{2}\)). We derive scaling laws for the effective viscosity using (rotating) mixing-length theory, and find that they predict the turbulent quantities found in our simulations very well. In addition, we examine the reduction of the effective viscosity for fast tides, which we observe to scale with tidal frequency (\(\omega\)) as \(\omega^{-2}\). We evaluate our scaling laws using interior models of Hot Jupiters computed with MESA. We conclude that rotation reduces convective length scales, velocities and effective viscosities (though not in the fast tides regime). We estimate that elliptical instability is efficient for the shortest-period Hot Jupiters, and that effective viscosity of turbulent convection is negligible in giant planets compared with inertial waves. keywords: Hydrodynamics - planet-star interactions - instabilities - convection - planets and satellites: gaseous planets ## 1 Introduction Tidal deformations and the corresponding dissipation of tidal flows lead to transfers of angular momentum and energy from one body to its companion. This can result in many long-term effects in exoplanetary and close binary systems, such as tidal circularisation of orbits (e.g. Nine et al., 2020), spin-orbit synchronisation (e.g. Dobbs-Dixon et al., 2004; Lurie et al., 2017) and tidal heating (potentially leading to radius inflation, e.g. Bodenheimer et al., 2001). Perhaps the most extreme outcome is orbital decay and inspiral of a short-period exoplanet, which has potentially been observed for WASP-12b (e.g. Maciejewski et al., 2016; Patra et al., 2020; Turner et al., 2021). Indeed, considerable study has gone into understanding the effects of tides in stars and planets, a review of which can be found in Ogilvie (2014). Tidal effects are thought to be especially strong in Hot Jupiters and other short-period exoplanets due to their close proximities to their stars. The tidal response in a star or planet is usually split up into an equilibrium or non-wave-like tide, and a dynamical or wave-like tide (e.g. Zahn, 1977; Ogilvie, 2012). The equilibrium tide is the quasi-hydrostatic fluid bulge rotating around the body (e.g. Zahn, 1977), while the dynamical tide consists of waves generated by resonant tidal forcing (such as inertial waves in convection zones or internal gravity - or gravito-inertial - waves in radiation zones). 
The equilibrium tide is thought to be dissipated through its interaction with turbulence, usually of a convective nature (Zahn, 1966; Goldreich and Nicholson, 1977; Zahn, 1989; Goodman and Oh, 1997; Penev et al., 2007, 2009, 2010; Ogilvie and Lesur, 2012; Vidal and Barker, 2020, 2020; Duguid et al., 2019, 2020), or by instabilities of the equilibrium tide itself, which could involve the excitation of waves (e.g. Cebron et al., 2010, 2012; Cebron et al., 2013; Barker and Lithwick, 2013; Barker et al., 2016; Barker, 2016). In this paper we primarily focus on the equilibrium tide and study tidal dissipation due to both the elliptical instability of this flow in convective regions of stars and planets (e.g. Waleffe, 1990; Kerswell, 2002) and the interaction of the equilibrium flow with the turbulent convection itself. The net effect of the equilibrium tide is to deform the body into an ellipsoidal shape (more correctly: prolate spheroidal in the absence of a rotational bulge) that approximately follows the companion. Recently, such a tidal deformation was observed directly for the first time in the Hot Jupiter WASP-103b using the transit method (Barros et al., 2022). The elliptical deformation of body 1 due to a second body is represented by the ellipticity, or (dimensionless) tidal amplitude parameter: \[\epsilon=\left(\frac{m_{2}}{m_{1}}\right)\left(\frac{R_{1}}{a}\right)^{3}, \tag{1}\] where \(m_{1}\) and \(m_{2}\) are the masses of bodies 1 and 2, i.e. the planet and host star, respectively, \(R_{1}\) is the radius of body 1, and \(a\) is the orbital separation (semi-major axis). This is essentially a measure of the maximum dimensionless radial displacement in the equilibrium tide. The largest estimated elliptical deformation is \(\epsilon\approx 0.06\) for WASP-19b (with its 0.78 day orbit, e.g. Hebb et al., 2010), and it can be similarly large with values \(\epsilon\gtrsim 0.01\) for other Hot Jupiters with short orbital periods (or in the very closest binary stars). This elliptical deformation of the streamlines allows the elliptical instability to operate (Waleffe, 1990; Kerswell, 2002). The deformation, no matter how small, can potentially excite pairs of inertial waves inside the planet. These waves couple with the deformation (Waleffe, 1990), leading to exponential growth of their amplitudes. This mechanism is in essence a triadic (three-wave) resonance interaction. To excite these inertial waves in planets, energy must be extracted from the tidal flow. Thus, rotational or orbital energy is transferred into these waves and, when they dissipate, this energy is converted into heat. In this way, the instability results in tidal dissipation. However, if the waves are viscously damped - by either the (tiny) molecular viscosity of the fluid or by a turbulent viscosity - before they can grow, the instability cannot operate. Larger deformations \(\epsilon\) result in faster growth of the waves, meaning that they can overcome larger viscosities. An easily deformable, close-in planet is therefore favoured for occurrence of this instability, which is why we consider it as a potential tidal mechanism for Hot Jupiters. Specifically, it is thought that the elliptical instability could be one of the processes responsible for the circularisation of planets with very short orbital periods (up to 3 days), and for tidal locking, i.e. tidal spin-orbit synchronisation, of planets with orbital periods up to 15 days (Barker & Lithwick, 2013a; Barker, 2016).
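To give a sense of the magnitudes involved, the following sketch evaluates Eq. 1 for round-figure Hot Jupiter parameters (a Jupiter-mass, Jupiter-radius planet around a Sun-like star), using Kepler's third law to relate the orbital period to the semi-major axis. The numerical values are illustrative assumptions, not quantities taken from this paper.

```python
import numpy as np

G = 6.674e-11                                        # gravitational constant [m^3 kg^-1 s^-2]
M_sun, M_jup, R_jup = 1.989e30, 1.898e27, 7.149e7    # [kg], [kg], [m]

def ellipticity(m_planet, m_star, R_planet, P_orb_days):
    """Tidal amplitude epsilon = (m2/m1) (R1/a)^3 (Eq. 1), with a from Kepler's third law."""
    P = P_orb_days * 86400.0
    a = (G * (m_star + m_planet) * P**2 / (4 * np.pi**2)) ** (1.0 / 3.0)
    return (m_star / m_planet) * (R_planet / a) ** 3

# Illustrative Jupiter-like planet around a Sun-like star; epsilon falls off as P_orb^(-2):
for P in (0.8, 1.0, 3.0, 10.0):
    print(f"P_orb = {P:4.1f} d  ->  epsilon ~ {ellipticity(M_jup, M_sun, R_jup, P):.4f}")
```

For a 0.8-day orbit this gives epsilon of a few times 0.01, consistent with the statement above that the shortest-period (and most inflated) Hot Jupiters reach epsilon of order 0.01-0.06.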
We show the eccentricity distribution of these planets as a function of their orbital period from observations in Fig. 1. Nearly all Hot Jupiters with periods \(P_{\rm orb}<3\) days have eccentricities \(e\approx 0\), and those with \(P_{\rm orb}<10\) days have a strong preference for circular orbits or small \(e\) values, whereas those with \(P_{\rm orb}>10\) days have a wide range of eccentricities. This distribution is thought to result from tidal dissipation inside these planets, but based on prior theoretical results it does not appear to be explained by the elliptical instability in isolation. We thus appear to require a more efficient mechanism of tidal dissipation in Hot Jupiters. To parameterise the rate of tidal dissipation we often use the (modified) tidal quality factor \(Q^{\prime}\), first defined when considering tidal evolution in the solar system (Goldreich & Gold, 1963). \(Q^{\prime}\) is a measure of the total energy stored in the tide (\(E_{0}\)) divided by the energy dissipated in one tidal period, i.e., \[Q^{\prime}=\frac{3}{2k_{2}}\frac{2\pi E_{0}}{\int|\dot{E}|\,dt}. \tag{2}\] Here, \(\dot{E}\) is the rate at which energy is dissipated and \(k_{2}\) is the Love number, which is related to the density distribution (being smaller for more centrally-condensed bodies, with \(k_{2}=3/2\) for a homogeneous fluid body). A higher value of \(Q^{\prime}\) corresponds to lower tidal dissipation and vice versa. Thus lower values of \(Q^{\prime}\) correspond to shorter tidal evolutionary timescales. However, the actual tidal dissipation timescales depend on both the process in question and the periods and masses of the planet and companion. The factor \(Q^{\prime}\) is not a constant parameter, and will depend on tidal frequency and amplitude as well as the internal structure and rotation of the body. However, it is thought to take values of approximately \(10^{1}-10^{2}\) for rocky planets (Goldreich & Soter, 1966), approximately \(10^{4}-10^{5}\) for Jupiter (Lainey et al., 2009) and Saturn (Lainey et al., 2012; Lainey et al., 2017), and approximately \(10^{6}\) or smaller for Hot Jupiters (e.g. Ogilvie, 2014). The effect of the elliptical instability on tidal dissipation has been studied previously in simulations using a local Cartesian box model located within the convection zone of a planet or star, both with (Barker & Lithwick, 2013b) and without (Barker & Lithwick, 2013a) weak magnetic fields. The latter study found that the elliptical instability leads to bursty behaviour, where the inertial waves generated by the instability interact with geostrophic columnar vortical flows produced by their nonlinear interactions. Similar behaviour features in global hydrodynamical simulations of the elliptical instability (Barker, 2016), where zonal flows take the place of columnar vortices in the resulting dynamics. Such dynamics might be referred to as "predator-prey" dynamics, where columnar vortices or zonal flows can be thought of as the predators and the inertial waves as the prey. In this analogy the columnar vortices feed off the inertial waves, and as the energy in these vortices increases inertial waves become suppressed. Once the energy in the inertial waves decreases, the vortices also consequently decay until inertial waves can grow again, and the cycle starts anew.
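The predator-prey analogy can be made concrete with a two-variable toy model. The sketch below is purely illustrative: it is a Lotka-Volterra-type system with arbitrary coefficients chosen for demonstration, not the actual dynamics of the simulations discussed here, but it reproduces the qualitative burst cycle described above.

```python
import numpy as np

def predator_prey_toy(steps=20000, dt=1e-3, growth=1.0, feed=2.0, decay=0.5):
    """Toy 'vortex (predator) vs inertial-wave (prey)' model with arbitrary coefficients.

    w: wave energy, grows at a fixed rate (mimicking the elliptical instability) but is
    drained by the vortex; v: vortex energy, fed by the waves and decaying viscously.
    """
    w, v = 1e-6, 1e-6
    history = np.empty((steps, 2))
    for i in range(steps):
        dw = growth * w - feed * w * v          # instability growth minus transfer to vortex
        dv = feed * w * v - decay * v           # gain from the waves minus viscous decay
        w, v = max(w + dt * dw, 0.0), max(v + dt * dv, 0.0)
        history[i] = w, v
    return history

hist = predator_prey_toy()
print(hist.max(axis=0))   # both variables undergo recurring bursts rather than settling to a steady state
```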
Upon taking magnetic fields into account in the local model, the behaviour changed from bursts to sustained energy input into the flow, as magnetic fields break up or prevent formation of strong vortices (Barker & Lithwick, 2013b). Similar sustained behaviour is observed if vortices are damped by an artificial frictional force mimicking Ekman friction due to rigid (no-slip) boundaries (e.g. Le Reun et al., 2017). These prior studies set out to analyse the elliptical instability in the convective regions of planetary (or stellar) interiors, but did not incorporate convection explicitly (except perhaps by motivating a choice of viscosity). The interaction of the elliptical instability with convection has been studied within linear theory (e.g. Kerswell, 2002; Le Bars & Le Dizes, 2006), experimentally in cylindrical containers (e.g. Lavorel & Le Bars, 2010) and using idealised laminar global simulations in a triaxial ellipsoid (e.g. Cebron et al., 2010). However, these studies mainly focused on heat transport instead of tidal dissipation, which is our focus in this work. Figure 1: Eccentricity distribution of exoplanets with \(P_{\rm orb}<100\) days and masses \(M>0.3M_{J}\). Those with \(P_{\rm orb}<10\) days are referred to as “Hot Jupiters”. Exoplanets with periods \(P_{\rm orb}<3\) days have eccentricities \(e<0.2\), but most of these planets have \(e\approx 0\), whereas those with \(P_{\rm orb}>10\) days exhibit a wide range of eccentricities. Figure produced from exoplanets. Due to the introduction of convection another mechanism of tidal dissipation arises in the system in addition to the elliptical instability. If convection is sufficiently turbulent, it is expected that it will damp the tidal flow, which can be parameterised as an effective viscosity \(\nu_{\rm eff}\gg\nu\) (where \(\nu\) is the tiny molecular viscosity). The efficiency of this effective viscosity as a tidal dissipation mechanism has long been a subject of debate, particularly in the fast tides regime when the tidal frequency \(\omega\) exceeds the dominant convective frequency \(\omega_{c}\). In this case, the effective viscosity is expected to be reduced, but its scaling behaviour with \(\omega\) is debated. Based on arguments stemming from mixing-length theory (MLT), Zahn (1966, 1989) argued that the effective viscosity is proportional to the distance travelled by an eddy, i.e. the characteristic convective length scale. However, if the convective timescale exceeds the tidal timescale, the convective eddies can only interact with the tidal flow on the length scales an eddy can travel in a tidal period. Following this argument, the length scale, and thus the effective viscosity, is reduced according to \(\nu_{\rm eff}\propto\omega_{c}/\omega\). Goldreich & Nicholson (1977) on the other hand argued that only convective eddies with a frequency similar to the tidal frequency, i.e. \(\omega_{c}\sim\omega\), could contribute. These so-called 'resonant' eddies would then require both a smaller velocity and a smaller length scale to achieve this 'resonant' frequency. Following a Kolmogorov scaling argument, this results in an effective viscosity scaling as \(\nu_{\rm eff}\propto(\omega/\omega_{c})^{-2}\). Many works have been devoted to finding the correct scaling using numerical and asymptotic methods. The initial works of Penev et al. (2007, 2009a, 2009b) found evidence for the \(\omega^{-1}\) scaling, but did not probe very far into the fast tides regime (i.e. they considered \(\omega/\omega_{c}=\mathcal{O}(1)\)).
Subsequent works (Ogilvie & Lesur, 2012; Vidal & Barker, 2020a,b; Duguid et al., 2019, 2020) found strong evidence to favour the \(\omega^{-2}\) scaling for fast tides (\(\omega\gtrsim 10\omega_{c}\)), although a weaker "intermediate scaling" closer to \(\omega^{-1}\) (with exponent between \(-1\) and \(-1/2\)) has been observed for \(\omega\sim\omega_{c}\) (Vidal & Barker, 2020a; Duguid et al., 2020; Vidal & Barker, 2020b). In this paper we build upon Duguid et al. (2019, 2020), which used local box simulations to examine the effective viscosity of convective turbulence acting on the tidal flow. Here we also take into account the influence of rapid rotation on the convection, which is expected to be important in giant planets and young rapidly-rotating stars. We also use an elliptical background flow that corresponds more closely with the equilibrium tide, compared with the oscillating shear flow used in e.g. Duguid et al. (2019, 2020), which is stable to the elliptical instability. In De Vries et al. (2023), hereafter Paper 1, the non-linear interactions of the elliptical instability and convection were studied. We found evidence both for energy injection by the elliptical instability and for an effective viscosity arising from the interaction of turbulent convection with the equilibrium tide. On the other hand, the generation of convective Large Scale Vortices (LSVs), which on a planet may instead correspond with zonal flows at mid to low latitudes (Currie et al., 2020), was found to inhibit the elliptical instability for the Ekman numbers (ratio of viscous to Coriolis forces) we considered. In Paper 1 we focused on exploring the fluid dynamical interactions of the elliptical instability and convection. Here we build upon Paper 1 by endeavouring to quantify the tidal dissipation that arises from the elliptical instability as well as the effective viscosity of the convection acting on the equilibrium tide. To this end we will derive temperature-based scaling laws using mixing-length theory and rotating mixing-length theory for key convective quantities such as the vertical convective velocity, dominant length scale and frequency, and verify that they agree with our simulation results. Duguid et al. (2020) empirically obtained three regimes for the effective viscosity (as a function of the ratio of tidal to convective frequencies) in non-rotating simulations based on the aforementioned convective quantities. Here we apply rotating mixing-length theory to their scaling laws to derive corresponding expressions for the effective viscosity in the rapidly rotating regime (relevant for giant planets). We compare these predictions with simulations to validate the use of these prescriptions for rotating convection. If these agree, we might be able to use these expressions to compute the effective viscosity using realistic values of the Rayleigh number, Ekman number, viscosity and tidal deformation for giant planets and stars. To this end we continue to explore the local box model (Barker & Lithwick, 2013a,b; Le Reun et al., 2017) - representing a small patch of the polar regions of a planet or star (see Fig. 2) - from Paper 1. We extend the range of parameters they surveyed by running additional simulations varying the Ekman number, Rayleigh number and ellipticity.
Finally, we will apply our scaling laws to make predictions for \(Q^{\prime}\) - based on interior models of Hot Jupiters obtained using the MESA code - due to the elliptical instability and the turbulent effective viscosity, and compare these with the dissipation due to linearly-excited inertial waves. In Section 2 we will describe the model used and discuss the scaling law predictions obtained using RMLT. In Section 3 we derive scaling laws from our numerical simulations and compare them with our theoretical predictions. In Section 4 we outline the astrophysical implications of our results, by generating interior profiles of a Jupiter-like and a Hot Jupiter planet using the MESA code, which we use to evaluate the dissipation of the equilibrium tide and that due to inertial waves. We finally present a discussion and our conclusions in Section 5. ## 2 Model setup ### The elliptical instability We build upon the results of Paper 1, using the same setup, so we only give a brief overview of our model here. (See Paper 1 for a more detailed description.) In the frame rotating with the tidal bulge, the equilibrium tide is an elliptical flow inside the planet. We define the rotation rate \(\gamma\) of this flow as the difference of the planetary spin \(\Omega\) and the orbital rotation rate \(n\), i.e. \(\gamma\equiv\Omega-n\). We work in the frame rotating with the planet at the rate \(\Omega\), modelling a small patch of an equilibrium tidal flow, which we treat as a background flow \(\mathbf{U}_{0}\). Figure 2: Location of the local box in the convection zone of a Hot Jupiter. We indicate the rotation axis and the local temperature gradient, which is represented by the red (hot) and blue (cold) sides of the box. Following Barker and Lithwick (2013), the equilibrium tide can be written in this frame as: \[\mathbf{U}_{0}=\mathbf{A}\mathbf{x}=-\gamma\epsilon\begin{pmatrix}\sin(2\gamma t)&\cos(2\gamma t)&0\\ \cos(2\gamma t)&-\sin(2\gamma t)&0\\ 0&0&0\end{pmatrix}\mathbf{x}, \tag{3}\] where \(\mathbf{x}\) represents the position vector from the centre of the planet in the frame rotating with the planet. This represents the exact equilibrium tide of a uniformly rotating incompressible fluid body perturbed by an orbiting companion (Chandrasekhar, 1967; Barker et al., 2016), and also approximates the main features of the equilibrium tide in more realistic models (e.g. Ogilvie, 2012; Barker, 2020). The elliptical instability operates when two inertial waves have frequencies that approximately add up to the tidal frequency \(2\gamma\) (Kerswell, 2002). In the short wavelength limit, this occurs for two waves with frequencies \(\omega=\pm\gamma\). These waves must also satisfy the inertial wave dispersion relation: \[\omega=\pm 2\Omega\cos(\theta), \tag{4}\] where \(\theta\) is the angle between the wavevector and rotation axis, which therefore allows us to determine that the elliptical instability can only operate in the interval \(n\in[-\Omega,3\Omega]\). Outside this interval no inertial waves exist that satisfy both the dispersion relation and \(\omega=\pm\gamma\). Finally, it is known that the elliptical instability grows exponentially (in linear theory) at a rate proportional to \(\epsilon\gamma\) (Kerswell, 2002). For clarity of presentation \(\gamma=\Omega\) is chosen in this work, unless otherwise mentioned, resulting in \(n=0\), i.e. strictly representing the unphysical case in which the tidal bulge does not rotate and the body does not orbit the companion that causes the tidal deformation.
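The resonance condition and growth-rate scaling just described can be summarised in a few lines of code. The sketch below is illustrative only: the prefactor 9/16 is the standard inviscid maximum growth rate for small \(\epsilon\) assumed here for concreteness (this paper only uses a growth rate proportional to \(\epsilon\gamma\)), and the viscous damping term is a crude single-mode estimate.

```python
def elliptical_instability_check(Omega, n, epsilon, nu_eff=0.0, k=None):
    """Rough check of whether the elliptical instability can operate (illustrative only).

    Uses gamma = Omega - n and the requirement |gamma| <= 2|Omega| (equivalently
    n in [-Omega, 3*Omega]) so that inertial waves with omega = +/- gamma exist.
    The 9/16 prefactor is an assumed standard value for the inviscid growth rate.
    """
    gamma = Omega - n
    allowed = abs(gamma) <= 2.0 * abs(Omega)      # resonance condition for inertial waves
    sigma = (9.0 / 16.0) * epsilon * abs(gamma)   # assumed growth rate of the fastest mode
    if k is not None:
        sigma -= nu_eff * k**2                    # crude viscous damping of that mode
    return allowed and sigma > 0.0, sigma

# Example with the choice made in this work: n = 0, so gamma = Omega.
operates, sigma = elliptical_instability_check(Omega=1.0, n=0.0, epsilon=0.05)
print(operates, sigma)
```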
However, it turns out that for simulations the only linear effect of choosing a different value of \(\Omega\), and therefore a non-zero value of \(n\), would be to modify the fastest growing mode, and also its growth rate (e.g. Kerswell, 2002; Barker and Lithwick, 2013; De Vries et al., 2023). ### Governing equations and setup of the simulations We use Rotating Rayleigh-Benard Convection (RRBC) as our model to study the convective instability, as it is the simplest model of rotating convection (Chandrasekhar, 1961) which allows us to study its interaction with the elliptical instability. In addition, we use the Boussinesq approximation, which is appropriate for studying small-scale convective (and wavelike) flows. Using the Boussinesq approximation is valid if the vertical size of our simulated domain \(d\) is much smaller than a pressure or density scale height and the flows in the simulation are much slower than the sound speed (Spiegel and Veronis, 1960). However, by choosing this approximation we neglect variations in properties such as the density and temperature. Furthermore, since we require small vertical scales, we cannot model the largest-scale convective flows using this approximation. The box in our current setup represents a polar region, which we have illustrated in Fig. 2. This location arises from our choice of rotation axis, which points in the \(z\)-direction, and temperature profile, which solely depends on \(z\). By making this choice the local rotation and gravity vectors are either aligned or anti-aligned (depending on the sign of \(\Omega\)) and thus we are located at the poles. The aforementioned temperature profile of the conduction state, i.e. the temperature gradient introduced by the hot and cold plates at the bottom and top of our box, respectively, and about which we perturb, is given by: \[\alpha g(T-T_{0})=\frac{zN^{2}}{d}, \tag{5}\] where \(g\) is the local gravitational acceleration (assumed constant), \(\alpha\) is the (constant) thermal expansion coefficient and \(N^{2}\) is the (constant) squared Brunt-Vaisala (or buoyancy) frequency, which is (negative) positive for (un)stable stratification. We choose \(T_{0}=0\) without loss of generality. As a result, the temperature at the bottom is \(T(z=0)=0\), while the temperature at the top is \(T(z=d)=N^{2}/(\alpha g)\), such that the temperature difference is \(\Delta T=-N^{2}/\alpha g\). Note that the introduction of buoyancy modifies the (gravito-)inertial wave dispersion relation to: \[\omega^{2}=4\Omega^{2}\cos^{2}(\theta)+N^{2}\sin^{2}(\theta). \tag{6}\] To non-dimensionalise the governing equations we scale lengths by the vertical domain size \(d\) (representing the distance between the plates), times by the thermal timescale \(d^{2}/\kappa\), and we consequently scale velocities with \(\kappa/d\). Finally, we use \(T=\Delta T\theta\) to scale the temperature (i.e. the temperature is scaled by the temperature difference between the plates). 
Using these non-dimensionalisations and the Boussinesq approximation, the governing equations, in the frame rotating at the rate \(\Omega\) about \(z\), for the dimensionless perturbations \(\mathbf{u}\) and \(\theta\) to the background flow \(\mathbf{U}_{0}\) and temperature profile \(T(z)\) are: \[\frac{D\mathbf{u}}{Dt}+\mathbf{u}\cdot\nabla\mathbf{U}_{0}+\frac{ \mathrm{Pr}}{\mathrm{EK}}\hat{\mathbf{z}}\times\mathbf{u}=-\nabla p+\mathrm{ PrRa}\theta\hat{\mathbf{z}}+\mathrm{Pr}\nabla^{2}\mathbf{u}, \tag{7}\] \[\nabla\cdot\mathbf{u}=0,\] (8) \[\frac{D\theta}{Dt}-u_{z}=\nabla^{2}\theta, \tag{9}\] where \[\frac{D}{Dt}\equiv\frac{\partial}{\partial t}+\mathbf{U}_{0}\cdot\nabla+ \mathbf{u}\cdot\nabla, \tag{10}\] with \(\mathbf{u}=(u_{x},u_{y},u_{z})\) and \(p\) being the perturbation to the pressure. The non-dimensional parameters describing the convection are the Rayleigh, Ekman and Prandtl numbers: \[\mathrm{Ra}=\frac{\alpha g(-N^{2})d^{4}}{\nu\kappa},\qquad\mathrm{ Ek}=\frac{\nu}{2\Omega d^{2}},\qquad\mathrm{Pr}=\frac{\nu}{\kappa}, \tag{11}\] where \(\nu\) and \(\kappa\) are the constant kinematic viscosity and thermal diffusivity. Due to the equilibrium tidal background flow there are two additional dimensionless numbers in the system: \(\epsilon\) and \(\gamma\) (and there would also be \(n\) if we allowed rotation of the bulge). Finally, we can relate the Rayleigh number and dimensional squared buoyancy frequency: \(N^{2}=-\mathrm{Ra}\,\mathrm{Pr}\,\kappa^{2}/(\alpha gd^{4})\). Upon setting \(\mathrm{Pr}=1\) we find in dimensionless (thermal time) units: \(N^{2}=-\mathrm{Ra}\). Our simulations are executed in a small Cartesian box of dimensionless size \([L_{x},L_{y},1]\) with \(L_{x}=L_{y}=L\). As in Paper 1, to fully resolve bursts of the elliptical instability in tandem with the convective LSV we set \(L=4\) in most simulations. However, the simulations that measure properties unrelated to the elliptical instability are executed in a smaller box with \(L=2\). This box size ensures the LSV is still present, and the results are therefore similar to those with \(L=4\). From the appendix of Paper 1 we infer that the effective viscosity (without elliptical instability) is unaffected by this variation of the box size. The boundary conditions are periodic in the horizontal directions, and stress-free and impermeable in the vertical direction. We have chosen these boundary conditions because they are probably more relevant in the deep interior of a planet, far removed from any boundaries, than no-slip boundary conditions. The vertical boundary conditions are therefore: \(u_{z}(z=0)=u_{z}(z=1)=0\), \(\partial_{z}u_{x}(z=0)=\partial_{z}u_{x}(z=1)=\partial_{z}u_{y}(z=0)=\partial_{z} u_{y}(z=1)=0\). By choosing impermeable vertical boundaries the convection in our box represents a single convection cell in the vertical. Finally, vertical boundary conditions for the temperature perturbation are chosen to be perfectly conducting, with \(\theta(z=0)=\theta(z=1)=0\). The simulations are performed using the Snoopy code (Lesur and Longaretti, 2007), which implements a Fourier pseudo-spectral method using FFTW3 in a local Cartesian box. We use a sine-cosine decomposition in \(z\) and shearing waves (i.e. time-dependent Fourier modes) in \(x\) and \(y\) to account for the linear spatial dependence of the background flow. 
A 3rd-order Runge-Kutta scheme is used for the time-stepping, together with a CFL safety factor, usually set to 1.5, to ensure the timesteps are small enough to capture non-linear effects. The anti-aliasing in the code uses the standard 2/3 rule (Boyd, 2001). A variety of different Rayleigh numbers were analysed using the simulations. The values of the Rayleigh number are typically reported using the supercriticality \(R\equiv\mathrm{Ra}/\mathrm{Ra}_{c}\) for clarity, where \(\mathrm{Ra}_{c}\) is the onset Rayleigh number (determined numerically). The range of the studied supercriticalities at \(\mathrm{Ek}=5\cdot 10^{-5.5}\) is from 2 to 20. The studied values of \(\epsilon\) range from 0.01 to 0.20, and the Ekman number ranges from \(5\cdot 10^{-4.5}\) to \(5\cdot 10^{-6}\). ### Energetic analysis of simulations Following Paper 1 we derive the kinetic energy equation by taking the dot product of \(\mathbf{u}\) with Eq. 7 and subsequently volume-averaging all quantities, where the volume average is defined as: \(\langle X\rangle=\frac{1}{L^{2}d}\int_{V}X~{}dV\). We obtain: \[\frac{d}{dt}K=I+\langle\mathrm{PrRa}\theta u_{z}\rangle-D_{\nu}, \tag{12}\] where we have defined the total kinetic energy \(K\), the energy transfer rate from the background tidal flow \(I\) and the mean viscous dissipation rate \(D_{\nu}\) according to: \[K\equiv\frac{1}{2}\langle|\mathbf{u}|^{2}\rangle,\quad I\equiv-\langle\mathbf{u}\mathbf{A}\mathbf{u}\rangle,\quad D_{\nu}\equiv-\mathrm{Pr}\langle\mathbf{u}\cdot\nabla^{2}\mathbf{u}\rangle. \tag{13}\] To obtain an equation for the thermal (potential) energy when \(\mathrm{Ra}>0\), we multiply Eq. 9 by \(\mathrm{PrRa}\theta\) and average over the box to obtain: \[\frac{d}{dt}P=\langle\mathrm{PrRa}\theta u_{z}\rangle-D_{\kappa}, \tag{14}\] where we have defined the mean thermal energy \(P\) and the mean thermal dissipation rate \(D_{\kappa}\) as: \[P\equiv\mathrm{PrRa}\frac{1}{2}\langle\theta^{2}\rangle,\quad D_{\kappa}\equiv-\mathrm{PrRa}\langle\theta\nabla^{2}\theta\rangle. \tag{15}\] The total energy is \(E=K+P\), which thus obeys: \[\frac{d}{dt}E=I+2\langle\mathrm{PrRa}\theta u_{z}\rangle-D_{\nu}-D_{\kappa}=I+2\langle\mathrm{PrRa}\theta u_{z}\rangle-D, \tag{16}\] where \(D=D_{\nu}+D_{\kappa}\) is the total dissipation rate. In a steady state, i.e. no change in time of the total energy, it is expected that the (time-averaged value of the) energy injected together with the buoyancy work balances the total dissipation. Since there are two energy injection terms, the total dissipation cannot be used directly to infer tidal dissipation rates. However, the energy injected by the tide must be dissipated if a steady state is to be maintained. Therefore, to interpret the tidal energy dissipation rate we examine the tidal energy injection rate \(I\). (When \(\mathrm{Ra}<0\), the thermal energy is \(-P\) and a minus sign is introduced into both terms on the RHS of Eq. 14. The buoyancy work terms then cancel between Eq. 12 and Eq. 14, leaving only \(I\) and \(D\) in Eq. 16 such that in steady state \(I\approx D\).)
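The key diagnostics \(K\) and \(I\) require no derivatives and are simple volume averages. The following sketch computes them from gridded fields using the background-flow matrix of Eq. 3; it is an illustrative NumPy version only (the actual diagnostics in the simulations are evaluated spectrally within the Snoopy code), and the grid shape and test field are assumptions.

```python
import numpy as np

def tidal_flow_matrix(eps, gamma, t):
    """Background-flow matrix A(t) from Eq. 3 (frame rotating with the planet)."""
    s, c = np.sin(2 * gamma * t), np.cos(2 * gamma * t)
    return -gamma * eps * np.array([[s, c, 0.0], [c, -s, 0.0], [0.0, 0.0, 0.0]])

def energy_diagnostics(u, eps, gamma, t):
    """Box-averaged kinetic energy K and tidal energy injection rate I (Eq. 13).

    `u` has shape (3, Nx, Ny, Nz), holding the velocity perturbation on a uniform
    grid, so volume averages reduce to simple means.
    """
    A = tidal_flow_matrix(eps, gamma, t)
    K = 0.5 * np.mean(np.sum(u**2, axis=0))
    Au = np.einsum('ij,j...->i...', A, u)            # (A u) at every grid point
    I = -np.mean(np.sum(u * Au, axis=0))             # I = -<u . A u>
    return K, I

# Example with a random test field on a coarse grid (purely to exercise the routine):
rng = np.random.default_rng(0)
u = 1e-3 * rng.standard_normal((3, 16, 16, 16))
print(energy_diagnostics(u, eps=0.05, gamma=1.0, t=0.0))
```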
Since we know both the elliptical instability (Barker and Lithwick, 2013) and convection (e.g. Guervilly et al., 2014; Favier et al., 2014) in isolation can produce geostrophic flows such as vortices, we introduce further diagnostics to analyse these flows and their role in any possible bursty dynamical behaviour. To do this, we decompose the total energy injection from the background flow according to \[I=I_{2D}+I_{3D}, \tag{17}\] where the barotropic energy injection is defined as \(I_{2D}=-\langle\mathbf{u}_{2D}\mathbf{A}\mathbf{u}_{2D}\rangle\) and the baroclinic energy injection is defined as \(I_{3D}=-\langle\mathbf{u}_{3D}\mathbf{A}\mathbf{u}_{3D}\rangle\). \(I_{2D}\) (and \(\mathbf{u}_{2D}\)) are defined to include all (geostrophic) modes where the wavevector has only non-vanishing \(x\) and \(y\) components, with \(k_{z}=0\), and \(I_{3D}\) (and \(\mathbf{u}_{3D}\)) includes all the modes with \(k_{z}\neq 0\). Because pure inertial waves with \(k_{z}=0\) have \(\omega=0\), and because this work is concerned with convectively unstable simulations (i.e. no gravity waves exist which could have non-zero frequencies even when \(k_{z}=0\)), this decomposition can be crudely thought of as a decomposition into waves/convective eddies (\(I_{3D}\)) and geostrophic vortices (\(I_{2D}\)). We have found that at small ellipticities the time-averaged energy input into the vortical motions \(I_{2D}\) is approximately zero (or small, see also Barker and Lithwick, 2013), but that the input into the waves \(I_{3D}\) is on average non-zero (which it must be when the elliptical instability operates) and clearly demonstrates any bursty behaviour observed. Based on this observation, only results derived from \(I_{3D}\) will be plotted in this paper. Arguments to describe scaling laws for the dissipation due to the elliptical instability were first proposed in Barker and Lithwick (2013) by (crudely) picturing the instability saturation as involving the most unstable single mode, whose amplitude saturates when its growth rate (\(\sigma\)) balances its nonlinear cascade rate. Thus, if the most important mode of the elliptical instability satisfies \(\sigma\sim ku\), where \(k\) is its wavenumber magnitude and \(u\) is its velocity amplitude, then we find \(u\sim\epsilon\gamma/k\). The total dissipation rate \(D\) therefore scales as \(D\sim u^{2}\sigma\sim\epsilon^{3}\gamma^{3}/k^{2}\). Thus, in such a statistically-steady state the dissipation and energy injection rate are expected to scale as \[D=I\propto\epsilon^{3}. \tag{18}\] If this scaling law holds, the dissipation falls off rapidly as the orbital period of the planet increases, since \(\epsilon\propto P_{\mathrm{orb}}^{-2}\), resulting in \(Q^{\prime}\propto P_{\mathrm{orb}}^{4}\). The result of crudely applying this is that circularisation of Hot Jupiters would only be predicted out to about three-day orbital periods. In Paper 1 we observed that, when the elliptical instability operates, the energy injection is consistent with either scaling as \(\epsilon^{3}\) or possibly as the steeper \(\epsilon^{6}\). We will explore this issue further here using simulations, and also determine the astrophysical implications of these results. We can also interpret the energy transfer rates \(I\) and \(I_{3D}\) in terms of an effective viscosity as in Paper 1, obtaining \(\nu_{\mathrm{eff}}\) and \(\nu_{\mathrm{eff},3D}\) respectively. This interpretation is most commonly used to measure the interaction between turbulent convection and the equilibrium tide, but also applies for the elliptical instability.
To calculate the effective viscosity, we assume that the tidal flow is viscously dissipated by some spatially and temporally constant kinematic viscosity \(\nu_{\mathrm{eff}}\), which will depend in principle upon \(\mathrm{Ra},\mathrm{Ek},\mathrm{Pr},\gamma\) and \(\epsilon\) (and also \(n\), if that was varied), as well as \(L\). This viscous dissipation rate should then equal the rate of work done on the convective flow by the tidal flow. Following Goodman and Oh (1997); Ogilvie and Lesur (2012); Duguid et al. (2019), we note that the rate of work done on the convective flow is: \[I=-\frac{1}{V}\int_{V}\mathbf{u}\cdot(\mathbf{u}\cdot\nabla)\mathbf{U}_{0}dV. \tag{19}\] To obtain the rate of energy dissipation we define the strain rate tensor for the tidal flow as \(e_{ij}^{0}\equiv\frac{1}{2}\left(\partial_{i}U_{0,j}+\partial_{j}U_{0,i}\right)\), resulting in: \[\frac{2\nu_{\mathrm{eff}}}{V}\int_{V}e_{ij}^{0}e_{ij}^{0}dV=4\nu_{\mathrm{eff}}\gamma^{2}\epsilon^{2}. \tag{20}\] The effective viscosity is then _defined by_ \[\nu_{\rm eff}=I/(4\gamma^{2}\epsilon^{2}). \tag{21}\] In Paper 1 we found that when the elliptical instability does not operate the convection can still interact with the tidal flow to provide \(I\propto\epsilon^{2}\) such that \(\nu_{\rm eff}\) is independent of \(\epsilon\). Our interpretation of this regime as "convective turbulent viscosity damping the tidal flow" can be understood from crudely applying classical eddy viscosity arguments to the Reynolds stress component \(\langle u_{i}u_{j}\rangle\) that appears in Eq. 19. In this approach, the velocity correlation would be proportional to the tidal velocity shear, i.e., \(\langle u_{i}u_{j}\rangle\propto\nabla\mathbf{U}_{0}\) (see for example Eq. 19 in Terquem 2021) and \(|\nabla\mathbf{U}_{0}|\sim\epsilon\), thus leading to \(I\propto\epsilon^{2}\). In our model we do not consider the evolution of the tidal flow \(\mathbf{U}_{0}\). Instead we treat it as a fixed (but time-dependent) background flow. The energy in this background flow is considered to be much larger than the energy in the perturbations. As such, any energy transferred from this flow to the perturbations (or vice versa) is negligible compared to the energy in the background flow. Therefore, the background flow itself is not modified in our simulations. As a consequence, our results apply to a snapshot in the evolution of our system in time. This is a reasonable approximation, considering that timescales of tidal evolution are typically much longer than convective or rotational timescales. ### Scalings of the effective viscosity using mixing-length theory We concluded in Paper 1 that turbulent convection acts to damp the equilibrium tidal flow like an effective viscosity (independently of \(\epsilon\)). In Duguid et al. (2020), who studied the effective viscosity in a non-rotating local box model of convection, three different regimes with associated scaling laws for the effective viscosity were observed.
The scalings they obtained depend on the convective velocity \(u_{c}\), the convective length scale \(l_{c}\) and the ratio of the tidal frequency \(\omega=2\gamma\) to the convective frequency \(\omega_{c}\), and are given by: \[\nu_{\rm eff}=\begin{cases}5u_{c}l_{c}&\frac{|\omega|}{\omega_{c}}\lesssim 10^{-2},\\ \frac{1}{2}u_{c}l_{c}\left(\frac{\omega_{c}}{|\omega|}\right)^{\frac{1}{2}}&\frac{|\omega|}{\omega_{c}}\in[10^{-2},5],\\ \frac{25}{\sqrt{3}}u_{c}l_{c}\left(\frac{\omega_{c}}{|\omega|}\right)^{2}&\frac{|\omega|}{\omega_{c}}\gtrsim 5.\end{cases} \tag{22}\] We have reported the (upper bound) numerical coefficients from Duguid et al. (2020) here, but wish to clarify that rotation and our different background flow might modify these. Note that the choice of scaling laws for the convective quantities \(u_{c}\), \(l_{c}\) and \(\omega_{c}\) will depend on rotation (and perhaps magnetic fields etc.). Therefore, before we can apply the above scalings, we must derive appropriate scaling laws for these quantities depending on which regime the flow is in and verify that these regimes apply in our numerical simulations. In non-rotating simulations, it is reasonable to set \(l_{c}=d\), pretending that \(d\) is the Boussinesq equivalent of a pressure scale height (or mixing length, i.e. a multiple of a pressure scale height). However, it is not clear whether this is appropriate for rapid rotation, where we might imagine using a shorter horizontal length scale for \(l_{c}\) would be more appropriate instead, which would reduce the turbulent viscosity. Which of these is appropriate may depend on the intended application, i.e. the effective viscosity is not a property of the fluid, but a way to model the interaction between a particular fluid flow and the convective flow. From now on we choose \(l_{c}\) to represent a horizontal convective length scale, which is therefore modified by rotation, and we will show that this is a suitable choice to match our simulation results. We can apply mixing-length theory (MLT, Bohm-Vitense, 1958) to predict the scaling laws of convective properties such as convective velocities, length scales, turnover times and effective viscosities. MLT has been applied to non-rotating cases previously (e.g. Zahn, 1966; Duguid et al., 2019, 2020), but our cases are sufficiently rapidly rotating that we must account for modifications of convective properties by rotation. To do so, we use rotating mixing-length theory (RMLT, Stevenson, 1979) to predict scaling laws for rotating convection (following e.g. Barker et al., 2014; Mathis et al., 2016; Currie et al., 2020). Within RMLT, the vertical convective velocity, which is expected to be roughly equal to the horizontal velocity on the relevant scales, is given by: \[u_{c}\sim d^{1/5}F^{2/5}\Omega^{-1/5}, \tag{23}\] where \(F\) is the vertical heat flux (more specifically a buoyancy flux with units of \(L^{2}T^{-3}\)). We may write this in terms of the standard dimensionless numbers by converting the Rayleigh number to a flux-based Rayleigh number \(\mathrm{Ra}_{F}\), which are related by \[\mathrm{Ra}\sim\mathrm{Ra}_{F}^{2/5}\mathrm{Pr}^{1/5}\mathrm{Ek}^{-4/5}\sim F^{2/5}d^{8/5}\kappa^{-1}\nu^{-1/5}\mathrm{Ek}^{-4/5}, \tag{24}\] since \(N^{2}\sim F^{2/5}\Omega^{4/5}d^{-4/5}\) and by definition \(\mathrm{Ra}_{F}=\mathrm{Nu}\,\mathrm{Ra}\), where \(\mathrm{Nu}=F/(-\kappa N^{2})\) is a Nusselt number (ratio of total heat flux to conductive flux).
Converting to the Rayleigh number (based on a fixed temperature drop or \(N^{2}\)) from the flux-based Rayleigh number (based on a fixed heat flux \(F\)) entails a switch from flux-based scalings to temperature-based (and by extension \(N^{2}\)-based) scalings. This switch is necessary as the simulations are executed using a constant temperature difference, i.e. they are temperature-based rather than flux-based. After this switch, RMLT predicts for the convective velocity: \[u_{c}\sim\mathrm{RaEk}\frac{\kappa}{d}. \tag{25}\] Furthermore, the dominant horizontal length scale of convection is predicted to scale as \[l_{c}\sim\Omega^{-3/5}F^{1/5}d^{3/5}\sim\frac{\mathrm{Ra}^{1/2}\mathrm{Ek}}{ \mathrm{Pr}^{1/2}}d. \tag{26}\] Finally, the convective turnover frequency (based on the horizontal length scale) according to RMLT is \[\omega_{c}\sim\frac{u_{c}}{l_{c}}\sim\mathrm{Ra}^{1/2}\mathrm{Pr}^{1/2}\frac{ \kappa}{d^{2}}. \tag{27}\] These are the RMLT scalings written in terms of Rayleigh, Ekman and Prandtl numbers. These scalings agree with those found in Guervilly et al. (2019); Aurnou et al. (2020), and with many others, indicating that the results found from the Coriolis-Inertia-Archimedean (CIA) balance are in agreement with the predictions of RMLT following Stevenson (1979). The three effective viscosity scaling laws in Eq. 22 can be written using these predictions from RMLT as: \[\nu_{\rm eff}\propto\begin{cases}\mathrm{Ra}^{3/2}\mathrm{Ek}^{2}\mathrm{Pr}^{ -1/2}\kappa&\text{low frequency},\\ \mathrm{Ra}^{7/4}\mathrm{Ek}^{2}\mathrm{Pr}^{-1/4}\kappa^{3/2}d^{-1}\omega^{-1 /2}&\text{intermediate freq.},\\ \mathrm{Ra}^{5/2}\mathrm{Ek}^{2}\mathrm{Pr}^{1/2}\kappa^{3}d^{-4}\omega^{-2}& \text{high frequency}.\end{cases} \tag{28}\] The first of these regimes occurs when the tidal frequency is low, while the rotation rate is high (so that we use RMLT rather than MLT). Naively, this situation seems counter-intuitive because the tidal frequency is related to the rotation rate, but it can occur if the body is close to spin-orbit synchronisation. We have not supplied ranges of \(\omega/\omega_{c}\) for which these apply as we will determine these based on our simulations. Instead we elect to refer to these regimes as the low, intermediate and high frequency regimes, where the frequency in question is the tidal frequency (compared with the convective frequency). Note that these regimes have not been previously verified with simulations of rotating convection interacting with tidal flows (unlike in the non-rotating case). We can use the scalings in Eqs. 25, 26 and 28 to analyse our results as a function of both Rayleigh and Ekman numbers, in regimes attainable by simulations. To analyse our simulation results in terms of the Ekman number we used two approaches: fixing the Rayleigh number and fixing the supercritical \(R=\mathrm{Ra}/\mathrm{Ra}_{\mathrm{C}}\). The second approach modifies the power of the Ekman number scaling as the critical Rayleigh number scales as \(\mathrm{Ra}_{\mathrm{C}}\approx 3(\pi^{2}/2)^{2/3}\mathrm{Ek}^{-4/3}\) for rapid rotation, which results in \(u_{\mathrm{C}}\sim R\mathrm{Ek}^{-1/3}\) and \(l_{\mathrm{C}}\sim R^{1/2}\mathrm{Ek}^{1/3}\), omitting all parameters which are set to one. 
This leads to the following changes to the \(\nu_{\mathrm{eff}}\) scalings: \[\nu_{\mathrm{eff}}\propto\begin{cases}R^{3/2}\mathrm{Ek}^{0}&\mathrm{low\ frequency},\\ R^{7/4}\mathrm{Ek}^{-1/3}\omega^{-1/2}&\mathrm{intermediate\ freq.},\\ R^{5/2}\mathrm{Ek}^{-4/3}\omega^{-2}&\mathrm{high\ frequency}.\end{cases} \tag{29}\] For completeness, since some of our simulations enter the regime where rotation is no longer rapid, we include here the scalings of the relevant quantities using non-rotating MLT in terms of Rayleigh and Prandtl numbers: \[u_{\mathrm{C}}\sim\mathrm{Ra}^{1/2}\mathrm{Pr}^{1/2}\frac{\kappa}{d}, \tag{30}\] and the relevant length scale in this regime is likely to be comparable with the vertical length scale \(d\), i.e. \(l_{\mathrm{C}}=d\). It follows that: \[\omega_{\mathrm{C}}\sim\mathrm{Ra}^{1/2}\mathrm{Pr}^{1/2}\frac{\kappa}{d^{2}}, \tag{31}\] which is the same scaling obtained previously using RMLT. The three regimes we expect for the effective viscosity using MLT are then: \[\nu_{\mathrm{eff}}\propto\begin{cases}\mathrm{Ra}^{1/2}\mathrm{Pr}^{1/2}\kappa&\mathrm{low\ frequency},\\ \mathrm{Ra}^{3/4}\mathrm{Pr}^{3/4}\kappa^{3/2}d^{-1}\omega^{-1/2}&\mathrm{intermediate\ freq.},\\ \mathrm{Ra}^{3/2}\mathrm{Pr}^{3/2}\kappa^{3}d^{-4}\omega^{-2}&\mathrm{high\ frequency}.\end{cases} \tag{32}\] The high frequency regime within non-rotating MLT is unlikely to occur in our simulations as that regime only applies when the tidal frequency is high, yet the rotation rate is low. It is, however, likely to be important in reality, for example inside Hot Jupiter host stars that have been spun down by magnetic braking (e.g. Benbakoura et al., 2019). If a Hot Jupiter host star is spun down, and is thus slowly rotating, but there is a large orbital frequency due to the short-period Hot Jupiter companion, the tidal frequency is also high (and in the fast tides regime), indicating that this regime is relevant there (e.g. Duguid et al., 2020; Barker, 2020). From this multitude of scalings a new question arises: for a given system, which scalings (if any!) are the correct ones? This question in reality consists of two separate questions. The first part of the question is related to whether MLT or RMLT (or neither) predictions should be used, and the second part relates to which tidal frequency regime is applicable. One of our key aims in this paper is to test these scalings and to determine the appropriate ones for astrophysical extrapolation. We can quantify the transition from MLT to RMLT using the convective Rossby number: \[\mathrm{Ro}_{\mathrm{C}}\equiv\left(\frac{u_{\mathrm{C}}}{2\Omega l_{c}}\right)=\left(\frac{\omega_{\mathrm{C}}}{2\Omega}\right), \tag{33}\] which is based on the spin of the planet, and the convective velocities and frequencies. Fortunately, using these temperature-based definitions, regardless of whether the regime in question is MLT or RMLT, the expression for the Rossby number in terms of the diffusion-free scalings is the same because \(\omega_{\mathrm{C}}\) has the same form in both regimes. This useful result was also found previously (e.g. Aurnou et al., 2020), and leads to the expression for the convective Rossby number: \[\mathrm{Ro}_{\mathrm{C}}\sim\mathrm{Ra}^{1/2}\mathrm{Pr}^{-1/2}\mathrm{Ek}. \tag{34}\]
On the other hand, the transitions between the different frequency regimes for \(\nu_{\mathrm{eff}}\) depend on the ratio \(\omega/\omega_{c}\), which we can write as: \[\frac{\omega}{\omega_{c}}=\frac{\omega}{u_{\mathrm{C}}/l_{\mathrm{C}}}=\frac{1}{2}\frac{2\omega l_{c}}{u_{\mathrm{C}}}\equiv\frac{1}{2}\mathrm{Ro}_{\omega}^{-1}. \tag{35}\] We have defined this quantity as a "tidal convective Rossby number", \(\mathrm{Ro}_{\omega}\). The two Rossby numbers are related via the factor \(\Omega/\omega\). In this work, the two Rossby numbers differ by a factor of \(1/2\), because \(\Omega=\gamma=\frac{1}{2}\omega\) is set for the simulations with a given Ek. The regime transitions are thus expected to occur at roughly the same value of the rotation rate. Using the tidal frequency transitions obtained in Duguid et al. (2020), where the transition from the intermediate to the high frequency regime occurs around \(\frac{\omega}{\omega_{c}}\approx 5\), this may be expected to occur here at \(\mathrm{Ro}_{\omega}\approx 0.1\). The transition from MLT to RMLT on the other hand is likely to start at \(\mathrm{Ro}_{\mathrm{C}}\approx 0.1\) (e.g. Fig. 4 of Barker et al., 2014).

### Illustrative simulations

To illustrate the flow observed in our simulations, we plot snapshots of the vertically-averaged vertical vorticity perturbation (relative to the elliptical flow) \(\langle\omega_{z}\rangle_{z}\) at Ek = \(5\cdot 10^{-5.5}\), \(t=0.12\) in Fig. 3. In the figure on the left we plot the simulation with \(\mathrm{Ra}=6\mathrm{Ra}_{\mathrm{C}}\), \(\epsilon=0.04\). In this simulation the equilibrium tide is present (as a background flow, but is not shown explicitly), but the ellipticity is sufficiently small such that the convective LSV inhibits the elliptical instability (Paper 1). The observed behaviour is a cyclonic convective LSV embedded in an anticyclonic background. However, the cyclone appears very noisy due to the presence of many small-scale convective eddies. In the figure on the right we plot the simulation with \(\mathrm{Ra}=6\mathrm{Ra}_{\mathrm{C}}\), \(\epsilon=0.1\). This is in the regime with a strong elliptical instability, albeit with a slightly larger \(\epsilon\) than realistically expected for a Hot Jupiter. For illustration we have chosen a snapshot during a burst of the elliptical instability. The cyclonic vortex is stronger than the one in the left panel. Furthermore, the surrounding background is more strongly anticyclonic as a result. Our subsequent analysis of the contributions of the elliptical instability to the energy injection rate (and hence tidal dissipation rate) is based on flows more like the one on the right of Fig. 3, while the analysis of the effective viscosity of convection originates primarily from quantities measured from flows like the one shown on the left.

## 3 Scaling laws for the elliptical instability and rotating convection

Our simulations necessarily use dimensionless parameters that are far from the astrophysical ones, except perhaps for \(\epsilon\) for the hottest Jupiters. Hence, we now turn to obtain scaling laws for the energy injection due to the elliptical instability to compare with the heuristic arguments in SS 2.3, as well as scaling laws for the convective velocity and effective viscosity by testing the prescriptions obtained in SS 2.4.
For the latter, we choose parameters in the strongly rotationally-constrained regime, with fast tides, and thus we expect to observe the high frequency RMLT scaling for the effective viscosity in our simulations. We will also justify this regime as being the most relevant in giant planets later in SS 4.

### Energy injection due to elliptical instability

When the flow is sufficiently turbulent, the energy injection rate (\(I_{3D}\)) due to the elliptical instability on its own scales consistently with \(\epsilon^{3}\) (Barker & Lithwick, 2013a,b). However, the energy injection we observe in our simulations does not result from the elliptical instability alone. We plot the energy injection \(I_{3D}\) as a function of \(\epsilon\) for various values of \(\mathrm{Ra}\) at fixed \(\mathrm{Ek}=5\cdot 10^{-5.5}\) in Fig. 4, which we divide into two regimes by a vertical dashed line. This vertical dashed line is located at \(\epsilon=0.08\). As we found in Paper 1, the points to the left of this line represent simulations without visible bursts of elliptical instability for \(\mathrm{Ra}\gtrsim 2\mathrm{Ra}_{c}\), for which it appears to have been largely suppressed. The \(\mathrm{Ra}=6\mathrm{Ra}_{c}\) data points in burgundy are fitted using an \(\epsilon^{2}\) scaling. The data agrees very well with this scaling for \(\epsilon\) below the transition, indicating that the energy injection here corresponds to an effective viscosity that is independent of \(\epsilon\). This presumably results from the action of convective turbulence in damping the tidal flow rather than from the elliptical instability, as we will justify further in SS 3.2. The points on the right of the vertical dashed line feature bursts of instability in which the kinetic energy and energy transfer rates repeatedly grow to large values, indicating that the elliptical instability operates in this regime. The operation of the elliptical instability appears to be in addition to the effective viscosity resulting from convective turbulence, but the energy injection rate due to the elliptical instability is much larger, as we illustrate by the strong departure of these points from the black \(\epsilon^{2}\) line. We fit these with our (naive) theoretically predicted \(\epsilon^{3}\) fit (solid-blue line), and a previously observed \(\epsilon^{6}\) fit (solid-red line, Barker & Lithwick, 2013a). Both fits are consistent with the data on the right hand side (over such a narrow range of \(\epsilon\)), and are inconsistent with data on the left. Furthermore, the data and fits are consistent at all values of \(\mathrm{Ra}\), indicating that this scaling is independent of the Rayleigh number. The energy injection rate of the elliptical instability would remain greater than that of the effective viscosity due to convection for \(\epsilon\gtrsim 0.01\) if we extrapolate the former with an \(\epsilon^{3}\) scaling. Following Barker & Lithwick (2013b), we use the naive theoretical prediction to obtain a proportionality constant \(\chi\) from our fit to the data shown in Fig. 4 such that \(I_{3D}\equiv\chi\epsilon^{3}\gamma^{3}\). We find \(\chi=0.044\) for the plotted blue line, with \(\chi\approx 0.18\) as an upper estimate when fitting to the top right clump of data points. If instead we calculate based on \(I_{3D}\propto\chi_{2}\epsilon^{6}\gamma^{3}\), we obtain \(\chi\equiv\chi_{2}\epsilon^{3}\approx 22.45\cdot\epsilon^{3}\).
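As a quick numerical check of these fitted prefactors (our own illustrative arithmetic, not additional simulation data), the sketch below evaluates the effective \(\chi\) implied by the \(\epsilon^{6}\) fit at a few ellipticities:

```python
# chi from the epsilon^3 fit, and chi2 from the epsilon^6 fit, as quoted above.
chi, chi2 = 0.044, 22.45

for eps in (0.04, 0.06, 0.10):
    chi_equiv = chi2 * eps**3   # the epsilon^6 fit expressed as an effective chi
    print(f"eps = {eps:.2f}: effective chi = {chi_equiv:.2e} (epsilon^3 fit: {chi})")
# e.g. eps = 0.06 gives an effective chi of about 4.8e-3.
```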
To illustrate the efficiency of this \(\epsilon^{6}\) scaling we insert the highest-inferred ellipticity of a Hot Jupiter, \(\epsilon=0.06\), and find \(\chi=4.8\cdot 10^{-3}\). Hence, the elliptical instability is considerably weaker if this steeper scaling applies. The \(\epsilon^{3}\) scaling can thus be viewed as an "upper bound" on the energy transfer rates resulting from the elliptical instability for small \(\epsilon\).

### Comparison of RMLT predictions to the simulations

In this section we explore further the regime for \(\epsilon\lesssim 0.08\) that we have identified, and we will demonstrate that it results from convective turbulence damping the background tidal flow. First, we fit the convective velocities as a function of Rayleigh number in the left panel of Fig. 5 to verify our predictions based on RMLT. The data is obtained from simulations at fixed \(\mathrm{Ek}=5\cdot 10^{-5.5}\), and with such values of \(\epsilon\) that only sustained energy injection is present without visible bursts of elliptical instability (which tend to produce larger vertical velocities when they occur). These values of \(\epsilon\) that contain no visible bursts of the elliptical instability vary with Rayleigh number, as stronger convective driving results in stronger suppression of the elliptical instability; for example at \(\mathrm{Ra}=4\mathrm{Ra}_{c}\approx 0.9\cdot 10^{8}\) values up to \(\epsilon=0.04\) are used, while at \(\mathrm{Ra}=10\mathrm{Ra}_{c}\approx 2.2\cdot 10^{8}\) we use up to \(\epsilon=0.075\), and at \(\mathrm{Ra}=20\mathrm{Ra}_{c}\approx 4.4\cdot 10^{8}\) we use up to \(\epsilon=0.1\). The same values of \(\epsilon\) are used for all subsequent figures as a function of \(\mathrm{Ra}\). In this and subsequent figures, orange circles represent simulations without bursts of the elliptical instability and blue circles represent those in which there are visible bursts. We plot the best fit RMLT scaling in solid-red and, for stronger convection (i.e. relatively weaker rotation), we fit the non-rotating MLT scaling in solid-black. The RMLT scaling is in very good agreement with our data for \(\mathrm{Ra}\lesssim 3\cdot 10^{8}\), indicating that RMLT is the appropriate description of rotating convection in our simulations. We separately fit the convective velocities as a function of the rotation rate \(\Omega\) in the right panel of Fig. 5 at constant \(\mathrm{Ra}=1.3\cdot 10^{8}\) with \(\epsilon\in[0.02,0.05]\). These values of \(\epsilon\) are used in all subsequent figures at fixed Ra. We have elected to plot these results as a function of \(\Omega\) instead of Ekman number because \(\Omega\) has a more direct relation to the tidal frequency \(\omega\) than the Ekman number, particularly in real bodies where \(\nu,d\neq 1\). In these simulations we have set \(\nu=d=1\), however, so \(\Omega=(1/2)\mathrm{Ek}^{-1}\).

Figure 3: The vertical vorticity perturbation averaged over \(z\) (\(\langle\omega_{z}\rangle_{z}\)) of the flow. The cyclonic vortex is centred for clarity in both images. Left: Convection on top of the equilibrium tide in the regime without the elliptical instability, \(\mathrm{Ra}=6\mathrm{Ra}_{c}\), \(\epsilon=0.04\), \(\mathrm{Ek}=5\cdot 10^{-5.5}\) at \(t=0.12\). Right: Convection on top of the equilibrium tide in the regime with a strong elliptical instability, with \(\mathrm{Ra}=6\mathrm{Ra}_{c}\), \(\epsilon=0.1\), \(\mathrm{Ek}=5\cdot 10^{-5.5}\) at \(t=0.12\).
The simulations at high rotation rate do feature bursts of the elliptical instability, because the associated high tidal frequency strengthens the elliptical instability whilst weakening the convective driving, since the Rayleigh number is fixed. The data points at strong rotation, \(\Omega\geq 10^{4.4}\), fit the RMLT prediction of \(\Omega^{-1}\) well. The data points at weaker rotation rates become more weakly dependent on \(\Omega\) as they begin to approach the non-rotating MLT prediction. The black-solid line fitted to the left-most data points scales only weakly as \(\Omega^{-0.2}\). It is expected that at even smaller rotation rates, or larger Rayleigh numbers, this scaling would become fully independent of rotation. This figure indicates that the transition from MLT to RMLT is indeed gradual, instead of abrupt. From both figures we find - according to RMLT - that the convective velocity is well-described by \[u_{c}=0.28\text{RaEk}\frac{\kappa}{d}, \tag{36}\] for rapid rotation, and for weaker rotation it follows the non-rotating MLT scaling \[u_{c}=7.1\cdot 10^{-2}\text{Ra}^{1/2}\text{Pr}^{1/2}\frac{\kappa}{d}. \tag{37}\] Note that both scalings are in fact diffusion-free but have been written using the standard dimensionless numbers from our fits. Next we obtain the horizontal length scale from simulations at fixed Rayleigh number \(\text{Ra}=1.3\cdot 10^{8}\) and at fixed supercriticality \(R=6\) as a function of \(\Omega\).

Figure 4: Energy injection rate (into 3D modes) \(I_{3D}\) as a function of \(\epsilon\) for various Rayleigh numbers. The vertical dashed line at \(\epsilon=0.08\) marks the transition between sustained behaviour on the left, and bursts in addition to sustained behaviour on the right. Three lines are fitted to the data at \(\mathrm{Ra}=6\mathrm{Ra}_{c}\). The sustained behaviour is consistent with an \(\epsilon^{2}\) scaling for \(\epsilon\leq 0.08\), represented by the black line. Bursts of the elliptical instability contribute on top of this sustained energy injection, resulting in a much larger energy injection for larger \(\epsilon\). The sustained+bursts energy injection is fitted using an \(\epsilon^{3}\) fit in blue, and an \(\epsilon^{6}\) fit in red.

Figure 5: Left: Scaling of the vertical convective velocity compared with the predictions of MLT and RMLT at fixed \(\mathrm{Ek}=5\cdot 10^{-5.5}\). Only those simulations with sufficiently small ellipticities are used such that no bursts of the elliptical instability are present, as indicated by the orange data points. At these ellipticities the vertical velocity is negligibly impacted by the ellipticity. We observe the RMLT scaling with \(\mathrm{Ra}\) in red, and a hint of the non-rotating MLT scaling with \(\mathrm{Ra}^{1/2}\) in black. Right: Scaling of the vertical convective velocity at fixed \(\mathrm{Ra}=1.3\cdot 10^{8}\) and \(\epsilon\in[0.02,0.05]\). The blue data points correspond to simulations with bursts of the elliptical instability. We retrieve the RMLT scaling in red at large \(\Omega\), and find that the scaling tends towards the MLT prediction of being independent of rotation rate as \(\Omega\) becomes small, here illustrated by the black-solid line, which follows \(\Omega^{-0.2}\).
We use two different methods to calculate a dominant \(l_{c}\), illustrated here using the heat flux spectrum \(F(k_{\perp})=\text{Re}(\hat{u}_{z}\hat{T}^{*})\) as a function of horizontal wave number \(k_{\perp}=\sqrt{k_{x}^{2}+k_{y}^{2}}\), where hats indicate a 2D (\(k_{x},k_{y}\)) Fourier transform and we have averaged over the inner vertical 1/3 of the box, i.e. between \(z=1/3\) and \(z=2/3\), and subsequently summed up the contribution from all modes within an integer bin of \(k_{\perp}\). The first prescription was used by Barker et al. (2014); Currie et al. (2020), and is obtained by: \[l_{c}=2\pi\left(\frac{\int k_{\perp}F(k_{\perp})dk_{\perp}}{\int F(k_{\perp})dk_{\perp}}\right)^{-1}, \tag{38}\] and the second was used by Parodi et al. (2004): \[l_{c}=2\pi\frac{\int k_{\perp}^{-1}F(k_{\perp})dk_{\perp}}{\int F(k_{\perp})dk_{\perp}}. \tag{39}\] In our simulations both methods agree very well when based on the same quantity. However, vastly different results are obtained if the energy spectrum (as used by Parodi et al., 2004) is used instead of \(F(k_{\perp})\) (as used by Barker et al., 2014; Currie et al., 2020), as we show in both panels of Fig. 6. The length scales calculated using the heat flux according to Eq. 38 are plotted in orange diamonds, and the length scales according to Eq. 39 but for the vertical kinetic energy spectrum \(E_{z}(k_{\perp})=\frac{1}{2}|\hat{u}_{z}|^{2}\) instead of \(F(k_{\perp})\) are plotted in yellow diamonds. We have opted to calculate length scales based on the "vertical kinetic energy" spectrum \(E_{z}(k_{\perp})\) in the latter instead of the total kinetic energy spectrum because the total kinetic energy spectrum is strongly dominated by the large scale horizontal motions of the LSV. This forces the power to be concentrated on the largest scales, while these horizontal motions are unlikely to contribute substantially to heat transport or provide the dominant contribution to the effective viscosity. For completeness, the length scale obtained from the temperature fluctuation spectrum, i.e. \(|\hat{T}(k_{\perp})|^{2}\), is also plotted in green diamonds. Furthermore, we have added the length scale corresponding to the highest peak of the heat flux spectrum as a proxy for the dominant length scale in blue squares. The length scales corresponding to the peaks in the vertical kinetic energy and temperature perturbation spectra are omitted, because they tend to be located at the box scale, likely due to the influence of the LSV, and then rapidly decrease and eventually align with the linear onset scale for \(\Omega\gtrsim 10^{4.6}\). Finally, fits to the data are included, with the RMLT prediction fit in solid-red and the linear onset length scale in dashed-purple. The left panel displays \(l_{c}\) as a function of \(\Omega\) at fixed \(\text{Ra}=1.3\cdot 10^{8}\), on the same range as the right panel of Fig. 5. We find that the blue squares, i.e. the peaks of the heat flux spectrum, follow a fit proportional to \(\Omega^{-1}\) in solid-red. Note that the blue squares do not agree with this fit when \(\Omega\gtrsim 10^{4.5}\), which is probably because the simulations are not turbulent enough then to follow RMLT and instead lie more closely to the linear onset length scale. In terms of the length scales obtained from the integrals, there are substantial differences between those calculated based on different quantities.
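For reference, a minimal sketch of the two integral estimates defined in Eqs. 38 and 39, applied to a binned spectrum, is shown below; the arrays and the synthetic spectrum are purely illustrative placeholders rather than simulation output.

```python
import numpy as np

def l_c_barker(k_perp, spectrum):
    """Dominant horizontal length scale from a binned spectrum via Eq. 38."""
    k_mean = (k_perp * spectrum).sum() / spectrum.sum()
    return 2.0 * np.pi / k_mean

def l_c_parodi(k_perp, spectrum):
    """Alternative estimate following Parodi et al. (2004), via Eq. 39."""
    return 2.0 * np.pi * (spectrum / k_perp).sum() / spectrum.sum()

# Illustrative use with a placeholder spectrum peaked near k_perp ~ 20:
k = np.arange(1, 129, dtype=float)
F = k**3 * np.exp(-(k / 20.0)**2)   # stand-in for the binned heat flux spectrum
print(l_c_barker(k, F), l_c_parodi(k, F))
```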
All three quantities match together close to linear onset for the three right-most data points, which have supercriticalities of 2.4, 1.8 and 1.3 from left to right, but they diverge for \(\Omega\lesssim 10^{4.8}\), coinciding with the generation of the LSV as the supercriticality of the system increases. The length scale corresponding with squared temperature perturbations in green diamonds stays close to the linear onset scale, i.e. it scales as roughly \(\Omega^{-1/3}\). The length scale based on the kinetic energy is much larger than the other two, but also follows a scaling roughly similar to \(\Omega^{-1/3}\) (fit not shown) in the interval \(\Omega=[10^{3.5},10^{4.6}]\). These two scalings do not match our predictions according to RMLT and also do not display a transition to become independent of rotation when \(\Omega\lesssim 10^{4.4}\). The length scale calculated using the heat flux on the other hand is steeper than the other two in the range \(\Omega=[10^{4.3},10^{4.6}]\). RMLT is expected to apply in this range because the flow is turbulent and strongly rotationally constrained. The slope fitted within this range in dashed-burgundy scales as \(\Omega^{-0.6}\), which should be compared with the temperature-based RMLT scaling as \(\Omega^{-1}\). This disagreement is likely to arise from the narrow range of \(\text{Ra}\) considered and because these simulations are not turbulent enough to match the RMLT scaling fully. However, it is much steeper than the result obtained from the other two quantities and tapers off at small \(\Omega\) as expected. In the right panel we demonstrate that with fixed supercriticality \(R\), our results are consistent with the RMLT prediction of \(l_{c}\propto\Omega^{-1/3}\) regardless of which quantity or method is used to compute the length scale. The solid-red line, with the same parameters as the solid-red line in the left panel matches the heat flux data well. The length scale obtained from the temperature fluctuations is slightly larger, and the length scale obtained from the vertical kinetic energy is much larger. Interestingly, the peaks in blue squares do not follow the solid-red RMLT prediction as closely as they do in the left panel. We attribute this difference to fluctuations in the spectrum causing the peak to shift around, particularly as the spectrum near the peak of the heat flux is quite broad (see Fig. A1 in the appendix), so the length scale based on integrals may be better suited here. Furthermore, while these data superficially seem to follow the linear onset scaling, each of these follows a distinct scaling with a different prefactor than the onset scaling. Note that when the supercriticality is fixed (equivalent to plotting results as a function of \(\text{RaEk}^{4/3}\)) instead of the Rayleigh number, the predictions of RMLT have the same dependence on \(\Omega\) as the linear onset scaling, but this does not imply that the length scale is controlled by viscosity. Based on these results, we use the length scale obtained from the integral heat flux method in the rest of this work, i.e. the dark orange diamonds, and use the solid-red RMLT fit whenever it is expected to apply. From the solid-red fit of both panels of Fig. 6, if we reintroduce \(\text{Ra}\) using the definition \(\text{Ra}_{c}\approx 8.7\text{Ek}^{-4/3}\), we obtain: \[l_{c}=0.63\text{Ra}^{1/2}\text{Ek}\text{Pr}^{-1/2}d. \tag{40}\] Using this scaling together with Eq. 
36 we obtain a scaling law for the convective frequency \[\omega_{c}\approx 0.44\text{Ra}^{1/2}\text{Pr}^{1/2}\frac{\kappa}{d^{2}}. \tag{41}\] We examine the scaling of the effective viscosity with convection strength (\(\text{Ra}\)) in Fig. 7 for simulations with \(\text{Ek}=5\cdot 10^{-5.5}\). Only results from simulations with sustained energy injection are plotted in this figure. There is a minimum value of \(\text{Ra}\approx 2.5\text{Ra}_{c}\) for which using an effective viscosity according to RMLT reasonably approximates the data. This minimum also corresponds to the threshold value above which an LSV appears (Guervilly et al., 2014; Favier et al., 2014). We apply these theoretically-predicted and empirically-fitted scaling laws to determine an effective viscosity in Fig. 7. The blue line corresponds to the low frequency regime in Eq. 28, the black line corresponds to the intermediate frequency regime, and the red line to the high frequency regime, with orange points indicating the simulations. Varying Ra in this figure also means varying the ratio of tidal to convective frequencies, which can change which regime might be predicted in Eq. 28. The low and intermediate frequency predictions agree well with the simulations at high Ra. At low Ra the simulations agree with the high frequency prediction, though there is a departure for the smallest Ra for which the simulations are no longer sufficiently turbulent. The top panel of Fig. 8 shows instead the effective viscosity as a function of the rotation rate \(\Omega\) at fixed Ra = \(1.3\cdot 10^{8}\), corresponding to Ra = \(6\)Ra\({}_{c}\) at Ek = \(5\cdot 10^{-5.5}\). At fixed Ra we expect the effective viscosity to rapidly decrease as the rotation rate increases. Since we set \(\gamma=\Omega\) in these simulations the tidal frequency is \(\omega=2\gamma=2\Omega\). The scalings of the effective viscosity obtained using RMLT according to Eq. 28 in terms of \(\Omega\) are then respectively \(\Omega^{-2}\), \(\Omega^{-2.5}\) and \(\Omega^{-4}\) in the low, intermediate and high tidal frequency regimes. In the top panel of Fig. 8 we over-plot these low, intermediate and high frequency regime scalings, which are in good agreement with the simulation results. Based on our results for the convective length scale from the simulations there is some uncertainty around the solid-red fit of \(\Omega^{-4}\). According to the simulation data this should possibly scale as \(\Omega^{-3.6}\) instead, as the scaling obtained for the convective length scale goes as \(\Omega^{-0.6}\) instead of \(\Omega^{-1}\). The difference in the results is negligible however, and for consistency with the RMLT prediction for the effective viscosity we opted instead to keep the \(\Omega^{-4}\) scaling in the plot. In the bottom panel of Fig. 8 we fixed \(R\) = 6 at \(\epsilon\in[0.02,0.05]\), which are the values of \(\epsilon\) used for all subsequent results at fixed \(R\). We examined the variation of the effective viscosity with \(\Omega\). Again, we observe a decrease as the rotation rate is increased, though this is a weaker trend than we found when fixing Ra. We also observe two possible scaling regimes. When we compare with those expected by RMLT we again find good agreement with our simulation results. We find that even when fixing the convective supercriticality, we obtain bursts of elliptical instability for sufficiently large \(\Omega\).
This is perhaps because the suppressive effect of convection on the elliptical instability is diminished for larger \(\Omega\) because the effective viscosity is lowered, while the increased rotation rate enhances the growth rate of the elliptical instability (relative to the viscous damping rate).

Figure 6: Horizontal convective length scale as a function of rotation rate, calculated using the integration methods in Eqs. 38 and 39, which agree well. Data points are calculated based on the heat flux (orange diamonds), vertical kinetic energy (yellow diamonds) and the squared temperature perturbations (green diamonds). The peaks of the heat flux spectrum are included in blue squares. The solid-red fit is the RMLT prediction of Ra\({}^{1/2}\Omega^{-1}\) (Eq. 40), and the dashed-purple line is the linear onset length scale. Left: results at fixed Ra = \(1.3\cdot 10^{8}\). The peaks of the heat flux agree well with the RMLT prediction. The steepest fit to the heat flux length scale is plotted in dashed-burgundy, which probably differs from the RMLT prediction in solid-red because of the modest supercriticalities involved. The linear onset scaling is plotted in dashed-purple, which only agrees with our data for the three right-most points with the smallest supercriticalities. Right: Same but at fixed supercriticality \(R\) = 6. The heat flux data agree well with the RMLT prediction in solid-red. The linear onset scaling is plotted in dashed-purple, and differs from our simulation results.

Figure 7: Effective viscosity as a function of Ra at Ek = \(5\cdot 10^{-5.5}\). Only simulations featuring sustained energy injection are plotted. In addition, all three scaling law regimes predicted using RMLT are plotted.

In this section we have generally found good agreement with both the predictions of RMLT for convective velocities and length scales, and with their application to the scaling laws for the effective viscosity acting on (tidal) oscillatory shear flows in Duguid et al. (2020). Based on our fits of RMLT scaling laws to the data in Figs. 7 and 8, we find the following effective viscosity regimes: \[\nu_{\rm eff}=\begin{cases}6.4\cdot 10^{-3}\text{Ra}^{3/2}\text{Ek}^{2}\text{Pr}^{-1/2}\kappa&\text{low freq.},\\ 0.012\text{Ra}^{7/4}\text{Ek}^{2}\text{Pr}^{-1/4}\kappa^{3/2}d^{-1}\omega^{-1/2}&\text{interm. freq.},\\ 0.11\text{Ra}^{5/2}\text{Ek}^{2}\text{Pr}^{1/2}\kappa^{3}d^{-4}\omega^{-2}&\text{high freq.}.\end{cases} \tag{42}\]

### Regime transitions

The previous section tentatively suggests we can use MLT and RMLT and the tidal frequency regimes observed in simulations to interpret (and make predictions for) the effective viscosity. However, to understand the full picture, one would need to understand when transitions between different regimes occur. As described in SS 2.4, by virtue of setting \(\Omega=\gamma\) in our simulations, the transitions are likely to occur for similar values of the Rossby number. Therefore, the occurrence of these combined transitions (MLT/RMLT and the different tidal frequency regimes) obfuscates the results in Fig. 7 and Fig. 8. One way to separate these two transitions is to first consider the quantity \(\omega/\omega_{c}\), which is important because it controls the regime transitions of the effective viscosity. However, it is also controlled by the transition from MLT to RMLT, because \(\omega_{c}\) depends on \(u_{c}\) and \(l_{c}\).
In Fig. 9 the ratio \(\omega/\omega_{c}\) is plotted as a function of the Rayleigh number in the panel on the left, at constant Ek = \(5\cdot 10^{-5.5}\), and as a function of the rotation rate in the panel on the right, at constant Ra = \(1.3\cdot 10^{8}\). In the left panel we calculate \(\omega_{c}\) using the convective velocities obtained from simulations, whilst basing the convective length scale on Eq. 26. In addition, the prediction of \(\omega/\omega_{c}\) according to RMLT simulation results, with \(\omega_{c}\) given by Eq. 41, is plotted in solid-red. By forcing the convective length scale to follow the RMLT prediction, i.e. \(l_{c}\sim\text{Ra}^{1/2}\), \(\omega_{c}\) will no longer scale as \(\text{Ra}^{1/2}\) when \(u_{c}\) deviates from the RMLT prediction, and the scaling consequently changes from \(u_{c}\sim\text{Ra}\) to \(u_{c}\sim\text{Ra}^{1/2}\). This in turn forces the scaling of \(\omega_{c}\) to go from \(\omega_{c}\sim\text{Ra}^{1/2}\) to \(\omega_{c}\sim\text{Ra}^{0}\). In the figure, this change is manifested by the data points deviating from the solid-red prediction as their slope decreases when \(\text{Ra}\gtrsim 2\cdot 10^{8}\), in accordance with what is observed in Fig. 7. Thus, by fixing the length scale but plotting the simulation data for the convective velocity we can easily identify at what values of \(\omega/\omega_{c}\) this transition from RMLT to MLT occurs. From this panel, we find the transition at \(\omega/\omega_{c}\approx 10\), or a convective Rossby number \(\text{Ro}_{c}\approx 0.1\). In the right panel of Fig. 9 we show the ratio \(\omega/\omega_{c}\) as a function of \(\Omega\) using orange and blue (with elliptical instability bursts) circles, which is computed in the same way as in the left panel. In addition, \(\omega/\omega_{c}\) is calculated using the simulation data directly for both \(u_{c}\) and \(l_{c}\) in purple and burgundy squares. Purple squares indicate simulations without the elliptical instability, and burgundy squares indicate simulations with bursts of the elliptical instability. The prediction for \(\omega/\omega_{c}\) in the RMLT regime is again plotted in solid-red. The deviation of the orange data points from this solid-red line occurs for \(\Omega\lesssim 10^{4.4}\) like in the top panel of Fig. 8. Furthermore, this deviation coincides with \(\omega/\omega_{c}\approx 10\), as indicated in the left panel of Fig. 9. The convective frequency calculated directly using the simulation results for both \(u_{c}\) and \(l_{c}\) in the purple and burgundy squares illustrates how the transition from RMLT to MLT occurs in our simulations. First of all, the purple squares and some of the burgundy squares in the range \(\Omega=[10^{4.5},10^{5}]\) match the dashed-black fit of \(\omega/\omega_{c}\sim\Omega/\Omega^{-0.4}\sim\Omega^{1.4}\), illustrating that indeed according to simulations \(\omega_{c}\sim u_{c}/l_{c}\sim\Omega^{-1}/\Omega^{-0.6}\sim\Omega^{-0.4}\). The purple squares in the interval \(\Omega=[10^{3.5},10^{4.4}]\) do not deviate as much from the solid-red prediction as the pure RMLT convective length scale results in orange on the same interval. This implies that when the convective velocity becomes independent of \(\Omega\), so does the convective length scale.

Figure 8: Top: Effective viscosity at fixed Rayleigh number \(\text{Ra}=1.3\cdot 10^{8}\) and \(\epsilon\in[0.02,0.05]\) as a function of rotation rate, together with all three predictions based on RMLT and the scalings obtained in Duguid et al. (2020). Bottom: Same as above but at constant supercriticality \(R=6\) and \(\epsilon\in[0.02,0.05]\).
As a result, \(\omega_{c}\) is maintained to be almost independent of \(\Omega\), which is indicated by its scaling as \(\omega_{c}\sim\Omega^{0.2}\) according to the dashed-blue fit. Note also that the value of \(\omega/\omega_{c}\) using simulation results decreases to \(\approx 1\), suggesting that the effective viscosity in this range should transition from the high tidal frequency to the intermediate tidal frequency regime according to the transition found in the non-rotating simulations of Duguid et al. (2020), if these hold here. Fig. 9 indicates that care must be taken to first identify the regime of rotational influence on the convection (i.e. MLT vs RMLT) to predict the value of \(\omega_{c}\) before calculating the ratio \(\omega/\omega_{c}\), and thus determining which frequency regime is relevant for the effective viscosity. The deviation from the RMLT prediction for these quantities in both figures occurs roughly when \(\mathrm{Ro}_{c}^{-1}\approx 10\), so we conclude that when \(\mathrm{Ro}_{c}<0.1\) RMLT is the correct prescription for the rotating convection, and that \(\mathrm{Ro}_{c}\approx 0.1\) is where the transition from RMLT to MLT begins and the rotational influence diminishes. To fully disentangle and interpret the effective viscosity and its dependence on \(\Omega\) and \(\omega\) separately, we should also calculate the effective viscosity as a function of the ratio \(\omega/\omega_{c}\). To this end we use values of \(\omega_{c}\) obtained from the simulations, i.e. corresponding to the square markers in the right panel of Fig. 9. The results for \(\nu_{\mathrm{eff},3D}\) are plotted in Fig. 10. These figures are closely related to Fig. 8, but are specifically designed to explore the \(\omega/\omega_{c}\) dependence. In the left panel of Fig. 10, we show results with fixed \(\mathrm{Ra}=1.3\cdot 10^{8}\), while in the right panel simulations with fixed \(R=6\) are plotted. The effective viscosity is divided by the factor of \(u_{c}l_{c}\) which is present in all expressions for this quantity. By eliminating this factor the dependence of the effective viscosity on the ratio \(\omega/\omega_{c}\) is measured directly. It is important to note that, due to the transition from MLT to RMLT in the left panel and our fixing of the supercriticality in the right panel, \(\omega/\omega_{c}\) in general depends on the Ekman number. In the left panel both the intermediate and high frequency regimes are observed. The high frequency regime is plotted as a solid-red line, while the intermediate frequency regime is plotted in solid-black. Both scalings agree well with the simulation data. The transition from the high frequency to the intermediate frequency regime found previously at \(\omega/\omega_{c}\approx 5\) (without rotation in Duguid et al. 2020) is plotted using a vertical dashed line in the left panel. The location of this transition agrees remarkably well with our data. In the right panel, only the high frequency regime is observed. We thus conclude that we have not observed the low tidal frequency regime in our simulations. Moreover, we find that the intermediate regime in Duguid et al. (2020) is reproduced and the transition to this seems to occur at the same value of \(\omega/\omega_{c}\), even when the convective velocity and length scale are influenced by rotation. The prefactors are however different from those found in Duguid et al.
(2020), both lower by approximately a factor of two. Reproducing Eq. 22 with these altered prefactors: \[\nu_{\mathrm{eff}}=\begin{cases}5u_{c}l_{c}&\frac{|\omega|}{\omega_{c}}\lesssim 10^{-2},\\ 0.25u_{c}l_{c}\left(\frac{\omega_{c}}{\omega}\right)^{\frac{1}{2}}&\frac{|\omega|}{\omega_{c}}\in[10^{-2},5],\\ 3u_{c}l_{c}\left(\frac{\omega_{c}}{\omega}\right)^{2}&\frac{|\omega|}{\omega_{c}}\gtrsim 5.\end{cases} \tag{43}\] In summary, to correctly interpret and make predictions for the effective viscosity, one must first determine whether or not the convection is strongly influenced by rotation (i.e. whether RMLT or MLT is an appropriate description) using the convective Rossby number. Then the ratio \(\omega/\omega_{c}\), i.e. the "tidal Rossby number", can be used to determine which of the low, intermediate or high tidal frequency regimes is appropriate. Upon plugging in the results for \(u_{c}\) and \(l_{c}\) from Eq. 36 and Eq. 40: \[\nu_{\mathrm{eff}}=\begin{cases}0.88\mathrm{Ra}^{3/2}\mathrm{Ek}^{2}\mathrm{Pr}^{-1/2}\kappa&\frac{|\omega|}{\omega_{c}}\lesssim 10^{-2},\\ 0.029\mathrm{Ra}^{7/4}\mathrm{Ek}^{2}\mathrm{Pr}^{-1/4}\kappa^{3/2}d^{-1}\omega^{-1/2}&\frac{|\omega|}{\omega_{c}}\in[10^{-2},5],\\ 0.10\mathrm{Ra}^{5/2}\mathrm{Ek}^{2}\mathrm{Pr}^{1/2}\kappa^{3}d^{-4}\omega^{-2}&\frac{|\omega|}{\omega_{c}}\gtrsim 5.\end{cases} \tag{44}\] These scalings are likely to be more robust than the scalings in Eq. 42, because the numerical coefficient of the scaling for the low frequency regime is based on a measured result in Duguid et al. (2020) and the scaling for the intermediate frequency regime is no longer obfuscated by the two transitions occurring at the same time.

Figure 9: Left: Ratio of the tidal to convective frequencies as a function of \(\mathrm{Ra}\) compared with the RMLT prediction at fixed \(\mathrm{Ek}=5\cdot 10^{-5.5}\). The data for \(u_{c}\) is obtained from simulations, while \(l_{c}\) is calculated using Eq. 26. The predicted result based on Eq. 41 is plotted in solid-red. The change from the RMLT to MLT scaling occurs around \(\mathrm{Ra}\approx 2\cdot 10^{8}\) in the left panel of Fig. 5, which matches the departure observed here and occurs at \(\omega/\omega_{c}\approx 10\). Right: Same at fixed \(\mathrm{Ra}=1.3\cdot 10^{8}\) as a function of \(\Omega\) in orange and blue circles. Here the change from MLT to RMLT occurs around \(\Omega=10^{4.5}\) in the top panel of Fig. 8, again matching the departure here, corresponding to \(\omega/\omega_{c}\approx 10\). The purple and burgundy squares represent \(\omega_{c}\) calculated using both \(u_{c}\) and \(l_{c}\), which stays closer to the prediction of \(\omega_{c}\) independent of the rotation rate, and therefore attains lower values than the RMLT prediction, crossing below the \(\omega/\omega_{c}=5\) threshold.

## 4 Astrophysical applications

In the previous section we obtained scaling laws to describe our simulation results for tidal energy transfer rates and effective viscosities, as well as convective velocities, length scales and frequencies. In this section we strive to apply these scaling laws to 'real' parameters of astrophysical bodies to make predictions for these quantities in giant planets. This is possible because we have shown that the diffusion-free scaling laws of MLT and RMLT are applicable to most of our simulations, and if we assume they also apply in reality, we can therefore readily extrapolate our results.
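Before applying these scalings, it may help to gather the calibrated prescription of the previous section in one place; the sketch below simply evaluates Eq. 44, selecting the tidal frequency regime using \(\omega/\omega_{c}\) with \(\omega_{c}\) from the fitted Eq. 41. The function name is ours and the inputs are assumed to be in consistent units.

```python
def nu_eff_calibrated(Ra, Ek, Pr, omega, kappa=1.0, d=1.0):
    """Calibrated effective viscosity of rotating convection (Eq. 44)."""
    omega_c = 0.44 * Ra**0.5 * Pr**0.5 * kappa / d**2   # fitted Eq. 41
    ratio = abs(omega) / omega_c
    if ratio < 1e-2:        # low tidal frequency
        return 0.88 * Ra**1.5 * Ek**2 * Pr**-0.5 * kappa
    elif ratio <= 5.0:      # intermediate tidal frequency
        return 0.029 * Ra**1.75 * Ek**2 * Pr**-0.25 * kappa**1.5 / (d * abs(omega)**0.5)
    else:                   # high (fast tides) frequency
        return 0.10 * Ra**2.5 * Ek**2 * Pr**0.5 * kappa**3 / (d**4 * omega**2)

# Example at simulation-like parameters (nu = kappa = d = 1, omega = 2*Omega = 1/Ek):
print(nu_eff_calibrated(Ra=1.3e8, Ek=5 * 10**-5.5, Pr=1.0, omega=1.0 / (5 * 10**-5.5)))
```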
### Simple estimates

We start by reporting parameter estimates from the literature for Jupiter, obtained using models before (Guillot et al., 2004, hereafter GSHS04) and after (Gastine and Wicht, 2021, hereafter GW21) the Juno mission (e.g. Bolton et al., 2017). We report these in Table 1. We calculate from this data the ratio of tidal to convective frequencies (\(2\gamma/\omega_{c}\)) to allow us to determine if we are in the high-frequency regime for the effective viscosity. This ratio is found to be, upon setting \(\omega/2=\gamma=2\pi/P_{\rm orb}\) (see Footnote 1), \[\omega/\omega_{c}=\begin{cases}9.4\cdot 10^{1}\left(\frac{P_{\rm orb}}{1\,{\rm d}}\right)^{-1}&\text{GSHS04},\\ 3.7\cdot 10^{2}\left(\frac{P_{\rm orb}}{1\,{\rm d}}\right)^{-1}&\text{GW21 at $R=0.196R_{J}$},\\ 2.4\cdot 10^{2}\left(\frac{P_{\rm orb}}{1\,{\rm d}}\right)^{-1}&\text{GW21 at $R=0.98R_{J}$}.\end{cases} \tag{45}\] Thus we conclude that we are firmly in the high-frequency tidal regime (\(\omega/\omega_{c}\gg 1\)) for the orbital periods associated with Hot Jupiters, which is the regime explored in most of our simulations. This is also likely to be the case in Jupiter due to tidal forcing from its moons (e.g. Goldreich and Nicholson, 1977).

Footnote 1: This is appropriate for circularisation of weakly eccentric orbits in spin-synchronised planets, and can be thought of as a representative value for estimates of synchronisation tides with a circular orbit.

The effective viscosity can be calculated using the parameters from Table 1, again setting \(\gamma=2\pi/P\). To evaluate the different regimes, we adopt the tidal frequency regime transitions obtained by Duguid et al. (2020). Using data from the left column of the table for the purposes of illustration, we find: \[\nu_{\rm eff}=\begin{cases}880&m^{2}/s,&\frac{|\omega|}{\omega_{c}}<10^{-2},\\ 2.54\left(\frac{P_{\rm orb}}{1\,{\rm d}}\right)^{1/2}&m^{2}/s,&\frac{|\omega|}{\omega_{c}}\in[10^{-2},5],\\ 6.1\cdot 10^{-3}\left(\frac{P_{\rm orb}}{1\,{\rm d}}\right)^{2}&m^{2}/s,&\frac{|\omega|}{\omega_{c}}>5.\end{cases} \tag{46}\] We have included the low frequency regime for completeness even though this has not been clearly probed with our simulations.

### Detailed planetary models using MESA

To provide a more detailed estimate of the effective viscosity and resulting tidal dissipation in a Jupiter-like planet we require models for its internal structure, i.e. profiles of pressure and density (and other quantities) as a function of radius. To do so, we use a modified version of the test suite case _make_planets_ of the Modules for Experiments in Stellar Astrophysics (MESA) code (Paxton et al., 2011, 2013, 2015, 2018, 2019; Jermyn et al., 2022) with the MESASDK (Townsend, 2022) to generate 1D interior profiles. This code has been previously used to generate a range of planetary models (e.g. Muller et al., 2020; Muller and Helled, 2023). However, some caveats reside in the applicability of this code to planets: since it is designed to model stars, it uses equations of state based on H and He without the heavy elements (unless the EOS is modified; Muller et al., 2020) that are necessary to generate, for example, a dilute core, which is expected based on Juno's gravity field measurements of Jupiter (Stevenson, 2020; Helled et al., 2022). Furthermore, it treats the core itself as rigid and omits the possibility of stable layers produced by helium rain. These may be important for tidal dissipation (e.g. Pontin et al., 2023) but are outside the scope of our study.
MESA by default treats the convection using MLT (for which we use the Cox prescription, Cox and Giuli, 1968) instead of RMLT (if we assume this to be valid even in the presence of magnetic fields). We have maintained the mixing length parameter at the standard value of two, and intend to convert the obtained MLT values of these models to RMLT later on in this work. Following Muller et al. (2020), who find it to be negligible for planetary structure and evolution, we omit semiconvection in our models. Our initial Jupiter model has a radius of \(2R_{J}\) and a mass of \(1M_{J}\), of which 10 Earth masses are located in a core with density \(10\,\mathrm{g\,cm^{-3}}\). We have evolved the model for 4.5 Gyr to mimic the age of Jupiter and we use a constant surface irradiation of \(5\cdot 10^{4}\,\mathrm{erg\,cm^{-2}\,s^{-1}}\), similar to what Jupiter receives from the Sun, which is deposited at a column depth of \(300\,\mathrm{g\,cm^{-2}}\) (about 0.7 bar). We also create a Hot Jupiter model with the same parameters except that we increase the surface heating to \(10^{9}\,\mathrm{erg\,cm^{-2}\,s^{-1}}\), representing the irradiation of a one-day planet around a Sun-like star. Furthermore, we incorporate additional interior heating with uniform rate \(0.05\,\mathrm{erg\,cm^{-3}\,s^{-1}}\) throughout the fluid envelope, which can be thought to represent the impact of tidal heating or Ohmic dissipation (or other mechanisms) that could possibly inflate a number of Hot Jupiters. In this way, whilst keeping all other parameters equal, we can determine the effects of the increased radius (and stronger convection) of a puffy Hot Jupiter on the effective viscosity and tidal dissipation rates. A summary of changes to the default inlists used to generate these models is provided in Appendix B. The convective velocities and length scales (mixing lengths) obtained using the MESA code are calculated using non-rotating MLT. Although the rotation rate - and thus the introduction of RMLT - is expected to affect convective length scales and velocities, the effect on the heat flux is likely to be negligible (Stevenson, 1979; Ireland and Browning, 2018). We therefore assume that the heat flux is independent of rotation, and is the same in both MLT and RMLT. We then convert \(u_{c}\) and \(l_{c}\) to RMLT using the scalings we have derived, but to do so we must use flux-based scalings instead of the temperature-based scalings used in the previous sections and in the simulations of this paper.

\begin{table} \begin{tabular}{l l l l} \hline & GSHS04 & GW21 & GW21 \\ & & \(R=0.196R_{J}\) & \(R=0.98R_{J}\) \\ \hline \(u_{c}\) (\(ms^{-1}\)) & 0.1 & \(0.01-0.1\) & 1 \\ \hline \(\Omega\) (\(s^{-1}\)) & \(1.75\cdot 10^{-4}\) & \(1.75\cdot 10^{-4}\) & \(1.75\cdot 10^{-4}\) \\ \hline \(d\) (\(m\)) & \(3\cdot 10^{6}\) & \(5.5\cdot 10^{7}\) & \(5.5\cdot 10^{7}\) \\ \hline \(\nu\) (\(m^{2}s^{-1}\)) & \(10^{-6}\) & \(2.66\cdot 10^{-7}\) & \(3.92\cdot 10^{-7}\) \\ \hline \(\kappa\) (\(m^{2}s^{-1}\)) & \(10^{-5}\) & \(2.7\cdot 10^{-5}\) & \(1.32\cdot 10^{-6}\) \\ \hline Pr & 0.1 & 0.01 & 0.3 \\ \hline Ek & \(10^{-15}\) & \(10^{-18}\) & \(10^{-18}\) \\ \hline Ra & \(10^{25}\) & \(10^{28}\) & \(10^{31}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Table of dimensional and nondimensional parameters reproduced from Guillot et al. (2004) (GSHS04) and Gastine and Wicht (2021) (GW21).
On the other hand, the temperature difference (which is imposed in simulations), and as a result the buoyancy frequency, are expected to change under the influence of rotation, in order to carry the same flux. In these flux-based scalings the conversion from MLT to RMLT is defined differently from the temperature-based scalings used previously in this work. In the temperature-based scalings the corrections introduced for both \(u_{c}\) and \(l_{c}\) involve \(\mathrm{Ro}_{c}\) linearly, while in the flux-based scalings the corrections are respectively: \[u_{c}=\tilde{\mathrm{Ro}}_{c}^{1/5}\tilde{u}_{c},\quad\text{and}\quad l_{c}=\tilde{\mathrm{Ro}}_{c}^{3/5}\tilde{l}_{c}, \tag{47}\] where the quantities with a tilde are those calculated using non-rotating MLT. We have also denoted the Rossby number in the above equations with a tilde (\(\tilde{\mathrm{Ro}}_{c}=\tilde{u}_{c}/(2\Omega\tilde{l}_{c})\)) because the flux-based scalings imply that Rossby numbers calculated using MLT and RMLT are different, unlike for the temperature-based scalings where they are the same. In the low frequency regime the effective viscosity must therefore be scaled by \[\nu_{\mathrm{eff}}\sim u_{c}l_{c}\sim\tilde{u}_{c}\tilde{l}_{c}\tilde{\mathrm{Ro}}_{c}^{4/5}. \tag{48}\] This correction factor of \(\tilde{\mathrm{Ro}}_{c}^{4/5}\) was also employed by Mathis et al. (2016). In the high tidal frequency regime the effective viscosity is instead scaled by \[\nu_{\mathrm{eff}}\sim u_{c}l_{c}\left(\frac{u_{c}}{l_{c}}\right)^{2}\sim\tilde{u}_{c}\tilde{l}_{c}\tilde{\mathrm{Ro}}_{c}^{4/5}\left(\frac{\tilde{u}_{c}}{\tilde{l}_{c}}\right)^{2}\tilde{\mathrm{Ro}}_{c}^{-4/5}\sim\tilde{u}_{c}\tilde{l}_{c}\left(\frac{\tilde{u}_{c}}{\tilde{l}_{c}}\right)^{2}. \tag{49}\] Combining these, we find in RMLT: \[\nu_{\mathrm{eff}}\propto\begin{cases}5\tilde{u}_{c}\tilde{l}_{c}\tilde{\mathrm{Ro}}_{c}^{4/5}&\frac{|\omega|}{\omega_{c}}\lesssim 10^{-2},\\ 0.25\tilde{u}_{c}\tilde{l}_{c}\tilde{\mathrm{Ro}}_{c}^{3/5}\left(\frac{\tilde{u}_{c}/\tilde{l}_{c}}{\omega}\right)^{\frac{1}{2}}&\frac{|\omega|}{\omega_{c}}\in[10^{-2},5],\\ 3\tilde{u}_{c}\tilde{l}_{c}\left(\frac{\tilde{u}_{c}/\tilde{l}_{c}}{\omega}\right)^{2}&\frac{|\omega|}{\omega_{c}}\gtrsim 5.\end{cases} \tag{50}\] Hence, while the effective viscosity in the low tidal frequency regime is strongly affected by rotation, it is entirely unaffected by rotation in the high tidal frequency regime according to RMLT (assuming a fixed flux independent of rotation). This follows when considering the scaling laws in Eq. 28 in terms of flux-based RMLT: \[\nu_{\mathrm{eff}}\propto\begin{cases}F^{3/5}\Omega^{-4/5}d^{4/5}&\text{low frequency},\\ F^{7/10}d^{3/5}\Omega^{-3/5}\omega^{-1/2}&\text{intermediate freq.},\\ F\omega^{-2}&\text{high frequency}.\end{cases} \tag{51}\] The equivalent relations written using flux-based MLT would be: \[\tilde{\nu}_{\mathrm{eff}}\propto\begin{cases}F^{1/3}d^{4/3}&\text{low frequency},\\ F^{1/2}d\omega^{-1/2}&\text{intermediate freq.},\\ F\omega^{-2}&\text{high frequency}.\end{cases} \tag{52}\] The scaling laws in the high tidal frequency regime with and without rapid rotation (i.e. according to MLT or RMLT) are therefore identical when written using flux-based scalings. However, the regime transitions may not be the same in both cases because the flux-based scalings for \(\omega_{c}\) differ between MLT and RMLT. Convective frequencies are typically smaller in MLT, and as such the high tidal frequency regime is generally entered for lower tidal frequencies than in RMLT.
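A minimal sketch of this flux-based conversion (Eqs. 47 and 50) is given below. The MLT inputs (\(\tilde{u}_{c}\), \(\tilde{l}_{c}\)) would come from a structure code such as MESA, the function name is ours, and the regime thresholds are assumed to carry over from the non-rotating results of Duguid et al. (2020), as discussed above.

```python
import math

def rmlt_from_mlt(u_mlt, l_mlt, Omega, omega):
    """Convert non-rotating MLT convective quantities to flux-based RMLT (Eq. 47)
    and evaluate the effective viscosity regimes of Eq. 50."""
    Ro_mlt = u_mlt / (2.0 * Omega * l_mlt)   # Rossby number built from MLT values
    u_c = Ro_mlt**0.2 * u_mlt                # Eq. 47
    l_c = Ro_mlt**0.6 * l_mlt
    ratio = abs(omega) * l_c / u_c           # omega / omega_c with RMLT quantities
    if ratio < 1e-2:                         # low frequency: strongly reduced by rotation
        nu = 5.0 * u_mlt * l_mlt * Ro_mlt**0.8
    elif ratio <= 5.0:                       # intermediate frequency
        nu = 0.25 * u_mlt * l_mlt * Ro_mlt**0.6 * math.sqrt(u_mlt / (l_mlt * abs(omega)))
    else:                                    # high (fast tides) frequency: rotation drops out
        nu = 3.0 * u_mlt * l_mlt * (u_mlt / (l_mlt * abs(omega)))**2
    return u_c, l_c, nu

# Purely illustrative example with rough numbers from the GSHS04 column of Table 1
# and a one-day tidal period (omega = 2*pi / 1 day); the output is indicative only.
print(rmlt_from_mlt(u_mlt=0.1, l_mlt=3e6, Omega=1.75e-4, omega=2 * math.pi / 86400.0))
```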
Figure 10: Left: Effective viscosity at fixed Rayleigh number \(\mathrm{Ra}=1.3\cdot 10^{8}\) as a function of \(\omega/\omega_{c}\), after dividing by \(u_{c}l_{c}\). The high frequency prediction is plotted in solid-red, and the intermediate frequency prediction is plotted in black. The vertical dashed-black line indicates the transition between these regimes at \(\omega/\omega_{c}=5\) found previously (without rotation Duguid et al., 2020), which matches the transition in our data well. Right: The same at fixed supercriticality \(R=6\), where only the high frequency regime is present in this data. We next present our results for Rossby numbers and the corresponding effective viscosities - in both the fast tide and slow tide regimes, using both MLT and RMLT - as a function of radius in our two planetary models. For these illustrative calculations we set \(P_{\rm orb}=1\) day and \(P_{\rm rot}=10\) h for the Jupiter model, mimicking a planet similar to Jupiter but orbiting its star with a period of 1 day. For the Hot Jupiter model we instead set \(P_{\rm orb}=P_{\rm rot}=1\) day, representing spin-orbit synchronisation. The tidal period is \(P_{\rm tide}=1\) day for both figures. This can be thought to represent the eccentricity tide in a spin-orbit synchronised planet, as opposed to being based on \(\gamma=\Omega-n\), but is only chosen for illustration in the first model. In Fig. 11 the Rossby numbers are plotted in the Jupiter model on the left and the Hot Jupiter model on the right. The MLT Rossby number as calculated from the data is plotted in solid-black; the one calculated from RMLT is plotted in dashed-blue. The MLT Rossby numbers are clearly smaller, but even in RMLT they are much smaller than one, indicating that the convection is strongly rotationally-constrained. Note that the lower densities and stronger convection in the inflated Hot Jupiter model produce larger Rossby numbers, but they are still much smaller than one. This justifies the use of RMLT (over MLT) in giant planets. The ratio of convective to tidal frequencies (\(\omega_{c}/\omega\)) is also plotted as a function of radius in Fig. 11. The MLT prediction for this "tidal Rossby number" is plotted in solid-red and the RMLT prediction is plotted in dashed-magenta, and these only differ by a factor of \(\Omega/|\gamma|\). In the Hot Jupiter model this factor equals one for our chosen parameters, and as such \(\omega_{c}/\omega={\rm Ro}_{c}\). For both models \({\rm Ro}_{c}\ll 1\), such that RMLT is the appropriate description of the convection, and hence for the convective frequency. This figure indicates that the fast tides regime is relevant inside both models (except for perhaps the final percent or so of the radius where we approach the surface stable layer). Figure 11: Left: Flux-based MLT (black) and RMLT (dashed-blue) Rossby numbers as a function of radius for the Jupiter-like planet with \(P_{\rm rot}=10\) hrs, \(P_{\rm orb}=P_{\rm tide}=1\) day after evolving the model for 4.5 Gyr. This is much smaller than one in the whole of the interior according to both prescriptions, i.e. the interior is strongly rotationally constrained. The ratio of convective to tidal frequencies (“tidal Rossby number”), is also much smaller than one for these parameters, indicating that the planet is in the fast tides regime. Right: Same but for the inflated Hot Jupiter with \(P_{\rm rot}=P_{\rm orb}=P_{\rm tide}=1\) day. 
Convection is stronger in this model but the same regimes (rapid rotation and fast tides) hold as in the left panel. The ratio of \(\omega_{c}/\omega\) is equal to the convective Rossby number here, hence the lines overlap. Figure 12: Left: Effective viscosity as a function of radius for the Jupiter-like planet with \(P_{\rm rot}=10\) hrs, \(P_{\rm orb}=P_{\rm tide}=1\) day after evolving the model for 4.5 Gyr. We show the microscopic viscosity \(3\cdot 10^{-7}\;m^{2}/s\) reproduced from French et al. (2012) (solid-black) for reference, the MLT prediction in the low frequency regime (solid-blue), the MLT prediction in the fast tides regime (solid-green), the RMLT prediction in the slow tides regime (dashed-cyan) and the RMLT prediction in the fast tides regime (dotted-red). The fast tides predictions overlap regardless of regime whereas applying RMLT in the slow tides regime drastically reduces the effective viscosity. Right: Same but for the inflated Hot Jupiter with \(P_{\rm rot}=P_{\rm orb}=P_{\rm tide}=1\) day. The Hot Jupiter model has more efficient convection and larger effective viscosity in all regimes. The effective viscosity as a function of radius is shown in Fig. 12 in both planetary models. In the left panel, we show the effective viscosity in the Jupiter model for our chosen rotational and tidal periods, which demonstrates that this is much larger than the microscopic viscosity (solid-black) for all predictions. To compute the kinematic viscosity in Jupiter requires sophisticated calculations outside the scope of our models (and not calculated within MESA), so we use the typical value obtained by French et al. (2012) for reference, of \(\nu=3\cdot 10^{-7}\ m^{2}/s\), in both panels. There are large differences between the various predictions for \(\nu_{\rm eff}\) in Fig. 12. The MLT prediction in the slow tides regime in solid-blue predicts \(\nu_{\rm eff}\approx 10^{6}\ m^{2}/s\), while the RMLT prediction in the same slow tides regime in dashed-cyan only attains values of \(\approx 10^{2}\ m^{2}/s\). The MLT prediction for this regime decreases slightly from the interior to the surface, which is because the convective length scale decreases faster than the convective velocity increases from the core to the surface. On the other hand, the RMLT prediction increases towards the surface, because the Rossby number rapidly increases there. The fast tides regime prediction according to both RMLT and MLT (strictly obtained using all three regimes in Eq. 50 and the uncorrected version respectively, but the fast tides one is most relevant) are plotted in solid-green and dotted-red respectively. The two lines overlap because the effective viscosity is independent of rotation according to both theories, as we have demonstrated above. The effective viscosity in the fast tides regime is however several orders of magnitude smaller still than both predictions in the slow tides regime, with a value of only \(\approx 10^{-2}\ m^{2}/s\) except for close to the surface. This value is much larger than the microscopic viscosity, but is probably negligibly small for damping tidal flows. This would imply an effective Ekman number in the fast tides regime of Ek \(\approx 10^{-2}/(2\cdot 10^{-4}\cdot(10^{4})^{2})=O(10^{-7})\), where we've set \(d\) to be a similar order of magnitude as the RMLT convective length scale, which is \(O(10^{4})\) throughout most of the interior, except very close to the surface. 
This estimated effective Ekman number is several orders of magnitude larger than its microscopic counterpart, but is smaller than what is often used in numerical simulations. The right panel of Fig. 12 shows the effective viscosity as a function of radius for our inflated Hot Jupiter model. We observe that all values for \(\nu_{\rm eff}\) have shifted upwards compared to our Jupiter model. However, even in this model we expect to be in the fast tides regime throughout (almost) the entire planet, which would predict \(\nu_{\rm eff}\approx 10^{2}\ m^{2}/s\). Thus the increased irradiation and internal heating introduced here result in significantly larger effective viscosities, and therefore smaller values of \(Q^{\prime}\). ### Tidal dissipation rates in Jupiter and Hot Jupiters Now that we have obtained radial profiles of \(\nu_{\rm eff}\) we can use these to compute the resulting damping of the equilibrium tide and the associated tidal quality factor \(Q^{\prime}\) in our planetary models. We follow the approach described in Barker (2020) to calculate the equilibrium tidal flow and its resulting dissipation and omit details here. To do so, we first calculate the irrotational equilibrium tide (more specifically the dominant quadrupolar \(l=2\) component with azimuthal wave number \(m=2\)) defined in their section 2, since this is likely to be the correct one in giant planets2. The dissipation of this tidal flow is computed assuming an effective viscosity that acts like an isotropic microscopic kinematic viscosity but with a local value \(\nu_{\rm eff}(r)\) to damp the equilibrium tide. This requires performing the integral over radius in Eq. 20 of Barker (2020) to obtain the dissipation rate \(D_{\nu}\). The only modification here is that we account for the rotational dependence of \(\nu_{\rm eff}\) and \(\omega_{c}\) as described above; otherwise we employ their Eq. 27 to obtain \(\nu_{\rm eff}(r)\) in the various different frequency regimes (the slightly different pre-factors we have obtained lead to negligible differences here). The resulting tidal quality factor is then obtained by: Footnote 2: This should be used in preference to the equilibrium tide of e.g. Zahn (1989) in convective regions of planets since \(|N^{2}|\ll\omega^{2}\) (Terquem et al., 1998). We neglect the action of rotation on this component by considering Coriolis forces on the equilibrium tide to drive the wavelike tide. This equilibrium/dynamical or non-wavelike/wavelike splitting of the tidal response is formally valid in linear theory for low frequency (relative to the dynamical frequency) tidal forcing (Ogilvie, 2012). \[Q^{\prime}=\frac{3(2l+1)R^{2l+1}}{16\pi G}\frac{|\omega||A|^{2}}{D_{\nu}}, \tag{53}\] where \(A\propto\epsilon\) is the amplitude of the tidal perturbation (so that the ratio \(D_{\nu}/|A|^{2}\) and hence \(Q^{\prime}\) is independent of tidal amplitude in linear theory), \(G\) is the gravitational constant, \(R\) is the planetary radius, and \(\omega=2\pi/P_{\rm tide}\). To obtain \(Q^{\prime}\) for the elliptical instability we can use Eq. 18 and find (Barker and Lithwick, 2013): \[Q^{\prime}\approx\frac{10^{5}}{\chi/0.05}\left(\frac{m_{1}+m_{2}}{m_{2}} \right)\left(\frac{P_{\rm orb}}{1\,{\rm d}}\right)^{4}, \tag{54}\] where \(\chi\) is fit from our simulations. This is particularly crude because it equates the size of our Cartesian box with the planetary radius, but in the absence of a better approach it provides us with an estimate that is broadly consistent with simulations.
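As an illustration of how these expressions are applied, the short Python sketch below transcribes Eqs. (53) and (54) directly. The gravitational constant is the standard SI value; the default \(\chi=0.05\), the mass ratio \((m_{1}+m_{2})/m_{2}=1\) and the two orbital periods are placeholder choices for illustration only, while \(D_{\nu}\) and \(A\) would in practice come from the radial integration and tidal amplitude described above.

```python
import math

G = 6.674e-11  # gravitational constant [SI]

def Q_prime_equilibrium(R, omega, A, D_nu, l=2):
    """Eq. (53): Q' = 3(2l+1) R^(2l+1) |omega| |A|^2 / (16 pi G D_nu), in consistent SI units."""
    return 3.0 * (2 * l + 1) * R**(2 * l + 1) * abs(omega) * abs(A)**2 / (16.0 * math.pi * G * D_nu)

def Q_prime_elliptical(P_orb_days, chi=0.05, mass_ratio=1.0):
    """Eq. (54): Q' ~ 1e5 / (chi/0.05) * ((m1+m2)/m2) * (P_orb / 1 d)^4 (rough estimate)."""
    return 1e5 / (chi / 0.05) * mass_ratio * P_orb_days**4

# Placeholder evaluation of Eq. (54) for the two orbital periods shown in Fig. 13;
# Q_prime_equilibrium would be evaluated once D_nu and A are known from the radial integration.
for P_orb in (1.0, 3.0):
    print(f"P_orb = {P_orb:g} d: Q' (elliptical instability) ~ {Q_prime_elliptical(P_orb):.1e}")
```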
This elliptical instability estimate represents a "nonlinear" mechanism of tidal dissipation because \(Q^{\prime}\) depends on tidal amplitude and has a strong dependence on the orbital period \(P_{\rm orb}\). To put our results for these two mechanisms in context, we also compute \(Q^{\prime}\) resulting from the dissipation of linearly excited inertial waves in this planetary model by applying the frequency-averaged formalism of Ogilvie (2012). We follow the approach outlined in Section 3.1 and Eq. 30 of Barker (2020) to obtain \(Q^{\prime}\) in our planetary models, fully accounting for the planetary structure. This prediction for \(Q^{\prime}\) provides a tidal frequency-independent "typical level of dissipation" due to inertial waves according to linear theory. This method necessarily ignores the potentially complicated frequency-dependence of the dissipation in linear theory and any possible modifications of this by nonlinear effects (e.g. Ogilvie and Lin, 2004; Astoul and Barker, 2022). However, it is thought to be representative of the dissipation of inertial waves excited by linear tidal forcing, i.e. not via the elliptical instability (which also excites inertial waves, but nonlinearly in this regard). We show \(Q^{\prime}\) in Fig. 13 as a function of tidal period for each of these mechanisms. The spin frequency is fixed by setting \(P_{\rm rot}=10\) hr for the Jupiter model in the left panel and \(P_{\rm rot}=1\) day for the Hot Jupiter model in the right panel. For the elliptical instability, we provide two predictions, one with \(P_{\rm orb}=1\) day and the other with \(P_{\rm orb}=3\) days. Note that when \(\nu_{\rm eff}\) is independent of tidal frequency (in the low frequency regime), \(D_{\nu}\propto\omega^{2}\propto P_{\rm tide}^{-2}\) and \(Q^{\prime}\propto\omega^{-1}\propto P_{\rm tide}\), while in the high frequency regime, where \(\nu_{\rm eff}\propto\omega^{-2}\propto P_{\rm tide}^{2}\), \(D_{\nu}\) is independent of \(\omega\) and \(Q^{\prime}\propto\omega\propto P_{\rm tide}^{-1}\). In addition, we expect \(Q^{\prime}\) due to the elliptical instability to scale as \(\omega^{3}\propto P_{\rm tide}^{-3}\) and the frequency-averaged inertial wave prediction to be independent of \(\omega\) by definition. The left panel of Fig. 13 demonstrates that convective damping of equilibrium tides by an effective viscosity is indeed an inefficient tidal dissipation mechanism in giant planets and leads to large \(Q^{\prime}\). The low tidal frequency regime predictions, in dashed-blue and dashed-magenta for MLT and RMLT respectively, show their strongest dissipation when the tidal frequency is large. Note that these predictions are calculated using the classical prefactor of \(1/3\) for the effective viscosity for illustration. These lines indicate that if RMLT applies, as is expected, \(Q^{\prime}\) is still \(\mathcal{O}(10^{9})\) even if we neglect the frequency-reduction of \(\nu_{\rm eff}\); thus the dissipation (and resulting tidal evolution) is weak. The combination of the low, intermediate and high tidal frequency regimes for \(\nu_{\rm eff}\) with the fitted prefactors, shown in solid-blue and solid-red, indicates that the high tidal frequency regime impacts the effective viscosity significantly, particularly when \(P_{\rm tide}\) is small. These predictions approximately connect to the frequency-independent MLT and RMLT predictions for large \(P_{\rm tide}\), where there is a transition to the intermediate and low frequency regimes.
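The slopes quoted above can be checked symbolically. The following sketch assumes only that \(D_{\nu}\propto\nu_{\rm eff}\,\omega^{2}\) at fixed tidal amplitude and that \(Q^{\prime}\propto\omega/D_{\nu}\) (as implied by Eq. 53), and then substitutes the low and fast-tides behaviours of \(\nu_{\rm eff}\) to recover \(Q^{\prime}\propto P_{\rm tide}\) and \(Q^{\prime}\propto P_{\rm tide}^{-1}\), respectively.

```python
import sympy as sp

P_tide = sp.symbols("P_tide", positive=True)
omega = 2 * sp.pi / P_tide                      # tidal frequency

def q_prime_scaling(nu_eff):
    """Q' up to constant factors, assuming D_nu ~ nu_eff * omega**2 and Q' ~ omega / D_nu."""
    D_nu = nu_eff * omega**2
    return sp.simplify(omega / D_nu)

# Low frequency regime: nu_eff independent of omega  ->  Q' ~ P_tide
print(q_prime_scaling(sp.Integer(1)))           # prints P_tide/(2*pi)
# High (fast tides) regime: nu_eff ~ omega**(-2)  ->  Q' ~ 1/P_tide
print(q_prime_scaling(omega**(-2)))             # prints 2*pi/P_tide
```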
The prefactors obtained using fits to simulations are larger than the dashed-magenta prediction, thus resulting in a slightly lower \(Q^{\prime}\) when transitioning into the low tidal frequency regime. This is because the factor 1/3 often utilised, as plotted here for the MLT and RMLT lines, is essentially arbitrary, unlike our numerical fits. The elliptical instability on a 1 day orbit (solid-green) on the other hand is an efficient dissipation mechanism, particularly when the tidal frequency is high. It is significantly more effective than convective damping of equilibrium tides according to each prediction for the entire range of tidal periods considered. The elliptical instability prediction on a 3 day orbit (dashed-green) is weaker than the 1 day orbit prediction, but would still predict more effective dissipation even than the slow-tides MLT effective viscosity for almost all of the parameter range considered. The most efficient mechanism in this model, except for the very highest tidal frequencies, is the frequency-averaged dissipation due to inertial waves shown in solid-cyan, which produces a \(Q^{\prime}=\mathcal{O}(10^{3})\) for our chosen rotation period. Since the rotation period is known, we would thus predict a typical value \[Q^{\prime}\approx 2\cdot 10^{3}\left(\frac{P_{\rm rot}}{10{\rm hr}}\right)^{2}, \tag{55}\] for tidal dissipation due to inertial waves. Indeed, this is sufficiently dissipative to explain tidal dissipation rates in Jupiter and Saturn (Lainey et al., 2009, 2012; Lainey et al., 2017), without requiring any resonance-locking scenario (e.g. Fuller et al., 2016). The Hot Jupiter model on the other hand has a larger radius, stronger convection, and is rotating somewhat more slowly, so it has much higher effective viscosities and is impacted to a lesser extent by rotation. As a result, all mechanisms except the dissipation of (linear) inertial waves are more efficient. The elliptical instability is predicted to be particularly efficient for short orbital periods, e.g. the 1 day orbit prediction gives \(Q^{\prime}=\mathcal{O}(10^{2})\) when the tidal period is 1 day. The increase in dissipation here due to the elliptical instability stems from the large radius of the Hot Jupiter, resulting in \(\epsilon\approx 0.095\). Radius inflation and internal heating, as well as the marginally decreased rotation rate, allow the convective damping of equilibrium tides to operate more efficiently than in the Jupiter-like model in the left panel. However, once again the inertial wave mechanisms are predicted to be substantially more dissipative than effective viscosity acting on equilibrium tides. Linear dissipation of inertial waves occurs with a similar order of magnitude to the Jupiter-like model, and is predicted to be dominant for \(P_{\rm tide}\gtrsim 2\) days. ## 5 Discussion and Conclusion ### Comparison with previous work We find tidal dissipation rates due to the elliptical instability that are roughly equivalent to those observed in prior work (Barker & Lithwick, 2013a, b; Barker, 2016) when it operates. Indeed, our efficiency factor \(\chi\) is consistent with a similar value (\(\chi\in[0.01,0.1]\)) and is independent of Rayleigh number when the elliptical instability operates.
We also potentially observe the \(\epsilon^{6}\) scaling found in Barker & Lithwick (2013a), and find that if this scaling holds true only the very closest Hot Jupiters experience significant tidal dissipation due to the elliptical instability (because this would effectively imply a much smaller value of \(\chi\) for realistic \(\epsilon\) values).

Figure 13: Tidal quality factor \(Q^{\prime}\) as a function of tidal period for a myriad of mechanisms. Left: Jupiter model. Right: Inflated Hot Jupiter model. In both panels, MLT and RMLT predictions for \(Q^{\prime}\) due to convective damping of equilibrium tides using an effective viscosity with no tidal frequency reduction (low frequency regime) are shown in dashed-blue and -magenta respectively. The frequency-reduced effective viscosities in solid-blue and -red for MLT and RMLT respectively indicate that the frequency reduction significantly reduces the effectiveness of the dissipation. The elliptical instability in solid-green and dashed-green lines for two different orbital periods, and the (linear) frequency-averaged inertial wave dissipation in solid-cyan are also plotted. Inertial waves are considerably more dissipative than equilibrium tide damping by turbulent viscosity, whether they are linearly or nonlinearly (i.e. via elliptical instability) excited. Elliptical instability is predicted to be dominant for the shortest tidal periods, and linear excitation of inertial waves is dominant for longer periods. The Hot Jupiter model has smaller \(Q^{\prime}\) (hence more efficient dissipation) for all dissipation mechanisms due to the larger radius and slower rotation.

The scaling laws we have confirmed using temperature-based RMLT match those obtained from Coriolis-Inertial-Archimedean (CIA) triple balance arguments (e.g. Ingersoll & Pollard, 1982; Aubert et al., 2001; Jones, 2015; Gastine et al., 2016; Guervilly et al., 2019; Aurnou et al., 2020; Bouillaut et al., 2021, and many others), and the success of these temperature-based scalings reinforces the applicability of the diffusion-free flux-based scalings confirmed previously using simulations (Barker et al., 2014; Currie et al., 2020). We observe the transition from RMLT to MLT to begin around \(\mathrm{Ro}_{c}\approx 0.1\) as in Barker et al. (2014), and find RMLT is the appropriate description of (sufficiently turbulent) convection for \(\mathrm{Ro}_{c}\lesssim 0.1\). Similar to Guervilly et al. (2019) we find sufficiently strongly supercritical (turbulent) convection is required for the convective length scale to agree with the diffusion-free predictions of RMLT. Our results for the length scale depend strongly on how it is calculated, but they are (when properly interpreted) not generally consistent with the predictions from the linear onset of convection. One caveat to the above is that different methods to calculate the convective length-scale give vastly different results, and simulations must be turbulent enough and sufficiently rotationally-constrained to obtain reasonable agreement with RMLT. We favour definitions for the length-scale based on either the peak or the integrated "centroid" wavenumber for the heat flux spectrum as a function of horizontal wavenumber, which give better agreement with RMLT than e.g. the temperature fluctuation spectrum.
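To sketch the length-scale diagnostics we favour, the snippet below extracts a convective length scale from a horizontal heat-flux spectrum using both the peak wavenumber and a flux-weighted centroid wavenumber, converting to a length via \(\ell=2\pi/k\). The synthetic spectrum and the precise centroid definition are illustrative assumptions; they indicate the procedure rather than reproduce our actual measurements.

```python
import numpy as np

def length_scales_from_flux_spectrum(k, F_k):
    """Return (l_peak, l_centroid) from a horizontal heat-flux spectrum F(k).

    l_peak uses the wavenumber at which F(k) is largest; l_centroid uses the
    flux-weighted mean wavenumber k_bar = sum(k F) / sum(F). Lengths via l = 2*pi/k.
    """
    k_peak = k[np.argmax(F_k)]
    k_bar = np.sum(k * F_k) / np.sum(F_k)
    return 2 * np.pi / k_peak, 2 * np.pi / k_bar

# Synthetic placeholder spectrum peaked at k ~ 30 (arbitrary units).
k = np.arange(1, 200)
F_k = k**4 * np.exp(-k / 7.5)

l_peak, l_centroid = length_scales_from_flux_spectrum(k, F_k)
print(f"l_peak = {l_peak:.3f}, l_centroid = {l_centroid:.3f}")
```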
The most challenging case for testing RMLT is when measuring the convective length scale as a function of rotation rate (Ekman number) at constant Rayleigh number, where only a narrow range of simulations is in the appropriate regime (sufficiently turbulent but rotationally-constrained). Furthermore, we find that when fixing supercriticality, i.e. similar to measuring the convective length scale as a function of \(\mathrm{Ra}\,\mathrm{Ek}^{4/3}\), the length-scale scales proportional to \(\mathrm{Ek}^{1/3}\propto\Omega^{-1/3}\). While this superficially agrees with the linear onset prediction (in which the scale is viscously-controlled), we demonstrate that this coincides with the diffusion-free prediction of RMLT when supercriticality is fixed, and we find a different pre-factor than predicted by the former. Furthermore, we find that RMLT is a valid description of convection for convective Rossby numbers \(\mathrm{Ro}_{c}<0.1\), with a transition in scaling laws from RMLT to MLT starting at \(\mathrm{Ro}_{c}\approx 0.1\). Regarding the tidal frequency dependence of the effective viscosity of turbulent convection in damping the equilibrium tide, our results are consistent with the same three regimes of tidal frequency as the non-rotating simulations of Duguid et al. (2020), even though they used an oscillating shear flow and we use a more realistic equilibrium tidal (elliptical) flow. However, we have studied rotating convection and thus obtained different prescriptions in terms of the dimensionless parameters that are described well by our heuristic application of RMLT. Despite these differences, our results are consistent with the intermediate tidal frequency scaling of \((\omega/\omega_{c})^{-1/2}\) found by Duguid et al. (2020) and Vidal & Barker (2020). The prefactors we obtain in the intermediate and high tidal frequency regimes are lower than theirs by approximately a factor of two. However, we observe a transition from the high tidal frequency to the intermediate tidal frequency regime at the same value \(\omega/\omega_{c}\approx 5\). ### Future work One avenue for future work would be to perform simulations varying \(\gamma\) and \(\Omega\), to fully disentangle the different dependencies on \(\mathrm{Ro}_{c}\) and \(\mathrm{Ro}_{c\omega}\). Changing \(\gamma\) and \(\Omega\) independently would allow the realistic scenario of a planet orbiting with a nonzero orbital frequency in the inertial frame to be studied. This would be likely to impact the strength of the elliptical instability as it changes its linear growth rate. In particular, suppression of the elliptical instability would then be expected for different strengths of convective driving (or for a different \(\epsilon\) at fixed \(\mathrm{Ra}\)). However, we do not expect any of our conclusions will be substantially modified in this case. Furthermore, in our current setup the Cartesian box is situated at the poles of the planet, with the gravity and rotation axis both pointing in the \(z\)-direction. The latitudinal location of the box, and thus the relative directions of gravity and the rotation axis, could affect the resulting tidal dissipation. If the box is moved to a lower latitude, the directions of gravity and the rotation axis will be misaligned, causing convective motions subjected to rapid rotation to change angle (Novi et al., 2019; Currie et al., 2020).
At lower latitudes the vortices introduced by rotating convection turn into zonal flows, which could modify dissipation due to the elliptical instability as well as the effective viscosity of convection. In addition, Currie et al. (2020) demonstrated that the predictions of RMLT hold from the poles to mid-latitudes, but at low-latitudes deviations were observed due to the presence of both zonal flows and because boundary conditions constrain the flow in the latitudinal direction. Hence, future work should focus on obtaining a theoretical understanding of convection and of the effective viscosity at mid and low latitudes, in the presence of strong zonal flows. There are strong magnetic fields present in Jupiter, and it is expected that Hot Jupiters would also have strong fields. This expectation is supported by observations tentatively inferring that a number of Hot Jupiters possess strong magnetic fields (Cauley et al., 2019). Therefore it is important to study the inclusion of magnetic fields, as they could have significant effects on tidal dissipation. Magnetic fields may prevent LSV formation by the elliptical instability, and therefore allow a continuous operation of the resulting energy transfers (Barker & Lithwick, 2013). It is likely that they also prevent the formation of the convective LSV (e.g. Maffei et al., 2019), and if so could allow continuous operation of the elliptical instability while convection is present in the system, potentially allowing for enhanced tidal dissipation. In addition, sufficiently strong magnetic fields will modify the properties of the convection and therefore the effective viscosity, and it remains to be seen how valid the predictions of Stevenson (1979) would be in this case. Also on the topic of magnetic fields, in a similar fashion to convection acting as an effective viscosity, an effective turbulent magnetic resistivity might arise (Tobias & Cattaneo, 2013; Cattaneo & Tobias, 2013). The turbulent magnetic resistivity has been explored previously in accretion discs (Lesur & Longaretti, 2009), but not in the context of tidal dissipation. It is entirely unknown whether an effective resistivity acting on a tidal flow features the same frequency-reduction as the effective viscosity (as assumed by Wei, 2022), and whether it might be an effective dissipation mechanism of the equilibrium tide for high tidal frequencies. Another question lies in the applicability of effective turbulent diffusivities like the effective viscosity and effective resistivity. The effective viscosity as calculated here is purely representative of the interaction of rotating convection with the tidal background flow. It is unclear if, for instance, the interaction between inertial waves generated by the elliptical instability (or more directly by tidal forcing) and convection can be modelled in the same way. So studying the interaction of convection with inertial waves, and calculating whether (and if so how) this can be modelled using an effective viscosity is an important topic for future work. In addition, the possible role of alternative energy transfer routes for fast tides, such terms involving correlations between tidal flow components and gradients of the convective flow (which identically vanish in our model) should be explored in global models to determine if they are ever important (e.g. Terquem, 2021; Barker & Astoul, 2021). A final avenue of future work is related to the analysis of tidal dissipation rates using planetary models. 
It would be worthwhile to modify the equation of state in a manner akin to Muller et al. (2020), which would allow us to obtain an extended dilute core and to measure the impact of such a core on tidal dissipation rates. Furthermore, a stably stratified dilute core might provide an important additional contribution to tidal dissipation by permitting the excitation of internal (inertia-)gravity waves (e.g. Fuller et al., 2016; Andre et al., 2019; Pontin et al., 2020, 2023; Lin, 2023; Dewberry, 2023). Finally, studying how \(Q^{\prime}\) evolves with planetary evolution for each of these mechanisms would be worthwhile. For self-consistency, one might then consider also evolving orbital parameters and irradiation fluxes in tandem with the structural evolution. ### Conclusion We have studied interactions between the elliptical instability and rotating turbulent convection in a local model representing a small patch of a giant planet (or star), building upon the simulations and analysis in Paper 1. We have found the elliptical instability to provide time-averaged tidal dissipation rates consistent with an \(\epsilon^{3}\) scaling when it operates (where \(\epsilon\) is proportional to the dimensionless tidal amplitude), which would lead to tidal quality factors \(Q^{\prime}\propto P_{\rm orb}^{4}\) (consistent with Barker & Lithwick, 2013a,b; Barker, 2016). We find a dissipation rate sufficient to suggest this tidal mechanism could be the dominant one for the very shortest-period Hot Jupiters, with orbital periods shorter than two days. In this work we find that the observed efficiency factor (\(0.05\approx\chi\lesssim 0.18\) as an upper bound, defined such that in our units the dissipation rate \(D\equiv\chi\epsilon^{3}\gamma^{3}\)) seems to be independent of the convective driving (Rayleigh number) as long as the elliptical instability operates. Some of our results are also consistent with a steeper \(\epsilon^{6}\) scaling, which, if robust, would significantly weaken tidal dissipation for realistic values of \(\epsilon\), restricting the effectiveness of this mechanism except for the very shortest orbital periods. Our simulations have also obtained a sustained energy injection rate scaling as \(\epsilon^{2}\) for smaller values of \(\epsilon\) than those for which the elliptical instability is observed. This can be interpreted as an effective viscosity arising from the interaction between rotating convection and the equilibrium tidal flow that is independent of \(\epsilon\) (as would be predicted by a linear tidal mechanism). On the other hand, this effective viscosity is observed to depend on the convective velocity, length scale and tidal frequency. In this work we have obtained scaling laws for convective velocities and length scales, which are used to find predictions for the convective frequency and the effective viscosity, using both (temperature-based) MLT and RMLT prescriptions. We find very good agreement between the predictions of RMLT and our simulation data. Our simulations confirm the applicability of the diffusion-free scalings of RMLT (e.g. Stevenson, 1979; Barker et al., 2014; Currie et al., 2020; Aurnou et al., 2020) to describe sufficiently turbulent rapidly rotating convection.
We find that the scaling laws for the effective viscosity as a function of convective velocity, length scale and frequency - when the rotational modification of these quantities is accounted for - previously found in non-rotating simulations (Duguid et al., 2020) largely hold true in our rotating simulations. Our results support the frequency-reduction of the effective viscosity for fast tides (\(\omega_{c}/\omega\))\({}^{2}\) when \(\omega\gg\omega_{c}\). We also confirm the presence of the intermediate frequency regime they identified in our simulations, and that the transition to this regime occurs at a similar ratio of \(\omega/\omega_{c}\approx 5\). Furthermore, when considering the more realistic flux-based scalings instead of temperature-based scalings we find that the MLT and RMLT predictions for the high frequency (fast tides) regime for the effective viscosity are identical and are independent of rotation rate (as long as the heat flux is independent of rotation rate, which is a reasonable first approximation). Finally, we employed the MESA code to construct illustrative interior models of a Jupiter-like and an inflated Hot-Jupiter-like planet, subject to Jupiter-like irradiation and Hot Jupiter-like irradiation plus artificial interior heating, respectively. We compute the rotational modifications of convective velocities and length scales in these models, as well as the modifications of the effective viscosity to allow us to compute tidal dissipation resulting from convective damping of equilibrium tides according to the scaling laws we have derived and verified with simulations. In both models (even in inflated short-period Hot Jupiters), we find the convective Rossby numbers to be much smaller than one, indicating that the convection is strongly affected by rotation, therefore motivating our study of this regime in this paper. We find that for almost all applications to giant planets, the fast tides regime, in which the tidal frequency is much larger than the convective frequency, is highly likely to be the relevant one. In this regime the effective viscosity scales as \(\nu_{\rm eff}\propto(\omega_{c}/\omega)^{2}\). The resulting tidal quality factors \(Q^{\prime}\) for equilibrium tide damping (computed following Barker, 2020) are estimated to be in excess of \(10^{9}\) for tidal periods of interest, and this mechanism is therefore predicted to be an ineffective one in giant planets. On the other hand, we predict the elliptical instability to be efficient for very short orbital and tidal periods (with \(Q^{\prime}\sim 10^{2}\) in Hot Jupiters for periods of order one day), but that it falls off rapidly with increasing (tidal and orbital) periods. We also compute for the first time \(Q^{\prime}\) arising from the frequency-averaged dissipation due to inertial waves in "realistic models" of giant planets (following Ogilvie, 2012; Barker, 2020). This mechanism assumes these waves to be excited linearly by tidal forcing, as opposed to nonlinearly (with respect to tidal amplitude) by the elliptical instability. Inertial waves are by far the most efficient mechanism studied here, either those excited by the elliptical instability for short orbital and tidal periods, or by the linear frequency-averaged dissipation. The latter leads to \(Q^{\prime}\approx 10^{3}(P_{\rm rot}/10{\rm hr})^{2}\) for Jupiter-like rotation periods, which is consistent with the efficient tidal dissipation rates required to explain the observed orbital migration of the moons of Jupiter and Saturn (e.g. 
Lainey et al., 2009, 2012, where tidal amplitudes are likely to be too small for the elliptical instability to operate effectively). All mechanisms except the frequency-averaged inertial wave mechanism are more efficient in the Hot Jupiter model due to its larger radius, weaker rotation and stronger convective driving. This allows the elliptical instability to be on par or even more efficient than linearly-excited inertial waves in the shortest-period Hot Jupiters. We conclude that inertial wave mechanisms are probably the most efficient ones for dissipating tidal energy in giant planets, at least those without extended stable layers. ## Acknowledgements NBV was supported by EPSRC studentship 2528559. AJB and RH were supported by STFC grants ST/S000275/1 and ST/W000873/1. RH would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme "Frontiers in dynamo theory: from the Earth to the stars" where work on this paper was undertaken. This work was supported by EPSRC Grant No. EP/R014604/1. RH's visit to the Newton Institute was supported by a grant from the Heilbronn Institute. Simulations were undertaken on ARC4, part of the High Performance Computing facilities at the University of Leeds, and the DiRAC Data Intensive service at Leicester, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K000373/1 and ST/R002363/1 and STFC DiRAC Operations grant ST/R001014/1. DiRAC is part of the National e-Infrastructure. ## Data Availability The simulation data used in this article will be shared on reasonable request to the corresponding author.
2307.16615
Well-posedness and simulation of weak solutions to the time-fractional Fokker-Planck equation with general forcing
In this paper, we investigate the well-posedness of weak solutions to the time-fractional Fokker-Planck equation. Its dynamics is governed by anomalous diffusion, and we consider the most general case of space-time dependent forces. Consequently, the fractional derivatives appear on the right-hand side of the equation, and they cannot be brought to the left-hand side, which would have been preferable from an analytical perspective. For showing the model's well-posedness, we derive an energy inequality by considering nonstandard and novel testing methods that involve a series of convolutions and integrations. We close the estimate by a Henry-Gronwall-type inequality. Lastly, we propose a numerical algorithm based on a nonuniform L1 scheme and present some simulation results for various forces.
Marvin Fritz
2023-07-31T12:45:08Z
http://arxiv.org/abs/2307.16615v1
Well-posedness and simulation of weak solutions to the time-fractional Fokker-Planck equation with general forcing ###### Abstract. In this paper, we investigate the well-posedness of weak solutions to the time-fractional Fokker-Planck equation. Its dynamics is governed by anomalous diffusion, and we consider the most general case of space-time dependent forces. Consequently, the fractional derivatives appear on the right-hand side of the equation, and they cannot be brought to the left-hand side, which would have been preferable from an analytical perspective. For showing the model's well-posedness, we derive an energy inequality by considering nonstandard and novel testing methods that involve a series of convolutions and integrations. We close the estimate by a Henry-Gronwall-type inequality. Lastly, we propose a numerical algorithm based on a nonuniform L1 scheme and present some simulation results for various forces. Key words and phrases:Time-fractional Fokker-Planck equation; well-posedness of weak solutions; Galerkin approximation; nonuniform L1 scheme 2020 Mathematics Subject Classification: Primary: 35R11; 35D30; 35A01; 65M60 The author is supported by the state of Upper Austria. \({}^{*}\)Corresponding author: Marvin Fritz. ## 1. Introduction Mathematicians and engineers have given time-fractional differential equations significant consideration in recent years. Such equations are nonlocal in time and possess an inherent history effect. We refer the interested reader to the multi-volume work "Handbook of Fractional Calculus with Applications" [6, 37, 43] and its references therein for more information and typical real-world applications such as physics, control theory, engineering, life and social sciences. In this work, we are concerned with the time-fractional Fokker-Planck equation, which permits subdiffusive behavior and its derivation and application have been investigated earlier in literature. We distinguish the model by its exterior force, which can be time-dependent [26, 42], space-dependent [7, 8, 10, 17, 31, 32, 33, 39, 41], or space-time dependent [4, 18, 27, 46]. We focus on the latter, most general, case and mention the publications [20, 22, 23, 25, 28, 34, 38, 48] that explored numerical methods for the time-fractional Fokker-Planck equation with space-time dependent forces. We emphasize that several articles have investigated a "time-fractional Fokker-Planck"-type equation, where the time-fractional derivative in the sense of Caputo appears on the left-hand side of the PDE. This is correct in the case of a time-independent force. However, for time dependent forces this model is not correct and according to [18], it is "physically defeasible" and its solution "does not correspond to a physical stochastic process". In this work, we provide some mathematical and numerical insights on this reformulation and the differences of both models. We present an analytical treatment of weak solutions to the time-fractional Fokker-Planck equation with space-time dependent forces. Specifically, we follow the Galerkin ansatz by spatially discretizing the system and deriving appropriate energy constraints, allowing us to reach the limit in the discretized system. We mention that weak solutions to other nonlinear time-fractional PDEs have been previously studied using the Galerkin method in the published works [13, 14, 15]. In addition, preliminary steps have been taken in the optimal control [9] and analysis [28, 29, 30, 24, 36, 25] of the time-fractional Fokker-Planck system. 
Nonetheless, mild, strong, and classical solutions have been investigated. The difficulty lies in the low regularity of weak solutions and the appearance of time-dependent forces, which do not allow us to transform the system to a more accessible system regarding analysis. A coupled system of a time-fractional Fokker-Planck equation with the Navier-Stokes equations was investigated in the work [16] but because of the complex coupling between the equations only the case of \(\alpha\in(\frac{1}{2},1)\) was considered. In Section 2, we discuss the mathematical model with its initial and boundary data. In Section 3, we present various function spaces and recall important conclusions from the theory of fractional derivatives, including chain inequalities, embedding theorems, and Gronwall-type inequalities. In Section 4, we finally present and verify the theorem declaring the well-posedness of weak solutions. Here, the system is discretized, and appropriate energy bounds are derived to pass the limit in the discretized system. In Section 5, we propose a numerical discretization of the time-fractional equation based on the nonuniform L1 scheme in time and finite elements in space. We show simulation results and focus on the influence of the fractional derivative. Moreover, we compare the model that is studied here to the physical defeasible model as mentioned above. ## 2 Modeling of the time-fractional Fokker-Planck equation Let \(\Omega\subset\mathbb{R}^{d}\), \(d\in\mathbb{N}\), be a Lipschitz domain and \(T<\infty\) a fixed final time. Shortly, we denote the time-space domain by \(\Omega_{T}=\Omega\times(0,T)\). Let \(\psi:\Omega_{T}\to\mathbb{R}\) denote a probability density function that represents the probability at a time \(t\) of finding the center of mass of a particle in the volume element \(x+\mathrm{d}x\). The time-fractional Fokker-Planck model with space-time dependent force can be derived by utilizing the Langevin equations, see [26, 27], and the model reads \[\partial_{t}\psi(x,t)-D\Delta D_{t}^{1-\alpha}\psi(x,t)+\mathrm{div}\big{(}F(x,t)D_{t}^{1-\alpha}\psi(x,t)\big{)}=0. \tag{2.1}\] Here, \(F:\Omega_{T}\to\mathbb{R}^{d}\) denotes the space-time dependent external force and \(D\) the diffusion coefficient. In contrast to the typical model of integer-order, the fractional derivative in the sense of Riemann-Liouville is introduced, which is defined by \[D_{t}^{1-\alpha}u(t)=\frac{1}{\Gamma(\alpha)}\frac{\mathrm{d}}{\mathrm{d}t} \int_{0}^{t}\frac{u(s)}{(t-s)^{1-\alpha}}\,\mathrm{d}s,\] where \(\Gamma\) denotes Euler's Gamma function. We introduce the singular kernel function \(g_{\alpha}(t)=t^{\alpha-1}/\Gamma(\alpha)\) and therefore, we can rewrite the fractional derivative with the convolution operator as \[D_{t}^{1-\alpha}u=\partial_{t}(g_{\alpha}*u).\] In the limit case of \(\alpha=1\), the model is reduced to the standard Fokker-Planck equation. This time-fractional model has been studied in the previous works [20, 22, 23, 25, 28, 34, 48] with regard to numerical methods and in [24] for the existence of mild and classical solutions. We note that the fractional derivative in the sense of Riemann-Liouville appears naturally in the equation's derivation, see [27]. However, the fractional derivative in the sense of Caputo would be preferable considering our variational approach to time-fractional partial differential equations and the involved analytical machinery. 
The Caputo derivative of order \(\alpha\) is denoted by \(\partial_{t}^{\alpha}\) and it reads \[\partial_{t}^{\alpha}u=D_{t}^{\alpha}(u-u_{0}). \tag{2.2}\] Here, \(u_{0}\) is the initial datum of the underlying system, which shall fulfill \[\big{(}g_{1-\alpha}*(u-u_{0})\big{)}(0)=0\] in the case that \(u\) is not continuous. If the force is time-independent, we could simply convolve the time-fractional Fokker-Planck equation (2.1) with the singular kernel function \(g_{1-\alpha}\) and exploit the properties \(g_{1-\alpha}*D_{t}^{1-\alpha}u=u\) and \(\partial_{t}^{\alpha}u=g_{1-\alpha}*\partial_{t}u\), see below in Section 3, to obtain the time-fractional equation \[\partial_{t}^{\alpha}\psi(x,t)-D\Delta\psi(x,t)+\operatorname{div}\bigl{(}F(x )\psi(x,t)\bigr{)}=0, \tag{2.3}\] which would be more accessible for analytical and numerical methods. However, we cannot simply exclude the relevant cases of time-dependent forces. In such cases, one would require a product rule for fractional derivatives to write \(FD_{t}^{1-\alpha}\psi\) as \(D_{t}^{1-\alpha}(F\psi)-D_{t}^{1-\alpha}F\psi\). However, this is not correct for fractional derivatives, as can already be seen from the example \(\psi=F=1\). Then it holds \[FD_{t}^{1-\alpha}\psi=g_{\alpha}\neq 0=g_{\alpha}-g_{\alpha}=D_{t}^{1-\alpha} (F\psi)-D_{t}^{1-\alpha}F\psi.\] There is a fractional version of the Leibniz rule that requires two smooth functions \(f,g\) and reads [11, Theorem 2.18] \[D_{t}^{\alpha}(fg)=fD_{t}^{\alpha}g+\sum_{k=1}^{\infty}\binom{\alpha}{k} \partial_{t}^{k}f\cdot(g_{1-k+\alpha}*g).\] The issues with this formula are already apparent: it requires smooth functions, and it involves an infinite series on the right-hand side. Let us assume that \(F\) and \(\psi\) are smooth. Then we want to bring the fractional derivative in front of \(F\psi\) by the formula \[FD_{t}^{1-\alpha}\psi=D_{t}^{1-\alpha}(F\psi)-\sum_{k=1}^{\infty}\binom{1- \alpha}{k}\partial_{t}^{k}F\cdot(g_{2-k-\alpha}*\psi).\] Afterward, we convolve the equation with \(g_{1-\alpha}\) and obtain \[\begin{split}&\partial_{t}^{\alpha}\psi(x,t)-D\Delta\psi(x,t)+ \operatorname{div}\bigl{(}F(t,x)\psi(x,t)\bigr{)}\\ &=\sum_{k=1}^{\infty}\binom{1-\alpha}{k}g_{1-\alpha}*\bigl{(} \partial_{t}^{k}F\cdot(g_{2-k-\alpha}*\psi)\bigr{)}.\end{split} \tag{2.4}\] Several published articles have studied this model while neglecting the complete right-hand side. This is also the reason it is claimed in [18] that such a model (neglecting the right-hand side) is "physically defeasible" and its solution "does not correspond to a physical stochastic process". In the case that \(F\) is affine linear in \(t,\) i.e.
\(F(t,x)=a(x)+b(x)t,\) it yields \[\partial_{t}^{\alpha}\psi(x,t)-D\Delta\psi(x,t)+\operatorname{div} \bigl{(}F(t,x)\psi(x,t)\bigr{)}\] \[\quad=(1-\alpha)\cdot b(x)\cdot(g_{2-2\alpha}*\psi)(t)\] We would rather not consider infinitely many terms on the right-hand side of the PDE for a general \(F\) and therefore, we instead exploit the definition (2.2) of the Caputo derivative to obtain \[D_{t}^{1-\alpha}u(t)=\partial_{t}^{1-\alpha}u(t)+D_{t}^{1-\alpha}u_{0}= \partial_{t}^{1-\alpha}u(t)+u_{0}g_{\alpha}(t),\] and rewrite the time-fractional Fokker-Planck equation (2.1) as follows: \[\begin{split}&\partial_{t}\psi(x,t)-D\Delta\partial_{t}^{1- \alpha}\psi(x,t)+\operatorname{div}\bigl{(}F(x,t)\partial_{t}^{1-\alpha}\psi( x,t)\bigr{)}\\ &=g_{\alpha}D\Delta\psi_{0}-g_{\alpha}\operatorname{div}(F\psi_{ 0}).\end{split} \tag{2.5}\] We consider an initial condition \(\psi_{0}\in H^{1}_{0}(\Omega)\) and therefore, it holds that the right-hand side has the regularity \(L^{p}(0,T;H^{-1}(\Omega))\) with \(p<1/(1-\alpha).\) We equip this equation with the homogeneous Dirichlet boundary condition \(\psi=0\) on \(\partial\Omega\). However, our analytical results also hold for no-flux boundary conditions (i.e. homogeneous Neumann). Moreover, the system is equipped with the initial condition \(\psi(0)=\psi^{0}\geq 0\) in \(\Omega\). Physically, \(\psi^{0}\) is a given probability density function, i.e., it is nonnegative function and satisfies \(\int_{\Omega}\psi^{0}(x)\,\mathrm{d}x=1\) (however, we do not need to assume such properties in our well-posedness theorem below). Integrating the time-fractional Fokker-Planck equation in \(\Omega\) and employing integration by parts, we find \(\frac{\,\mathrm{d}}{\,\mathrm{dt}}\int_{\Omega}\psi(x,t)\,\mathrm{d}x=0.\) This implies then \(\int_{\Omega}\psi(x,t)\,\mathrm{d}x=1\) for almost all \(t\). ## 3. Mathematical preliminaries In this part, we present some important concepts and conclusions addressing fractional derivatives. For instance, we provide a fractional version of the Aubin-Lions lemma and a suitable Gronwall lemma. These are important results used in Galerkin-based proofs for showing the existence of weak solutions to partial differential equations. Let \(T<\infty\) be a fixed final time. We have already defined the singular kernel function in the previous section by \(g_{\alpha}(t)=t^{\alpha-1}/\Gamma(\alpha),\)\(t\in(0,T)\), \(\alpha>0\). We can extend the definition to the limit case of \(\alpha=0\) by \(g_{0}=\delta\). We observe that it holds \(g_{\alpha}\in L^{p}(0,T)\) for any \(\alpha>1-\frac{1}{p}\), i.e., \[g_{\alpha}\in L^{\frac{1}{1-\alpha}-\varepsilon}(0,T)\quad\forall\varepsilon \in\bigl{(}0,\tfrac{\alpha}{1-\alpha}\bigr{]}. \tag{3.6}\] Alternatively, using the concept of locally integrable functions, it naturally holds \(g_{\alpha}\in L^{1/(1-\alpha)}_{\mathrm{loc}}(0,T)\). E.g., it holds \(g_{\alpha}\in L^{2}(0,T)\) for any \(\alpha>\frac{1}{2}\) and \(g_{\alpha}\in L^{2}_{\mathrm{loc}}(0,T)\) for any \(\alpha\geq\frac{1}{2}\). Moreover, the kernel function satisfies the following semigroup property, see [11, Theorem 2.4], \[g_{\alpha}*g_{\beta}=g_{\alpha+\beta}\qquad\forall\alpha,\beta\in(0,1). 
\tag{3.7}\] We note that one can bound the \(L^{p}(0,t)\)-norm of a function \(u:(0,T)\to\mathbb{R}\) by its convolution with \(g_{\alpha}\) as follows: \[\begin{split}\|u\|_{L^{p}_{t}}^{p}:=\int_{0}^{t}|u(s)|^{p}\,\mathrm{ds}&\leq t^{1-\alpha}\int_{0}^{t}(t-s)^{\alpha-1}|u(s)|^{p}\,\mathrm{ds}\\ &\leq T^{1-\alpha}\Gamma(\alpha)\bigl{(}g_{\alpha}*|u|^{p}\bigr{)}(t).\end{split} \tag{3.8}\] In other words, the space \[L^{p}_{\alpha}(0,T)=\Big{\{}u:(0,T)\to\mathbb{R}:\|u\|_{L^{p}_{\alpha}}^{p}:=\sup_{t\in(0,T)}(g_{\alpha}*|u|^{p})(t)<\infty\Big{\}}, \tag{3.9}\] is indeed continuously embedded in the space \(L^{p}(0,T)\). We can relate the estimate (3.8) to \(g_{\alpha}\) by noting that \[(g_{\alpha}*|u|^{p})(t)\geq\frac{t^{\alpha-1}}{\Gamma(\alpha)}\|u\|_{L^{p}_{t}}^{p}=g_{\alpha}(t)\|u\|_{L^{p}_{t}}^{p}\geq g_{\alpha}(T)\|u\|_{L^{p}_{t}}^{p}.\] In particular, this yields for any \(s\leq t\) \[(g_{\alpha}*|u|^{p})(t)\geq(g_{\alpha}*|u|^{p})(s)\geq g_{\alpha}(s)\|u\|_{L^{p}_{s}}^{p}. \tag{3.10}\] Therefore, we can integrate this inequality on the time interval \((0,t)\) to obtain \[t\cdot(g_{\alpha}*|u|^{p})(t)\geq\int_{0}^{t}g_{\alpha}(s)\|u\|_{L^{p}_{s}}^{p}\,\mathrm{ds},\] which implies the following useful bound \[\begin{split}(g_{\alpha}*|u|^{p})(t)&\geq\frac{1}{t}\int_{0}^{t}g_{\alpha}(s)\|u\|_{L^{p}_{s}}^{p}\,\mathrm{ds}\\ &\geq\frac{1}{T}\int_{0}^{t}g_{\alpha}(s)\|u\|_{L^{p}_{s}}^{p}\,\mathrm{ds}.\end{split} \tag{3.11}\] Similarly, if we take the convolution instead of the integration of the inequality (3.10), we obtain \[\begin{split}g_{\alpha+1}(T)(g_{\alpha}*|u|^{p})(t)&\geq g_{\alpha+1}(t)(g_{\alpha}*|u|^{p})(t)\\ &\geq\big{(}g_{\alpha}*(g_{\alpha}\cdot\|u\|_{L^{p}_{s}}^{p})\big{)}(t).\end{split} \tag{3.12}\] In the previous section, we have also introduced the fractional derivatives in the sense of Riemann-Liouville \(D^{\alpha}_{t}u=\partial_{t}(g_{1-\alpha}*u)\) and Caputo \(\partial^{\alpha}_{t}u=D^{\alpha}_{t}(u-u_{0})\). It is well-known that the Caputo derivative can also be written as \(\partial^{\alpha}_{t}u=g_{1-\alpha}*\partial_{t}u\) if \(u\) is absolutely continuous, see [11, Lemma 3.5]. We note that it does not hold \(\partial^{\alpha}_{t}\partial^{\beta}_{t}u=\partial^{\alpha+\beta}_{t}u\) in general for the Caputo derivative. However, it holds, see [11, Theorem 3.14], \[\partial^{\alpha}_{t}\partial^{1-\alpha}_{t}u=\partial_{t}u. \tag{3.13}\] We define the fractional Sobolev-Bochner space for \(\alpha\in(0,1)\) on \((0,T)\) with values in a given Hilbert space \(H\) by \[W^{\alpha,p}(0,T;H)=\big{\{}u\in L^{p}(0,T;H):\partial^{\alpha}_{t}u\in L^{p}(0,T;H)\big{\}}.\] Next, we state the inverse convolution property. Its name originates from the fact that the convolution with the kernel \(g_{\alpha}\) acts as an inverse operation on the \(\alpha\)-th fractional derivative up to the initial condition. In fact, it holds \[(g_{\alpha}*\partial^{\alpha}_{t}u)(t)=u(t)-u_{0}\qquad\forall u\in W^{\alpha,p}(0,T;H). \tag{3.14}\] This can be seen from the computation \[(g_{\alpha}*\partial^{\alpha}_{t}u)(t)=(g_{\alpha}*g_{1-\alpha}*\partial_{t}u)(t)=(1*\partial_{t}u)(t)=\int_{0}^{t}\partial_{t}u(s)\,\mathrm{ds}=u(t)-u_{0},\] where we used (3.7) to conclude \(g_{\alpha}*g_{1-\alpha}=g_{1}=1\).
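The kernel identities above are straightforward to sanity-check numerically. The following Python sketch approximates the convolution \((g_{\alpha}*g_{\beta})(t)\) with a midpoint rule (whose nodes avoid the integrable endpoint singularities) and compares the result with \(g_{\alpha+\beta}(t)\) from the semigroup property (3.7); the values of \(\alpha\), \(\beta\), \(t\) and the number of grid points are arbitrary illustrative choices.

```python
import numpy as np
from math import gamma

def g(alpha, t):
    """Singular kernel g_alpha(t) = t**(alpha - 1) / Gamma(alpha)."""
    return t**(alpha - 1) / gamma(alpha)

def convolve_g(alpha, beta, t, n=200_000):
    """Midpoint-rule approximation of (g_alpha * g_beta)(t) = int_0^t g_alpha(t-s) g_beta(s) ds."""
    h = t / n
    s = (np.arange(n) + 0.5) * h       # midpoints, strictly inside (0, t)
    return h * np.sum(g(alpha, t - s) * g(beta, s))

alpha, beta, t = 0.3, 0.4, 1.5
lhs = convolve_g(alpha, beta, t)
rhs = g(alpha + beta, t)               # semigroup property (3.7): g_alpha * g_beta = g_{alpha+beta}
# Agreement to about a percent; the midpoint rule converges slowly at the endpoint singularities.
print(f"(g_a * g_b)(t) ~ {lhs:.4f}    g_(a+b)(t) = {rhs:.4f}")
```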
Furthermore, we mention the following consequences of the interaction between fractional derivatives and kernel functions: \[\partial_{t}^{\alpha}(g_{\alpha}*u)=D_{t}^{\alpha}(g_{\alpha}*u)=\partial_{t}(g _{1-\alpha}*g_{\alpha}*u)=\partial_{t}(1*u)=u, \tag{3.15}\] which holds for any \(u\in L^{1}(0,T;H)\). As in the integer-order setting, there are continuous and compact embedding results for fractional Sobolev spaces; see [47, Theorem 3.2]. For a given Gelfand triple \(V\hookrightarrow\hookrightarrow H\hookrightarrow V^{\prime}\), the classical Aubin-Lions lemma [40] reads \[\begin{split}& W^{1,1}(0,T;V^{\prime})\cap L^{\infty}(0,T;V) \hookrightarrow\hookrightarrow C([0,T];H),\\ & W^{1,1}(0,T;V^{\prime})\cap L^{p}(0,T;V)\hookrightarrow\hookrightarrow L ^{p}(0,T;H),\quad p\in[1,\infty),\end{split} \tag{3.16}\] and the fractional counterparts is as follows: \[\begin{split}& W^{\alpha,p}(0,T;V^{\prime})\cap L^{p^{\prime}}(0,T;V )\hookrightarrow C([0,T];H),\quad\quad p\in[1,\infty),\\ & W^{\alpha,p}(0,T;V^{\prime})\cap L^{p}(0,T;V)\hookrightarrow \hookrightarrow L^{p}(0,T;H),\quad p\in[1,\infty).\end{split} \tag{3.17}\] We observe that there is a give-and-take involved: The fractional derivative is of order \(\alpha<1\) i.e. it is less than the full derivative in the classical Aubin-Lions lemma. However, we require that the derivative is in the better space \(L^{p}(0,T;V^{\prime})\) instead of only \(L^{1}(0,T;V^{\prime})\) to achieve the same target space \(L^{p}(0,T;H)\) in the compactness result. The classical chain rule does not hold for fractional derivatives, but one can use the following inequality, see [44, Theorem 2.1], as a remedy: \[\frac{1}{2}\partial_{t}^{\alpha}\|u\|_{H}^{2}\leq(u,\partial_{t}^{\alpha}u)_{ H}\quad\forall u\in W^{\alpha,p}(0,T;H), \tag{3.18}\] for almost all \(t\in(0,T)\), which is also known as Alikhanov's inequality, see the original work [1]. Moreover, we conclude from (3.13) and Alikhanov's inequality the following: \[(\partial_{t}u,\partial_{t}^{\alpha}u)_{H}=(\partial_{t}^{1-\alpha}\partial_{ t}^{\alpha}u,\partial_{t}^{\alpha}u)_{H}\geq\frac{1}{2}\partial_{t}^{1-\alpha}\| \partial_{t}^{\alpha}u\|_{H}^{2}, \tag{3.19}\] which gives after integrating it over the time interval \((0,t)\) \[\int_{0}^{t}(\partial_{t}u,\partial_{t}^{\alpha}u)_{H}\,\mathrm{ds}\geq\frac{ 1}{2}(g_{\alpha}*\|\partial_{t}^{\alpha}u\|_{H}^{2})(t)\geq\frac{1}{2\Gamma( \alpha)T^{1-\alpha}}\|\partial_{t}^{\alpha}u\|_{L^{2}_{t}H}^{2},\] where we applied (3.8) in the last step. Next, we require a Gronwall-type inequality that allows convolutions on the right-hand side of the inequality. Moreover, we want to have an additional function on the right-hand side that is only locally integrable. Such inequalities are known as Henry-Gronwall inequalities. **Lemma 3.1** (Henry-Gronwall, cf. [19, Lemma 7.1.1]).: _Let \(b\geq 0\), \(\beta>0\), \(a\in L^{1}_{\mathrm{loc}}(0,T;\mathbb{R}_{\geq 0})\). If \(u\in L^{1}_{\mathrm{loc}}(0,T;\mathbb{R}_{\geq 0})\) satisfies_ \[u(t)\leq a(t)+b(g_{\beta}*u)(t),\quad\text{ for a.e. }t\in(0,T),\] _then it yields_ \[u(t)\leq C(\alpha,b,T)\cdot\big{(}(g_{0}+E)*a\big{)}(t),\quad\text{ for a.e. }t\in(0,T),\] _where \(E\) is related to the Mittag-Leffler function._ We prove the following extension of the Henry-Gronwall inequality that allows an additional term on the left-hand side. **Lemma 3.2**.: _Let \(b\geq 0\), \(b>0\), \(a\in L^{1}_{\rm loc}(0,T;\mathbb{R}_{\geq 0})\). 
If the functions \(u,v\in L^{1}_{\rm loc}(0,T;\mathbb{R}_{\geq 0})\) satisfy the inequality_ \[u(t)+(g_{\alpha}*v)(t)\leq a(t)+b(g_{\alpha}*u)(t)\qquad\text{for a.a. }t\in(0,T],\] _then it yields_ \[u(t)+\int_{0}^{t}v(s)\,\mathrm{d}s\leq C(\alpha,b,T)\cdot\big{(}(g_{0}+E)*a\big{)}(t)\qquad\text{for a.a. }t\in(0,T].\] Proof.: We define the function \(w=u+g_{\alpha}*v\). Since \(g_{\alpha}*v\) is again nonnegative, we obtain \[w(t)\leq a(t)+b(g_{\alpha}*u)(t)\leq a(t)+b(g_{\alpha}*w)(t),\] and the Henry-Gronwall inequality (Lemma 3.1) yields \[w(t)\leq C(\alpha,b,T)\cdot\big{(}(g_{0}+E)*a\big{)}(t).\] Moreover, we can use (3.8) to estimate the integral of \(v\) by \(g_{\alpha}*v\), and we obtain the lemma's desired bound. ## 4. Well-posedness of weak solutions In this section, we state and prove the well-posedness of weak solutions to the time-fractional Fokker-Planck equation (4.20). As we already mentioned, we equip the equation with a homogeneous Dirichlet boundary condition. As noted before, our analysis holds for no-flux boundary conditions as well. We analyze the PDE in the Hilbert triple \[H^{1}_{0}(\Omega)\hookrightarrow\hookrightarrow L^{2}(\Omega)\hookrightarrow H^{-1}(\Omega).\] We equip \(H^{1}_{0}(\Omega)\) with the norm \(\|\cdot\|_{H^{1}_{0}}=\|\nabla\cdot\|_{L^{2}(\Omega)}\), which is equivalent to the natural norm on \(H^{1}(\Omega)\) due to Poincare's inequality [3, 6.7]. We use the Galerkin method and discretize the partial differential equations in space. Further, we derive suitable energy estimates, and we emphasize the places where the time-fractional derivative comes into play. We shall then pass to the limit to deduce the existence of a weak solution. The uniqueness is obtained as usual. First off, however, we introduce the concept of a weak solution to the time-fractional Fokker-Planck equation in the following definition. **Definition 4.1**.: We call a function \(\psi:\Omega_{T}\to\mathbb{R}\) a weak solution to the time-fractional Fokker-Planck equation (4.20) if it is of the regularity \[\psi\in W^{1,1}(0,T;H^{-1}(\Omega))\cap H^{1-\alpha}(0,T;H^{1}_{0}(\Omega)),\] satisfies the initial condition \(\psi(0)=\psi_{0}\) in \(H^{-1}(\Omega)\), and fulfills the following variational form: \[\begin{split}&\langle\partial_{t}\psi,\zeta\rangle_{H^{1}_{0}}+D(\partial_{t}^{1-\alpha}\nabla\psi,\nabla\zeta)_{L^{2}}-(F\partial_{t}^{1-\alpha}\psi,\nabla\zeta)_{L^{2}}\\ &=\langle f,\zeta\rangle_{H^{1}_{0}}-g_{\alpha}(D\nabla\psi_{0},\nabla\zeta)_{L^{2}}+g_{\alpha}\cdot(F\psi_{0},\nabla\zeta)_{L^{2}}\qquad\forall\zeta\in H^{1}_{0}(\Omega).\end{split} \tag{4.20}\] As we see, we expect a solution that is continuous in time with values in the Hilbert space \(H^{-1}(\Omega)\). Therefore, it is well-defined to require the initial condition \(\psi(0)=\psi_{0}\) in \(H^{-1}(\Omega)\). Moreover, it holds \[H^{1-\alpha}(0,T;H^{1}_{0}(\Omega))\hookrightarrow C([0,T];H^{1}_{0}(\Omega)),\] if \(1-\alpha>1/2\), i.e. \(\alpha<\frac{1}{2}\). In this case, the initial condition is even satisfied in \(H^{1}_{0}(\Omega)\). Next, we state the main result of this work on the well-posedness of weak solutions to the time-fractional Fokker-Planck equation (4.20).
**Theorem 4.2** (Well-posedness of weak solutions).: _Let us assume:_ * \(\Omega\subseteq\mathbb{R}^{d}\)_,_ \(d\in\mathbb{N}\)_, bounded Lipschitz domain,_ \(T<\infty\) _fixed final time,_ * \(\alpha\in(0,1)\)_,_ * \(\psi_{0}\in H^{1}_{0}(\Omega)\)_,_ * \(f\in L^{2}_{0}(0,T;H^{-1}(\Omega))\)_,_ * \(F\in L^{\infty}(\Omega_{T};\mathbb{R}^{d})\) _with_ \(\|F\|_{L^{\infty}(\Omega_{T})}\leq F_{\infty}<\infty\)_._ _Then there exists a unique weak solution \(\psi\) to the time-fractional Fokker-Planck equation (4.20) in the sense of Definition 4.1. Further, it has the additional regularity_ \[\psi\in W^{1,r^{\prime}}(0,T;H^{-1}(\Omega))\cap W^{1-\alpha,p}(0,T;L^{2}( \Omega))\cap H^{1-\alpha}(0,T;H^{1}_{0}(\Omega)),\] _with \(r^{\prime}\) being the Holder conjugate of \(r=\max\{q^{\prime},2\}\), \(q^{\prime}\) being the Holder conjugate of \(q=\frac{1}{1-\alpha}-\varepsilon\) for \(\varepsilon\in(0,\frac{\alpha}{1-\alpha}]\), and_ \[\begin{cases}p=\infty,&\alpha>\frac{1}{2},\\ p<\infty,&\alpha=\frac{1}{2},\\ p<\frac{2}{1-2\alpha},&\alpha<\frac{1}{2}.\end{cases}\] We comment on the assumptions in this well-posedness result. We see it as an advantage that we can show the equation's well-posedness for all fractional values between \(0\) and \(1\), and any dimension \(d\geq 1\). Further, we only require \(F\in L^{\infty}(\Omega_{T};\mathbb{R}^{d})\) as opposed to [24] that required \(F\in W^{2,\infty}(\Omega_{T})\) for showing results on mild solutions. Moreover, the work [29] studied a Volterra integral form of a class of time-fractional advection-diffusion-reaction equations, including the time-fractional Fokker-Planck equations. However, they required \(F\in C^{2}([0,T];W^{1,\infty}(\Omega)^{d})\) to show the well-posedness of the Volterra integral equation. As the solution lies in the space \(W^{1-\alpha,\infty}(0,T;L^{2}(\Omega))\) for \(\alpha>\frac{1}{2}\), we obtain \(\psi\in C([0,T];L^{2}(\Omega))\) for \(\alpha>\frac{1}{2}\). It remains to consider \(\alpha=\frac{1}{2}\). In this case, we have \(q=2-\varepsilon\) and \(r=\max\{1-1/(2-\varepsilon),2\}=2\). Therefore, it holds \(\psi\in H^{1}(0,T;H^{-1}(\Omega))\cap L^{2}(0,T;H^{1}_{0}(\Omega))\), i.e., by an interpolation result \(\psi\in C([0,T];L^{2}(\Omega))\). We summarize the continuity results as follows: \[\psi\in\begin{cases}C([0,T];L^{2}(\Omega)),&\alpha\in(0,1),\\ C([0,T];H^{1}_{0}(\Omega)),&\alpha\in(0,\frac{1}{2}).\end{cases}\] We conclude that the initial is indeed at least satisfied in \(L^{2}(\Omega)\). **Proof** In order to prove this theorem, we employ the Galerkin method to discretize the variational form in space. This reduces the time-fractional PDE to a system of fractional ODEs, which admits a discretized solution \(\psi_{k}\). We then derive \(k\)-uniform energy estimates, which imply the existence of weakly/weakly-\(*\) convergent subsequence \(\psi_{k_{j}}\). Finally, we pass to the limit \(j\to\infty\) and apply compactness methods to return to the variational form of the continuous system. Recently, the Galerkin method has been applied to various time-fractional PDEs, see, e.g., [13, 14, 15, 45]. 
**(1) Galerkin discretization.** We introduce the discrete spaces \[H_{k}=\text{span}\{y_{1},\dots,y_{k}\},\] where \(y_{j}:\Omega\to\mathbb{R}\), \(j\in\{1,\ldots,k\}\), are the eigenfunctions corresponding to the eigenvalues \(\lambda_{j}\in\mathbb{R}\) of the following problem \[(\nabla y_{j},\nabla v)_{L^{2}}=\lambda_{j}(y_{j},v)_{L^{2}}\quad\forall v\in H_{0}^{1}(\Omega).\] Since the inverse Dirichlet-Laplace operator is a compact, self-adjoint, injective, positive operator on \(L^{2}(\Omega)\), we conclude by the spectral theorem, see e.g., [3, 12.12 and 12.13], that \[\{y_{j}\}_{j\in\mathbb{N}}\text{ is an orthonormal basis in }L^{2}(\Omega)\text{ and orthogonal in }H_{0}^{1}(\Omega).\] Therefore, \(\cup_{k\in\mathbb{N}}H_{k}\) is dense in \(H_{0}^{1}(\Omega)\). We consider the Galerkin approximations \[\psi_{k}(t)=\sum_{j=1}^{k}\psi_{k}^{j}(t)y_{j}, \tag{4.21}\] where \(\psi_{k}^{j}:(0,T)\to\mathbb{R}\) are coefficient functions for all \(j\in\{1,\ldots,k\}\). We denote the orthogonal projections onto the finite-dimensional space by \(\Pi_{H_{k}}:L^{2}(\Omega)\to H_{k}\). Given the initial data \(\psi_{0}\) from the continuous system, we choose \(\psi_{0k}\in H_{k}\) such that \(\psi_{0k}=\Pi_{H_{k}}\psi_{0}\), i.e., there are coefficients \(\{\psi_{0k}^{j}\}_{j=1}^{k}\) such that \(\psi_{0k}=\sum_{j=1}^{k}\psi_{0k}^{j}y_{j}\). Moreover, due to well-known properties of the projection operator, see [3, 9.7], it holds as \(k\to\infty\) \[\|\psi_{0k}\|_{X}\leq\|\psi_{0}\|_{X}\ \text{ and }\ \psi_{0k}\to\psi_{0}\ \text{ in }\ X\in\{H^{-1}(\Omega),L^{2}(\Omega),H_{0}^{1}(\Omega)\}. \tag{4.22}\] The Galerkin equations read as follows: We want to find \(\psi_{k}\in H_{k}\) such that \(\psi_{k}(0)=\psi_{0k}\) and \[\begin{split}&(\partial_{t}\psi_{k},\zeta)_{L^{2}}+D(\partial_{t}^{1-\alpha}\nabla\psi_{k},\nabla\zeta)_{L^{2}}-(F\partial_{t}^{1-\alpha}\psi_{k},\nabla\zeta)_{L^{2}}\\ &=\langle f,\zeta\rangle_{H_{0}^{1}}-g_{\alpha}(D\nabla\psi_{0k},\nabla\zeta)_{L^{2}}+g_{\alpha}\cdot(F\psi_{0k},\nabla\zeta)_{L^{2}}\end{split} \tag{4.23}\] for all \(\zeta\in H_{k}\). We want to apply an existence result on ODEs with Riemann-Liouville derivatives and therefore, we rewrite the Galerkin system as follows: \[(\partial_{t}\psi_{k},\zeta)_{L^{2}}+D(D_{t}^{1-\alpha}\nabla\psi_{k},\nabla\zeta)_{L^{2}}-(FD_{t}^{1-\alpha}\psi_{k},\nabla\zeta)_{L^{2}}=\langle f,\zeta\rangle_{H_{0}^{1}},\] for any \(\zeta\in H_{k}\). We expand \(\psi_{k}\) in the basis functions with coefficients \(\{\psi_{k}^{j}\}_{j=1}^{k}\) as introduced in (4.21), from which we obtain that the coefficients are governed by the system \[\partial_{t}\psi_{k}^{i}+\lambda_{i}DD_{t}^{1-\alpha}\psi_{k}^{i}-\sum_{j=1}^{k}D_{t}^{1-\alpha}\psi_{k}^{j}(Fy_{j},\nabla y_{i})_{L^{2}}=\langle f,y_{i}\rangle_{H^{1}}, \tag{4.24}\] for any \(i\in\{1,\ldots,k\}\). Equivalently, we define the function \(\phi_{k}^{i}=D_{t}^{1-\alpha}\psi_{k}^{i}\) for any \(i\), which is governed by the equation \[D_{t}^{\alpha}\phi_{k}^{i}+\lambda_{i}D\phi_{k}^{i}-\sum_{j=1}^{k}\phi_{k}^{j}(Fy_{j},\nabla y_{i})_{L^{2}}=\langle f,y_{i}\rangle_{H^{1}}, \tag{4.25}\] for any \(i\in\{1,\ldots,k\}\). We notice that it holds \(g_{1-\alpha}\ast\phi_{k}^{i}=\psi_{k}^{i}\) due to the inverse convolution property (3.14) and we observe that (4.25) is naturally equipped with the initial condition \((g_{1-\alpha}\ast\phi_{k}^{i})(0)=\psi_{0k}^{i}\). We denote the vector of components \(\left(\phi_{k}^{j}(t)\right)_{1\leq j\leq k}\) by \(\Phi(t)\).
Then the approximate problem can be written as a system of ordinary differential equations for \(\Phi(t)\) of the form \(D_{t}^{\alpha}\Phi=h(t,\Phi)\), where \(h\) is continuous and locally Lipschitz continuous with respect to \(\Phi\). Therefore, the fractional variant of the Cauchy-Lipschitz theorem, see [11, Theorem 5.1], yields the existence of a unique continuous solution, defined on a short-time interval \([0,T_{k}]\) with \(0<T_{k}\leq T\). From here, we conclude \(\phi_{k}+g_{\alpha}\psi_{0k}=\partial_{t}^{1-\alpha}\psi_{k}\in C((0,T_{k}];H_{k})\) and \(g_{\alpha}*\psi_{k}\in C^{1}((0,T_{k}];H_{k})\). **(2) Energy estimates: Part 1.** Next, we derive \(k\)-uniform estimates that will allow us to extract weakly converging subsequences. We test the Galerkin equation (4.23) by \(g_{\alpha}*\partial_{t}\psi_{k}=\partial_{t}^{1-\alpha}\psi_{k}\in H_{k}\) giving \[\begin{split}&(\partial_{t}\psi_{k},\partial_{t}^{1-\alpha} \psi_{k})_{L^{2}}+D\|\nabla\partial_{t}^{1-\alpha}\psi_{k}\|_{L^{2}}^{2}-(F\cdot \nabla\partial_{t}^{1-\alpha}\psi_{k},\partial_{t}^{1-\alpha}\psi_{k})_{L^{2 }}\\ &=\langle f,\partial_{t}^{1-\alpha}\psi_{k}\rangle_{H_{0}^{1}}-g _{\alpha}(D\nabla\psi_{0k}-F\psi_{0k},\nabla\partial_{t}^{1-\alpha}\psi_{k}) _{L^{2}}\end{split} \tag{4.26}\] For the first term on the left-hand side, we use \(\partial_{t}\psi_{k}=\partial_{t}^{\alpha}\partial_{t}^{1-\alpha}\psi_{k}\), see (3.13), to conclude with Alikhanov's inequality, see (3.19), \[\begin{split}(\partial_{t}\psi_{k},\partial_{t}^{1-\alpha}\psi_{ k})_{L^{2}}&=(\partial_{t}^{\alpha}\partial_{t}^{1-\alpha}\psi_{k}, \partial_{t}^{1-\alpha}\psi_{k})_{L^{2}}\\ &\geq\frac{1}{2}\partial_{t}^{\alpha}\|\partial_{t}^{1-\alpha} \psi_{k}\|_{L^{2}}^{2}.\end{split} \tag{4.27}\] We bring the term involving the force \(F\) to the right-hand side of (4.26) and apply the Holder inequality to conclude \[(F\nabla\partial_{t}^{1-\alpha}\psi_{k},\partial_{t}^{1-\alpha}\psi_{k})_{L^ {2}}\leq F_{\infty}\|\nabla\partial_{t}^{1-\alpha}\psi_{k}\|_{L^{2}}\| \partial_{t}^{1-\alpha}\psi_{k}\|_{L^{2}},\] where \(F_{\infty}<\infty\) is the constant as introduced in the theorem's assumptions. Further, we apply the Young inequality to give the norm of \(\nabla\partial_{t}^{1-\alpha}\psi_{k}\) a prefactor that is smaller than \(D\), i.e., we obtain \[F_{\infty}\|\nabla\partial_{t}^{1-\alpha}\psi_{k}\|_{L^{2}}\|\partial_{t}^{1 -\alpha}\psi_{k}\|_{L^{2}}\leq\frac{D}{4}\|\nabla\partial_{t}^{1-\alpha}\psi_ {k}\|_{L^{2}}^{2}+\frac{F_{\infty}^{2}}{D}\|\partial_{t}^{1-\alpha}\psi_{k}\|_ {L^{2}}^{2}. \tag{4.28}\] Using again a combination of the Holder and \(\varepsilon\)-Young inequalities, we estimate the term on the right-hand side of the tested equation (4.26) with the initials \(\psi_{0k}\) by \[\begin{split}& g_{\alpha}(F\psi_{0k}-D\nabla\psi_{0k},\nabla \partial_{t}^{1-\alpha}\psi_{k})_{L^{2}}\\ &\leq\frac{g_{\alpha}}{2\varepsilon}(F_{\infty}^{2}\|\psi_{0k}\|_ {L^{2}}^{2}+D^{2}\|\nabla\psi_{0k}\|_{L^{2}}^{2})+\varepsilon_{1}g_{\alpha}\| \nabla\partial_{t}^{1-\alpha}\psi_{k}\|_{L^{2}}^{2},\end{split}\] where \(\varepsilon_{1}>0\) is a constant that we will determine accordingly below. We are not interested in tracking the constants \(D\) and \(F_{\infty}\) and therefore, we include them in a generic constant \(C\) that may change from line to line. Moreover, we can estimate the norm of \(\psi_{0k}\) by \(\psi_{0}\) due to the projection property (4.22). 
Consequently, we obtain the estimate \[g_{\alpha}(F\psi_{0}-D\nabla\psi_{0},\nabla\partial_{t}^{1-\alpha}\psi_{k})_{L ^{2}}\leq Cg_{\alpha}(t)\|\psi_{0}\|_{H_{0}^{1}}^{2}+\varepsilon g_{\alpha}\| \nabla\partial_{t}^{1-\alpha}\psi_{k}\|_{L^{2}}^{2} \tag{4.29}\] Lastly, we estimate the external force \(f\) by \[\begin{split}\langle f,\partial_{t}^{1-\alpha}\psi_{k}\rangle_{H_{ 0}^{1}}&\leq\|f\|_{H^{-1}}\|\partial_{t}^{1-\alpha}\psi_{k}\|_{H_{ 0}^{1}}\\ &\leq C\|f\|_{H^{-1}}^{2}+\frac{D}{4}\|\partial_{t}^{1-\alpha} \nabla\psi_{k}\|_{L^{2}}^{2}.\end{split} \tag{4.30}\] Hence, we insert the estimates (4.27)-(4.30) in the tested equation (4.26) to obtain the inequality \[\frac{1}{2}\partial_{t}^{\alpha}\|\partial_{t}^{1-\alpha}\psi_{k} \|_{L^{2}}^{2}+D\|\nabla\partial_{t}^{1-\alpha}\psi_{k}\|_{L^{2}}^{2}\] \[\leq F_{\infty}\|\partial_{t}^{1-\alpha}\psi_{k}\|_{L^{2}}^{2}+C \|f\|_{H^{-1}}^{2}+\frac{D}{2}\|\nabla\partial_{t}^{1-\alpha}\psi_{k}\|_{L^{2}} ^{2}\] \[\quad+\varepsilon_{1}g_{\alpha}(t)\|\nabla\partial_{t}^{1-\alpha }\psi_{k}\|_{L^{2}}^{2}+Cg_{\alpha}(t)\|\psi_{0}\|_{H^{1}_{0}}^{2},\] and we absorb the terms involving \(D\) on the right-hand side by the respective term on the left-hand side, giving \[\frac{1}{2}\partial_{t}^{\alpha}\|\partial_{t}^{1-\alpha}\psi_{k }\|_{L^{2}}^{2}+\frac{D}{2}\|\nabla\partial_{t}^{1-\alpha}\psi_{k}\|_{L^{2}}^ {2}\] \[\leq F_{\infty}\|\partial_{t}^{1-\alpha}\psi_{k}\|_{L^{2}}^{2}+C \|f\|_{H^{-1}}^{2}+\varepsilon_{1}g_{\alpha}(t)\|\nabla\partial_{t}^{1-\alpha }\psi_{k}\|_{L^{2}}^{2}+Cg_{\alpha}(t)\|\psi_{0}\|_{H^{1}_{0}}^{2}.\] We convolve this inequality with the kernel function \(g_{\alpha}\) to conclude \[\frac{1}{2}\|\partial_{t}^{1-\alpha}\psi_{k}(t)\|_{L^{2}}^{2}+ \frac{D}{2}(g_{\alpha}*\|\nabla\partial_{t}^{1-\alpha}\psi_{k}\|_{L^{2}}^{2}) (t) \tag{4.31}\] \[\leq F_{\infty}^{2}(g_{\alpha}*\|\partial_{t}^{1-\alpha}\psi_{k} \|_{L^{2}}^{2})(t)+C(g_{\alpha}*\|f\|_{H^{-1}}^{2})(t)\] \[\quad+\varepsilon_{1}(g_{\alpha}*(g_{\alpha}\cdot\|\nabla \partial_{t}^{1-\alpha}\psi_{k}\|_{L^{2}}^{2}))(t)+Cg_{2\alpha}(t)\|\psi_{0}\| _{H^{1}_{0}}^{2}.\] where we used that \(g_{\alpha}*g_{\alpha}=g_{2\alpha}\), see (3.7), and \[\big{(}g_{\alpha}*(\partial_{t}^{\alpha}\|\partial_{t}^{1-\alpha }\psi_{k}\|_{L^{2}}^{2})\big{)}(t) =\|\partial_{t}^{1-\alpha}\psi_{k}(t)\|_{L^{2}}^{2}-\|(g_{\alpha }*\partial_{t}\psi_{k})(0)\|_{L^{2}}^{2}\] \[=\|\partial_{t}^{1-\alpha}\psi_{k}(t)\|_{L^{2}}^{2},\] see (3.14). Now, we observe that the term \((g_{\alpha}*\|f\|_{H^{-1}}^{2})(t)\), \(t\in(0,T_{k})\), can be bounded by \[(g_{\alpha}*\|f\|_{H^{-1}}^{2})(t)\leq\sup_{t\in(0,T)}(g_{\alpha}*\|f\|_{H^{- 1}}^{2})(t)=:\|f\|_{L^{2}_{\alpha}H^{-1}},\] see again (3.9) for the definition of the space \(L^{2}_{\alpha}(0,T)\). Further, we use (3.12) to absorb the term involving \(\varepsilon_{1}\) on the right-hand side of the inequality (4.31). 
In fact, we absorb it by the term \(\frac{D}{2}(g_{\alpha}*\|\nabla\partial_{t}^{1-\alpha}\psi_{k}\|_{L^{2}}^{2}) (t)\) on the left-hand side by noting that \[\varepsilon_{1}(g_{\alpha}*(g_{\alpha}\|\nabla\psi_{k}\|_{L^{2}L^{2}}^{2}))(t )\leq\varepsilon_{1}g_{\alpha+1}(T)(g_{\alpha}*\|\nabla\psi_{k}\|_{L^{2}}^{2}) (t).\] We choose \(\varepsilon_{1}=\frac{D}{4g_{\alpha+1}(T)}\) to get \[\varepsilon_{1}(g_{\alpha}*(g_{\alpha}\|\nabla\psi_{k}\|_{L^{2}L^{2}}^{2}))(t )\leq\frac{D}{4}(g_{\alpha}*\|\nabla\psi_{k}\|_{L^{2}}^{2})(t),\] and consequently, we obtain from (4.31) the inequality \[\frac{1}{2}\|\partial_{t}^{1-\alpha}\psi_{k}(t)\|_{L^{2}}^{2}+ \frac{D}{4}(g_{\alpha}*\|\nabla\partial_{t}^{1-\alpha}\psi_{k}\|_{L^{2}}^{2}) (t)\] \[\leq F_{\infty}^{2}(g_{\alpha}*\|\partial_{t}^{1-\alpha}\psi_{k} \|_{L^{2}}^{2})(t)+C\|f\|_{L^{2}_{\alpha}H^{-1}}^{2}+Cg_{2\alpha}(t)\|\psi_{0} \|_{H^{1}_{0}}^{2}.\] We notice that we are in the situation of the extended Henry-Gronwall lemma, see Lemma 3.2, and we obtain the energy estimate \[\begin{split}&\frac{1}{2}\|\partial_{t}^{1-\alpha}\psi_{k}(t)\|_{L^{ 2}}^{2}+\frac{D}{4}\|\nabla\partial_{t}^{1-\alpha}\psi_{k}\|_{L^{2}_{t}L^{2}} ^{2}\\ &\leq C(F_{\infty},\alpha,T)\cdot\Big{(}(g_{0}+E)*\big{(}\|f\|_{L ^{2}_{\alpha}H^{-1}}^{2}+g_{2\alpha}\|\psi_{0}\|_{H^{1}_{0}}^{2}\big{)}\Big{)} (t)\\ &=:\operatorname{RHS}_{(\ref{eq:energy_estimate_energy_energy_2})}(t) \end{split} \tag{4.32}\] The estimate on the right-hand side is independent of \(T_{k}\) and we infer from the no-blow-up theorem that we can continue the maximal time to \(T\). However, since the right-hand side is "only" continuous in \(t\) on \((0,T]\) and not at \(t=0\) because of the presence of the term \(g_{2\alpha}\), we are not able to take the essential supremum of the inequality (4.32) over \(t\in(0,T)\). Therefore, we can obtain no bound of \(\partial_{t}^{1-\alpha}\psi_{k}\) in \(L^{\infty}\)-in-time. Nonetheless, \(\partial_{t}^{1-\alpha}\psi_{k}\) is bounded in \(L^{2}(0,T;H^{1}_{0}(\Omega))\) by inserting \(t=T\) into the inequality (4.32). Moreover, we notice that \(\|\partial_{t}^{1-\alpha}\psi_{k}(t)\|_{L^{2}(\Omega)}\) is bounded by the leading term \(\sqrt{g_{2\alpha}(t)}=t^{\alpha-1/2}=g_{\alpha+1/2}\), which is continuous at \(t=0\) for \(\alpha>1/2\) and in \(L^{p}(0,T)\) for \(\alpha+\frac{1}{2}>1-\frac{1}{p}\), which is equivalent to \(p<\frac{2}{1-2\alpha}\). Therefore, \(\partial_{t}^{1-\alpha}\psi_{k}\) is bounded in the space \(L^{p}(0,T;L^{2}(\Omega))\) with \[\begin{cases}p<\frac{2}{1-2\alpha},&\alpha<\frac{1}{2},\\ p<\infty,&\alpha=\frac{1}{2},\\ p=\infty,&\alpha>\frac{1}{2}.\end{cases} \tag{4.33}\] By the Eberlein-Smulian and Banach-Alaoglu theorems, see [3, 8.7], these bounds yield the existence of a weakly converging subsequence \(\partial_{t}^{1-\alpha}\psi_{k_{j}}\), i.e., it holds \[\partial_{t}^{1-\alpha}\psi_{k_{j}}\longrightarrow\zeta\quad\text{ in }L^{p}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{1}_{0}(\Omega)), \tag{4.34}\] as \(j\to\infty\). We still need to figure out the representation of \(\zeta\). If we are able to bound \(\psi_{k}\) again in the Bochner space \(L^{2}(0,T;H^{1}_{0}(\Omega))\), then we would obtain \(\psi_{k}\rightharpoonup\psi\) for some limit function \(\psi\), from which we can infer \(\zeta=\partial_{t}^{1-\alpha}\psi\). We want to mention that we could also have obtained a bound of \(\partial_{t}^{1-\alpha}\psi_{k}\) in the space \(L^{2}_{\alpha}(0,T;H^{1}_{0}(\Omega))\). 
However, this space is not known to be reflexive and therefore, we cannot apply the Banach-Alaoglu theorem to infer a limit function in this space. **(3) Energy estimates: Part 2.** In order to obtain the desired bound of \(\psi_{k}\), we test the Galerkin equation (4.23) with \(\psi_{k}\), which yields \[\begin{split}&\frac{1}{2}\frac{\mathrm{d}}{\mathrm{dt}}\|\psi_{k}\|_{L ^{2}}^{2}+D(\partial_{t}^{1-\alpha}\nabla\psi_{k},\nabla\psi_{k})_{L^{2}}\\ &=(F\cdot\nabla\partial_{t}^{1-\alpha}\psi_{k},\psi_{k})_{L^{2}} +\langle f,\psi_{k}\rangle_{H^{1}_{0}}-g_{\alpha}(D\nabla\psi_{0k}-F\psi_{0k}, \nabla\psi_{k})_{L^{2}}.\end{split} \tag{4.35}\] For the term on the left-hand side involving the diffusion \(D\), we apply Alikhanov's inequality (3.18) to infer \[D(\partial_{t}^{1-\alpha}\nabla\psi_{k},\nabla\psi_{k})_{L^{2}}\geq\frac{D}{2 }\partial_{t}^{1-\alpha}\|\nabla\psi_{k}\|_{L^{2}}^{2}.\] Moreover, we apply again the Holder inequality on the right-hand side of (4.35) to obtain the energy estimate \[\begin{split}&\frac{1}{2}\frac{\mathrm{d}}{\mathrm{dt}}\|\psi_{k}\|_{L ^{2}}^{2}+\frac{D}{2}\partial_{t}^{1-\alpha}\|\nabla\psi_{k}\|_{L^{2}}^{2}\\ &\leq F_{\infty}\|\nabla\partial_{t}^{1-\alpha}\psi_{k}\|_{L^{2}} \|\psi_{k}\|_{L^{2}}+\|f\|_{H^{-1}}\|\nabla\psi_{k}\|_{L^{2}}\\ &\quad+g_{\alpha}(t)\|\nabla\psi_{k}\|_{L^{2}}\big{(}D\|\nabla \psi_{0}\|_{L^{2}}+F_{\infty}\|\psi_{0}\|_{L^{2}}\big{)},\end{split}\] where we used the boundedness of the projection operator, see (4.22). Again, with the Young inequality, we obtain \[\begin{split}&\frac{1}{2}\frac{\mathrm{d}}{\mathrm{dt}}\|\psi_{k}\|_{ L^{2}}^{2}+\frac{D}{2}\partial_{t}^{1-\alpha}\|\nabla\psi_{k}\|_{L^{2}}^{2}\\ &\leq CF_{\infty}^{2}\|\nabla\partial_{t}^{1-\alpha}\psi_{k}\|_{ L^{2}}^{2}+\frac{\varepsilon_{2}}{2}\|\nabla\psi_{k}\|_{L^{2}}+C\|f\|_{H^{-1}}^{2}+ \frac{\varepsilon_{2}}{2}\|\nabla\psi_{k}\|_{L^{2}}^{2}\\ &\quad+\varepsilon_{3}g_{\alpha}(t)\|\nabla\psi_{k}\|_{L^{2}}^{2 }+Cg_{\alpha}(t)\|\psi_{0}\|_{H^{1}_{0}}^{2},\end{split}\] for some \(\varepsilon_{2},\varepsilon_{3}>0\) that we determine below. After integrating this inequality over the time interval \((0,t)\), \(t\leq T\), it yields \[\begin{split}&\frac{1}{2}\|\psi_{k}(t)\|_{L^{2}}^{2}+\frac{D}{2}(g_ {\alpha}*\|\nabla\psi_{k}\|_{L^{2}}^{2})(t)\\ &\leq\frac{1}{2}\|\psi_{0k}\|_{L^{2}}^{2}+CF_{\infty}^{2}\|\nabla \partial_{t}^{1-\alpha}\psi_{k}\|_{L^{2}_{t}L^{2}}^{2}+C\|f\|_{L^{2}H^{-1}}^{2 }+\varepsilon_{2}\|\nabla\psi_{k}\|_{L^{2}_{t}L^{2}}^{2}\\ &\quad+\varepsilon_{3}\int_{0}^{t}g_{\alpha}(s)\|\nabla\psi_{k}( s)\|_{L^{2}}^{2}\,\mathrm{ds}+Cg_{\alpha+1}(T)\|\psi_{0}\|_{H^{1}_{0}}^{2},\end{split} \tag{4.36}\] where we used that \(g_{\alpha}*1=g_{\alpha+1}\), see (3.7), which is a continuous and bounded function on \([0,T]\) for any \(\alpha>0\). We use the energy estimate (4.32) from before to infer \[\|\nabla\partial_{t}^{1-\alpha}\psi_{k}\|_{L^{2}_{t}L^{2}}^{2}\leq\|\nabla \partial_{t}^{1-\alpha}\psi_{k}\|_{L^{2}L^{2}}^{2}\leq\mathrm{RHS}_{\eqref{ eq:energy_1}}(T). \tag{4.37}\] Furthermore, we use the auxiliary result (3.8) to get \[\begin{split}\varepsilon_{2}\|\nabla\psi_{k}\|_{L^{2}_{t}L^{2}}^ {2}&\leq\frac{\varepsilon_{2}}{g_{\alpha}(T)}(g_{\alpha}*\| \nabla\psi_{k}\|_{L^{2}}^{2})(t)\\ &\leq\frac{D}{8}(g_{\alpha}*\|\nabla\psi_{k}\|_{L^{2}}^{2})(t), \end{split} \tag{4.38}\] where we have chosen \(\varepsilon_{2}=\frac{Dg_{\alpha}(T)}{8}\). 
Lastly, we use again (3.11) to infer \[\varepsilon_{3}\int_{0}^{t}g_{\alpha}(t)\|\nabla\psi_{k}\|_{L^{2}}^{2}\, \mathrm{ds}\leq\varepsilon_{3}T(g_{\alpha}*\|\nabla\psi_{k}\|_{L^{2}}^{2})(t) \leq\frac{D}{8}(g_{\alpha}*\|\nabla\psi_{k}\|_{L^{2}}^{2})(t). \tag{4.39}\] Therefore, we set \(\varepsilon_{3}=\frac{D}{8T}\) and together with the auxiliary estimates (4.37)-(4.39) we obtain from (4.36) \[\begin{split}&\frac{1}{2}\|\psi_{k}(t)\|_{L^{2}}^{2}+\frac{D}{4}(g_ {\alpha}*\|\nabla\psi_{k}\|_{L^{2}}^{2})(t)\\ &\leq C\cdot\mathrm{RHS}_{\eqref{eq:energy_1}}(T)+C\|f\|_{L^{2}H^{- 1}}^{2}+Cg_{\alpha+1}(T)\|\psi_{0}\|_{H^{1}_{0}}^{2}\\ &=:\mathrm{RHS}_{\eqref{eq:energy_2}}.\end{split} \tag{4.40}\] ### (4) Weak and strong convergences From the estimate that we derived in (4.40) we infer that that \(\psi_{k}\) is bounded in the spaces \(L^{\infty}(0,T;L^{2}(\Omega))\) and \(L^{2}(0,T;H^{1}_{0}(\Omega))\), i.e, there is a limit function \(\psi\) with \[\begin{split}&\psi_{k_{j}}\rightharpoonup\psi\quad\text{ in }L^{2}(0,T;H^{1}_{0}(\Omega)),\\ &\psi_{k_{j}}\rightharpoonup\psi\quad\text{ in }L^{\infty}(0,T;L^{2}( \Omega)),\end{split} \tag{4.41}\] as \(j\to\infty\). By linearity of the differential operators, we obtain from (4.34) \[\partial_{t}^{1-\alpha}\psi_{k_{j}}\rightharpoonup\partial_{t}^{1-\alpha}\psi \quad\text{ in }L^{p}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{1}_{0}(\Omega)), \tag{4.42}\] as \(j\to\infty\), with \(p\) as defined in (4.33). We note the compact embedding, see (3.17), \[L^{2}(0,T;H^{1}_{0}(\Omega))\cap H^{1-\alpha}(0,T;H^{1}_{0}(\Omega))\hookrightarrow \hookrightarrow L^{2}(0,T;L^{2}(\Omega)),\] from which we obtain the strong convergence \[\psi_{k_{j}}\longrightarrow\psi\qquad\text{ in }L^{2}(0,T;L^{2}(\Omega). \tag{4.43}\] The derived convergences (4.41)-(4.43) are enough to pass to the limit in the Galerkin equation (4.23). Nonetheless, we want to derive an additional estimate on \(\partial_{t}\psi_{k}\) by testing the discretized Fokker-Planck equation (4.23) with \(\Pi_{H_{k}}\zeta\) where \(\zeta\) is an arbitrary element in \(L^{r}(0,T;H_{0}^{1}(\Omega))\) with \(r\geq 1\) depending on \(\alpha\) (will be specified below). Then we obtain by the usual inequalities \[(\partial_{t}\psi_{k},\Pi_{H_{k}}\zeta)_{L^{2}L^{2}} =(F\partial_{t}^{1-\alpha}\psi_{k},\nabla\Pi_{H_{k}}\zeta)_{L^{2} L^{2}}-D(\partial_{t}^{1-\alpha}\nabla\psi_{k},\nabla\Pi_{H_{k}}\zeta)_{L^{2}L^{2}}\] \[\quad+\langle f,\Pi_{H_{k}}\zeta\rangle_{L^{2}H_{0}^{1}}-(D \nabla\psi_{0k}-F\psi_{0k},g_{\alpha}\nabla\Pi_{H_{k}}\zeta)_{L^{2}L^{2}}\] \[\leq F_{\infty}\|\partial_{t}^{1-\alpha}\psi_{k}\|_{L^{2}L^{2}}\| \nabla\zeta\|_{L^{2}L^{2}}+D\|\partial_{t}^{1-\alpha}\nabla\psi_{k}\|_{L^{2}L ^{2}}\|\nabla\zeta\|_{L^{2}L^{2}}\] \[\quad+\|f\|_{L^{2}H^{-1}}\|\zeta\|_{L^{2}H_{0}^{1}}+(F_{\infty}+D )\|g_{\alpha}\|_{L^{\prime}}\|\psi_{0}\|_{H^{1}}\|\nabla\zeta\|_{L^{\prime}L^{ 2}}\] \[\leq C\|\zeta\|_{L^{r}H_{0}^{1}},\] where \(r=\max\{q^{\prime},2\}\) and \(q^{\prime}\) is the Holder conjugate of \(q=\frac{1}{1-\alpha}-\varepsilon\) for \(\varepsilon\in(0,\frac{\alpha}{1-\alpha}]\). Therefore, \(\partial_{t}\psi_{k}\) is bounded in \(L^{r^{\prime}}(0,T;H^{-1}(\Omega))\) where \(r^{\prime}\) is the Holder conjugate of \(r\). 
We note the compact embeddings, see (3.16)-(3.17), \[L^{\infty}(0,T;L^{2}(\Omega))\cap W^{1,r^{\prime}}(0,T;H^{-1}( \Omega))\hookrightarrowhookrightarrow C([0,T];H^{-1}(\Omega)),\] \[W^{1-\alpha,2}(0,T;H_{0}^{1}(\Omega))\cap W^{1,r^{\prime}}(0,T;H ^{-1}(\Omega))\hookrightarrowhookrightarrow W^{1-\alpha,2}(0,T;L^{2}(\Omega)),\] which provides us with the strong convergences (as \(j\to\infty\)) \[\begin{split}\psi_{k_{j}}&\longrightarrow\psi \qquad\qquad\text{ in }C([0,T];H^{-1}(\Omega),\\ \partial_{t}^{1-\alpha}\psi_{k_{j}}&\longrightarrow \partial_{t}^{1-\alpha}\psi\qquad\text{ in }L^{2}(0,T;L^{2}(\Omega)).\end{split} \tag{4.44}\] ### (5) Limit process In this step, we pass to the limit \(j\to\infty\) in the time-integrated \(k_{j}\)-th Galerkin system (4.23). We use the derived convergences from the preceding result and show that the weak limit function \(\psi\) satisfies the variational form of the time-fractional Fokker-Planck equation, i.e., \(\psi\) is a weak solution in the sense of Definition 4.1. We consider the time-integrated \(k_{j}\)-th Galerkin system \[\int_{0}^{T}\Big{(}\langle\psi_{k_{j}}^{\prime},\zeta\rangle_{H_{ 0}^{1}}+D(\partial_{t}^{1-\alpha}\nabla\psi_{k_{j}},\nabla\zeta)_{L^{2}}-(F \partial_{t}^{1-\alpha}\psi_{k_{j}},\nabla\zeta)_{L^{2}}\Big{)}\eta(t)\,\mathrm{ d}t\] \[=\int_{0}^{T}\Big{(}\langle f,\zeta\rangle_{H_{0}^{1}}-g_{\alpha} (D\nabla\psi_{0}-F\psi_{0},\nabla\zeta)_{L^{2}}\Big{)}\eta(t)\,\mathrm{d}t\] for all \(\zeta\in H_{k_{j}}\) and \(\eta\in C_{c}^{\infty}(0,T)\). Obviously, we are able to pass to the limit in all the terms thanks to the derived weak convergences. E.g., we have \[\int_{0}^{T}(F\partial_{t}^{1-\alpha}\psi_{k_{j}},\nabla\zeta)_{L^ {2}}\eta(t)\,\mathrm{d}t \leq F_{\infty}\|\partial_{t}^{1-\alpha}\psi_{k_{j}}\|_{L^{2}L^{2} }\|\nabla\zeta\|_{L^{2}}\|\eta\|_{L^{2}}\] \[\leq C\|\partial_{t}^{1-\alpha}\psi_{k_{j}}\|_{L^{2}H_{0}^{1}},\] for all \(\zeta\in H_{k_{j}}\), \(\eta\in C_{c}^{\infty}(0,T)\). Since it holds the weak convergence \[\partial_{t}^{1-\alpha}\psi_{k_{j}}\rightharpoonup\partial_{t}^{1-\alpha}\psi _{k_{j}}\text{ in }L^{2}(0,T;H_{0}^{1}(\Omega)),\] see (4.42), it yields (as \(j\to\infty\)) \[\int_{0}^{T}(F\partial_{t}^{1-\alpha}\psi_{k_{j}},\nabla\zeta)_{L^{2}}\eta(t)\, \mathrm{d}t\longrightarrow\int_{0}^{T}(F\partial_{t}^{1-\alpha}\psi,\nabla \zeta)_{L^{2}}\eta(t)\,\mathrm{d}t,\] for all \(\zeta\in\cup_{j}H_{k_{j}}\). We observe that \(\cup_{j}H_{k_{j}}\) is dense in \(H_{0}^{1}(\Omega)\), which implies that the limit function \(\psi\) indeed solves the variational form of the time-fractional Fokker-Planck equation. **(6) Initial condition.** By the strong convergences, see (4.44), we obtain at \(t=0\) the convergence \(\psi_{k_{j}}(0)\to\psi(0)\) in \(H^{-1}(\Omega)\). However, it also holds \(\psi_{k}(0)=\Pi_{H_{k}}\psi_{0}\to\psi_{0}\) in \(H_{0}^{1}(\Omega)\) as \(j\to\infty\), from which we conclude \(\psi(0)=\psi_{0}\) by the uniqueness of limits. Therefore, \(\psi\) is a weak solution to the time-fractional Fokker-Planck equation in the sense of Definition 4.1. **(7) Uniqueness.** We consider two weak solutions \(\psi_{1}\) and \(\psi_{2}\) of the time-fractional Fokker-Planck equation in the sense of Definition 4.1. Both solutions shall have the same initial data \(\psi_{0}\) and outer force \(f\). 
We subtract the variational forms of \(\psi_{1}\) and \(\psi_{2}\) from each other, and we define \(\psi=\psi_{1}-\psi_{2}\), which satisfies \[\langle\partial_{t}\psi,\zeta\rangle_{H_{0}^{1}}+D(\partial_{t}^{1-\alpha} \nabla\psi,\nabla\zeta)_{L^{2}}-(F\partial_{t}^{1-\alpha}\psi,\nabla\zeta)_{L ^{2}}=0\quad\forall\zeta\in H_{0}^{1}(\Omega). \tag{4.45}\] We consider the test function \(\zeta=\partial_{t}^{1-\alpha}\psi(t)\in H_{0}^{1}(\Omega)\) for a.e. \(t\in(0,T)\), which yields together with Alikhanov's inequality, see (3.18), \[\frac{1}{2}\partial_{t}^{\alpha}\|\partial_{t}^{1-\alpha}\psi\|_{L^{2}}^{2}+D \|\partial_{t}^{1-\alpha}\nabla\psi\|_{L^{2}}^{2}\leq F_{\infty}\|\partial_{t }^{1-\alpha}\psi\|_{L^{2}}\|\nabla\partial_{t}^{1-\alpha}\psi\|_{L^{2}}.\] Furthermore, we apply Young's inequality to obtain \[\frac{1}{2}\partial_{t}^{\alpha}\|\partial_{t}^{1-\alpha}\psi\|_{L^{2}}^{2}+ \frac{D}{2}\|\partial_{t}^{1-\alpha}\nabla\psi\|_{L^{2}}^{2}\leq\frac{F_{ \infty}^{2}}{D}\|\partial_{t}^{1-\alpha}\psi\|_{L^{2}}^{2}. \tag{4.46}\] After convolving this inequality with \(g_{\alpha}\) and applying the extended Henry-Gronwall inequality with \(a\equiv 0\), see Lemma 3.2, the estimate (4.46) becomes \[\frac{1}{2}\|\partial_{t}^{1-\alpha}\psi(t)\|_{L^{2}}^{2}+\frac{D}{2}\| \partial_{t}^{1-\alpha}\nabla\psi\|_{L^{2}_{t}L^{2}}^{2}\leq 0.\] At this point, we further test the variational form (4.45) by \(\zeta=\psi(t)\in H_{0}^{1}(\Omega)\), which yields \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\psi\|_{L^{2}}^{2}+\frac{D}{2} \partial_{t}^{1-\alpha}\|\nabla\psi\|_{L^{2}}^{2}\leq F_{\infty}\|\partial_{t }^{1-\alpha}\psi\|_{L^{2}}\|\nabla\psi\|_{L^{2}}=0.\] We integrate this inequality and observe that it holds \(\|\psi(t)\|_{L^{2}}=0\) for any \(t\in(0,T)\) i.e. \(\psi_{1}=\psi_{2}\). ## 5. Numerical simulations Various numerical methods for time-fractional PDEs are summarized in the review article [12] and in the monographs [5, 21, 35]. We assume a discretization \(0=t_{0}<t_{1}<\cdots<t_{N}=T\) of the time interval \([0,T]\). We do not utilize an equispaced time mesh, but a nonuniform one by discretizing the early times in finer steps. In particular, we assume that the \(n\)-th time step is of the form \(t_{n}=(n/N)^{\gamma}T\) for \(\gamma\geq 1\). If it holds \(\gamma=1\), then we are again in a setting of a uniform mesh, see also Fig. 1 for a depiction of some time meshes for various values of \(\gamma\). We discretize the Caputo derivative by the nonuniform L1 scheme [12, Section 3.2], i.e., it reads \[\partial_{t}^{1-\alpha}\psi\approx\frac{1}{\Gamma(1+\alpha)}\sum_{j=0}^{n-1} \omega_{n-j-1,n}(\psi_{n-j}-\psi_{n-j-1}),\] where \(\psi_{n-j}\approx\psi(t_{n-j})\). The quadrature weights \(\{\omega_{k,n}\}_{k=0}^{n-1}\) are given by the formula \[\omega_{k,n}=\frac{(t_{n}-t_{k})^{\alpha}-(t_{n}-t_{k+1})^{\alpha}}{\Delta t_ {n-k}},\] where we introduced the notation \(\Delta t_{n-k}=t_{n-k}-t_{n-k-1}\). 
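To make the quadrature concrete, the following minimal NumPy sketch (an illustration, not part of the paper's implementation) builds the graded mesh \(t_{n}=(n/N)^{\gamma}T\) and evaluates the nonuniform L1 approximation of \(\partial_{t}^{1-\alpha}\psi\); here the difference quotient is taken over the subinterval \((t_{k},t_{k+1})\) spanned by the increment \(\psi_{k+1}-\psi_{k}\). The test with \(\psi(t)=t\), for which the Caputo derivative of order \(1-\alpha\) equals \(t^{\alpha}/\Gamma(1+\alpha)\), is reproduced exactly since the L1 scheme is based on piecewise linear interpolation.

```python
import numpy as np
from math import gamma

def graded_mesh(T, N, gam):
    """Nonuniform mesh t_n = (n/N)**gam * T; gam = 1 gives a uniform mesh."""
    return (np.arange(N + 1) / N) ** gam * T

def l1_caputo(psi_vals, t, alpha, n):
    """Nonuniform L1 approximation of the Caputo derivative of order (1 - alpha)
    of psi at t[n]; weights ((t_n - t_k)^alpha - (t_n - t_{k+1})^alpha) / (t_{k+1} - t_k)."""
    acc = 0.0
    for k in range(n):
        w = ((t[n] - t[k]) ** alpha - (t[n] - t[k + 1]) ** alpha) / (t[k + 1] - t[k])
        acc += w * (psi_vals[k + 1] - psi_vals[k])
    return acc / gamma(1.0 + alpha)

T, N, gam, alpha = 5.0, 100, 2.0, 0.5       # the setting t_n = 5 (n/100)^2 used below
t = graded_mesh(T, N, gam)
psi = t.copy()                              # test function psi(t) = t
approx = l1_caputo(psi, t, alpha, N)
exact = t[N] ** alpha / gamma(1.0 + alpha)  # Caputo derivative of order 1-alpha of t
print(approx, exact)
```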
We use the finite element space \(P_{1}\) for the space discretization and consequently, the fully discrete system reads \[\begin{split}&\Big{(}\frac{\psi^{n}-\psi^{n-1}}{\Delta t_{n}},\zeta\Big{)}_{H}+\sum_{j=0}^{n-1}\frac{\omega_{n-j-1,n}}{\Gamma(1+\alpha)}(D\nabla(\psi_{n-j}-\psi_{n-j-1}),\nabla\zeta)_{H}\\ &\quad-\sum_{j=0}^{n-1}\frac{\omega_{n-j-1,n}}{\Gamma(1+\alpha)}(\psi_{n-j}-\psi_{n-j-1},F(t_{n})\cdot\nabla\zeta)_{H}\\ &=(f(t_{n}),\zeta)_{H}-g_{\alpha}(t_{n})\cdot(D(t_{n})\nabla\psi_{0},\nabla\zeta)_{H}+g_{\alpha}(t_{n})\cdot(\psi_{0},F(t_{n})\cdot\nabla\zeta)_{H}\end{split} \tag{5.47}\] for any test function \(\zeta\). In particular, taking \(\zeta=1\) gives \[\int_{\Omega}\psi^{n}\,\mathrm{d}x=\int_{\Omega}\psi^{n-1}\,\mathrm{d}x+\Delta t_{n}\int_{\Omega}f(t_{n})\,\mathrm{d}x,\] i.e., the Fokker-Planck setting with \(f\equiv 0\) yields discrete mass conservation. We implement the discrete system in the open-source computing platform FEniCS, see [2]. We consider the space interval \(\Omega=(-5,15)\) with \(\Delta x=1/1024\) and the time interval \([0,T]\) with \(T=5\), where the \(n\)-th time step is given by \(t_{n}=5(n/100)^{2}\). Moreover, we select as the initial data the Gaussian \[\psi(0,x)=\psi_{0}(x)=\frac{1}{\sigma\sqrt{2\pi}}\mathrm{exp}\Big{(}-\frac{1}{2}\Big{(}\frac{x-\mu}{\sigma}\Big{)}^{2}\Big{)}\] for \(\sigma=0.1\) and \(\mu=2\). Regarding model parameters, we choose \(D=1\) and \(f\equiv 0\). We take the space-time dependent force \(F(t,x)=\sin(t)+x\) in Sec. 5.2 similar to [34, 38, 4, 22]. However, we first consider the case of an absent force \(F\equiv 0\) in Sec. 5.1, i.e., we are in the setting of a classical subdiffusion equation. In Sec. 5.3, we consider the physically questionable time-fractional Fokker-Planck equation with the Caputo derivative on the left-hand side, see again Sec. 2, and compare this model numerically to the physically meaningful model that we have analyzed in this work.

Figure 1. Nonuniform time meshes on the interval \([0,T]\) with \(t_{n}=(n/N)^{\gamma}T\) for \(\gamma\in\{1,1.5,2,3\}\) (top to bottom) and \(N=20\); the red nodes are \(\{0,N/2,N\}\) in all cases.

### Example 1: Subdiffusion equation As we consider \(F\equiv 0\) in this example, we essentially study the time-fractional heat equation \[\partial_{t}^{\alpha}\psi(x,t)=\Delta\psi(x,t),\] which is also referred to as the subdiffusion equation. We observe the typical behavior of a subdiffusive equation in the numerical simulations. At early times, the time-fractional model evolves faster than the integer-order model. In Fig. 2 (a), we see that the solution is more damped for \(\alpha<1\) than for \(\alpha=1\) at \(t=0.02\). Moreover, the damping is larger for smaller values of \(\alpha\). However, this behavior is exactly flipped if one considers a point further in time, e.g., \(t=0.5\) as depicted in Fig. 2 (b). After the initial fast evolution of the subdiffusion equation, the process is slower, and we observe that the smallest maximal value is attained for \(\alpha=1\) at \(t=0.5\). We can also observe that for \(\alpha=1\) the typical round shape is present, whereas for \(\alpha<1\) the tip at \(x=2\) is less round. We consider the time evolution for \(\alpha=1\) in Fig. 3 (a) and for \(\alpha=\frac{1}{2}\) in Fig. 3 (b). The typical diffusion process can be observed and again, we notice the spikier tip for \(\alpha=\frac{1}{2}\). Moreover, the support of the function is larger for smaller \(\alpha\). Lastly, we try to fit the solution \(\psi\) for different values of \(\alpha\). The goal is to analyze whether it is necessary to consider the more complicated (analytically and numerically) time-fractional model, or whether this model's behavior can be replicated by an integer-order model. This is done in Fig. 4, and we observe that the subdiffusive behavior cannot be imitated directly by the standard Fokker-Planck equation. Again, we observe the different support for each curve and the difference in the tip at \(x=2\). 
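As a cross-check of the qualitative behavior described above, a minimal finite-difference sketch of the subdiffusion equation \(\partial_{t}^{\alpha}\psi=D\Delta\psi\) on an interval is given below. It uses the uniform-mesh L1 weights and an implicit centered-difference Laplacian with homogeneous Dirichlet boundaries, as a simplified stand-in for the \(P_{1}\) finite element discretization in FEniCS used in this work; the grid, time step, and final time are illustrative choices.

```python
import numpy as np
from math import gamma

# illustrative parameters; the space interval (-5, 15) and the Gaussian initial data
# follow the setup above, while the grid and time step are coarser than in the FEniCS runs
alpha, D = 0.5, 1.0
xa, xb, nx = -5.0, 15.0, 400
T, nt = 0.5, 200
x = np.linspace(xa, xb, nx)
dx = x[1] - x[0]
tau = T / nt

sigma, mu = 0.1, 2.0
psi0 = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
psi0[0] = psi0[-1] = 0.0                     # homogeneous Dirichlet boundaries

# uniform-mesh L1 weights b_j = (j+1)^(1-alpha) - j^(1-alpha) and implicit system matrix
b = np.arange(1, nt + 1) ** (1 - alpha) - np.arange(0, nt) ** (1 - alpha)
lap = (np.diag(-2.0 * np.ones(nx - 2)) + np.diag(np.ones(nx - 3), 1)
       + np.diag(np.ones(nx - 3), -1)) / dx ** 2
c = gamma(2 - alpha) * tau ** alpha * D
A = np.eye(nx - 2) - c * lap                 # b_0 = 1

history = [psi0]
for n in range(1, nt + 1):
    # memory term sum_{j=1}^{n-1} b_j (psi^{n-j} - psi^{n-j-1}) of the L1 scheme
    mem = np.zeros(nx - 2)
    for j in range(1, n):
        mem += b[j] * (history[n - j][1:-1] - history[n - j - 1][1:-1])
    new = np.zeros(nx)
    new[1:-1] = np.linalg.solve(A, history[n - 1][1:-1] - mem)
    history.append(new)

# smaller alpha damps the peak more strongly at this early time, cf. Fig. 2 (a)
print("peak of psi at t =", T, ":", history[-1].max())
```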
### Example 2: Space-time dependent force This time, we consider the space-time dependent force \(F(x,t)=\sin(x)+t\) and therefore, we study the time-fractional Fokker-Planck equation \[\partial_{t}\psi(x,t)-\Delta\partial_{t}^{1-\alpha}\psi(x,t)+\text{div}(F(x,t)\partial_{t}^{1-\alpha}\psi(x,t))=g_{\alpha}D\Delta\psi_{0}-g_{\alpha}\text{div}(F\psi_{0}).\] Again, we observe the typical initial behavior of a subdiffusive equation. At the start, the time-fractional model evolves much faster than the integer-order model. In Fig. 5 (a), we see that the solution is more damped for \(\alpha<1\) than for \(\alpha=1\) at \(t=0.02\). However, this time, we observe that the symmetry of the probability density function \(\psi\) is lost for \(\alpha<1\). In the case of \(\alpha=\frac{1}{2}\) and \(\alpha=\frac{1}{4}\), the solution admits a large support up to the right end of the domain. In Fig. 5 (b), we have plotted \(\psi\) at a later time. We observe that the case \(\alpha=1\) is vastly different from the case of \(\alpha<1\). This is also pronounced by the fact that \(g_{\alpha}(t)\to 0\) as \(t\to\infty\) for \(\alpha<1\), but in the case of \(\alpha=1\) it holds \(g_{\alpha}(t)\equiv 1\), i.e., the right-hand side is just as large for all times. We consider the time evolution for \(\alpha=1\) in Fig. 3 (a) and for \(\alpha=\frac{1}{2}\) in Fig. 3 (b). The typical diffusion process can be observed and again, we notice the edgier tip for \(\alpha=\frac{1}{2}\). Moreover, the support of the function is larger for smaller \(\alpha\). Lastly, we try to fit the solution \(\psi\) for different values of \(\alpha\). This is done in Fig. 4, and we observe that the subdiffusive behavior cannot be imitated by an integer-order model. Again, we observe the different support for each curve and the difference in the tip at \(x=2\). ### Example 3: Model comparison We consider the model as introduced in (2.4) with no right-hand side, i.e., \[\partial_{t}^{\alpha}\psi(x,t)-D\Delta\psi(x,t)+\text{div}\big{(}F(t,x)\psi(x,t)\big{)}=0, \tag{5.48}\] and we discretize it in the same manner as done for the time-fractional Fokker-Planck equation in (5.47). Since this model has been studied in the literature, we want to give it some attention by comparing it to the physically meaningful model. Again, we consider \(F(x,t)=\sin(x)+t\). We compare the two models for \(\alpha=\frac{1}{4}\) in Fig. 8 (a) and for \(\alpha=\frac{3}{4}\) in Fig. 8 (b) for several time steps. We notice that the error gets larger for increasing time, and it is also more pronounced for smaller values of \(\alpha\). We argue that this results from the fact that these models coincide for \(\alpha=1\) and, by continuity in the fractional parameter, the difference only gets larger the further one is from \(\alpha=1\). Moreover, it holds \(g_{\alpha}(t)\to 0\) as \(t\to\infty\) for \(\alpha<1\) and therefore, it makes sense that asymptotically the right-hand side is negligible. ## Acknowledgments Supported by the state of Upper Austria.
2309.15868
Mechanical properties of single and polycrystalline solids from machine learning
Calculations of elastic and mechanical characteristics of non-crystalline solids are challenging due to high computation cost of $ab$ $initio$ methods and low accuracy of empirical potentials. We propose a computational technique towards efficient calculations of mechanical properties of polycrystals, composites, and multi-phase systems from atomistic simulation with high accuracy and reasonable computational cost. It is based on using actively learned machine learning interatomic potentials (MLIPs) trained on a local fragments of the polycrystalline system for which forces, stresses and energies are computed by using $ab$ $initio$ calculations. Developed approach is used for calculation the dependence of elastic moduli of polycrystalline diamond on the grain size. This technique allows one to perform large-scale calculations of mechanical properties of complex solids of various compositions and structures with high accuracy making the transition from ideal (single crystal) systems to more realistic ones.
Faridun N. Jalolov, Evgeny V. Podryabinkin, Artem R. Oganov, Alexander V. Shapeev, Alexander G. Kvashnin
2023-09-26T20:47:00Z
http://arxiv.org/abs/2309.15868v1
# Mechanical properties of single and polycrystalline solids from machine learning ###### Abstract Calculations of the elastic and mechanical characteristics of non-crystalline solids are challenging due to the high computational cost of _ab initio_ methods and the low accuracy of empirical potentials. We propose a computational technique for efficient calculations of the mechanical properties of polycrystals, composites, and multi-phase systems from atomistic simulation with high accuracy and reasonable computational cost. It is based on using actively learned machine learning interatomic potentials (MLIPs) trained on local fragments of the polycrystalline system for which forces, stresses, and energies are computed by using _ab initio_ calculations. The developed approach is used to calculate the dependence of the elastic moduli of polycrystalline diamond on the grain size. This technique allows one to perform large-scale calculations of the mechanical properties of complex solids of various compositions and structures with high accuracy, making the transition from ideal (single crystal) systems to more realistic ones. ## I Introduction Diamond is a widely used material due to its unique properties and, first of all, its unsurpassed hardness (varying from 60 to 120 GPa [1; 2; 3; 4] depending on conditions), which keeps it in constant demand in the manufacturing industry. Synthetic diamonds, which are mainly used in industry, are usually synthesized with a polycrystalline structure. Depending on the method of production and the parameters of the technological process, the size of the crystallites (grains) of such diamonds may vary from a few nanometers to tens of microns [5]. The mechanical properties of polycrystalline diamonds depend on the size of the grains [6]. In the case of large grains (about a micron) the specific volume of intergrain boundaries is not large, and the basic mechanical properties of such diamonds are close to those of a single crystal. However, the specific volume of inter-granular boundaries increases with decreasing grain size, which significantly affects the mechanical properties of diamonds. According to Refs. [7; 8] the elastic properties of polycrystalline diamond may even exceed the mechanical properties of single crystal diamond. Understanding how the properties of polycrystalline diamond depend on the grain size is important from a practical point of view, especially taking into account the technologies for the synthesis of polycrystalline diamonds from ultrafine diamond dust. The practical need for a comprehensive and accurate theoretical study of the effect of grain size in polycrystalline diamonds on their mechanical properties motivated this work. Perhaps the most adequate approach to study this problem is to simulate the system at the atomistic level. However, a critical aspect of atomistic simulation is the choice of a model of interatomic interaction. Traditionally, there are two approaches to such models, namely empirical potentials and _ab initio_ calculations. Empirical potentials are used to perform simulations of large atomistic systems because of their computational efficiency. Such models have a fixed functional form, constructed by physical insight, and have only several fitting parameters, which are chosen to reproduce the basic properties of single crystals and experimental results in simulation. The widely used empirical potentials for diamond are the Tersoff potential [9], the Brenner potential [10], and the ReaxFF force field [11]. 
However, the accuracy of empirical potentials may not be sufficient to reproduce the complex nature of the interactions in the region of inter-granular boundaries, where the structure differs from the regular crystal lattice to which the potentials were fitted. In the work by Erohin et al. [12] the nature of the ultra-high hardness of polycrystalline diamonds was theoretically studied by using molecular dynamics simulations with the Brenner potential [10]. The authors traced the evolution of the bulk modulus with the grain size and found structures with a bulk modulus higher than that of single crystal diamond. Despite the fact that the description of new atomic configurations in polycrystals by classical empirical potentials is questionable, this study put forward the idea that an unusually high bulk modulus may be caused by the anisotropic response of particular grains to the hydrostatic stress. This hardening mechanism seems quite plausible in view of its agreement with the reference experimental data. Among the quantum-mechanical methods, the most widely used for the description of materials properties is density functional theory (DFT) [13; 14]. DFT provides high-accuracy calculations of the energies and forces, but its practical application is limited to atomistic systems with several hundred atoms, which makes it inapplicable for describing the inter-granular boundaries. Recently, models of interatomic interaction based on machine learning have been rapidly developing and gaining popularity. They are designed to combine the computational efficiency of empirical potentials and the accuracy of quantum-mechanical models. In contrast to empirical potentials, the machine-learning interatomic potentials (MLIPs) have a flexible functional form that allows one to approximate any potential energy surface with a predetermined accuracy (at least theoretically) by increasing the number of parameters. Nowadays, there are several MLIPs which use different types of representations of crystal structures, such as GAP [15], MTP [16], NNP [17; 18], etc. The use of machine learning (ML) techniques in the context of atomistic simulation of materials has gained considerable momentum in the past decade [19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34]. Generally, in the training procedure the potential parameters are determined from the requirement of minimizing the deviation between the forces and energies predicted by the potential and those calculated from first principles on the configurations from the training set. However, if the atomistic configuration for which the energies and forces are calculated is significantly different from those present in the training set, extrapolation occurs and the prediction error may be unacceptably high. To resolve the extrapolation problem, the MLIP must recognize the configurations on which the extrapolation will occur. This procedure can be efficiently organized as learning on-the-fly [35]. This scheme [35] ensures that there is no extrapolation when calculating the energies and forces for atomistic configurations. In this work we propose an active learning method for MLIPs with automatic build-up of the local configuration fragments on which the potential extrapolates into periodic configurations with a regular periodic joint. The size of such configurations is small enough that they are suitable for DFT calculations. 
Thus, this work has two aims: (1) to study the dependence of the elastic properties of polycrystalline diamond on the grain size with an accuracy close to DFT, and (2) to test the active learning method on local environments with their build-up into periodic configurations. ## II Methods ### Machine learning interatomic potentials The development and dissemination of MLIPs have revolutionized computational materials science. The application of MLIPs makes it possible to solve problems previously considered unsolvable, or unreasonable to solve due to the enormous resource consumption. First of all, MLIPs enable the simulation of systems with a large number of atoms, as well as problems where the physical properties of a huge number of systems have to be calculated in a reasonable time. In particular, MLIPs enable the calculation of the nanohardness of various materials based on first principles [36], high-throughput screening and accelerated crystal structure prediction [35; 37], and long molecular dynamics simulations [38]. In this work we use the Moment Tensor Potentials (MTPs) [16] as the interatomic interaction model. MTPs belong to the class of local machine-learning potentials, where the total energy of the configuration is composed of contributions \(V\) of individual atoms (site energies) as \[E^{\text{mtp}}(\text{cfg})=\sum_{i=1}^{n}V(\mathfrak{n}_{i}). \tag{1}\] The site energy of atom \(i\) depends on its local atomic neighborhood \(\mathfrak{n}_{i}=\{z_{i},z_{j},\mathbf{r}_{ij}\}\), which is determined by the type of the central atom \(z_{i}\), by the types \(z_{j}\), and by the relative positions \(\mathbf{r}_{ij}=\mathbf{r}_{j}-\mathbf{r}_{i}\) of the neighboring atoms within the cutoff radius, \(|\mathbf{r}_{j}-\mathbf{r}_{i}|\leq R_{\text{cut}}\). The site energies \(V(\mathfrak{n}_{i})\) are calculated as a linear combination of basis functions \(B_{\alpha}(\mathfrak{n}_{i})\) \[V(\mathfrak{n}_{i})=\sum_{\alpha}\xi_{\alpha}B_{\alpha}(\mathfrak{n}_{i}). \tag{2}\] The coefficients \(\xi_{\alpha}\) of this linear combination are a subset of the parameters of the potential and are found in the training procedure. The definition of the basis functions is based on the moment tensor descriptors: \[M_{\mu,\nu}(\mathfrak{n}_{i})=\sum_{j}f_{\mu}(|r_{ij}|,z_{i},z_{j})\underbrace{\mathbf{r}_{ij}\otimes...\otimes\mathbf{r}_{ij}}_{\nu\text{ times}}. \tag{3}\] Here \(\underbrace{\mathbf{r}_{ij}\otimes...\otimes\mathbf{r}_{ij}}_{\nu\text{ times}}\) is a tensor of rank \(\nu\), \[f_{\mu}(|\mathbf{r}_{ij}|,z_{i},z_{j})=\sum_{\beta=1}^{N_{Q}}c^{(\beta)}_{\mu,z_{i},z_{j}}Q^{(\beta)}(|r_{ij}|), \tag{4}\] is a scalar radial function, where \(\left\{c^{(\beta)}_{\mu,z_{i},z_{j}}\right\}\) is the set of "radial" parameters, and \[Q^{(\beta)}(|r_{ij}|)=\begin{cases}\varphi^{(\beta)}(|r_{ij}|)(R_{\text{cut}}-|r_{ij}|)^{2}&|r_{ij}|<R_{\text{cut}}\\ 0&|r_{ij}|\geq R_{\text{cut}}\end{cases} \tag{5}\] are the radial basis functions based on Chebyshev polynomials \(\varphi^{(\beta)}\). The basis functions \(B_{\alpha}(\mathfrak{n}_{i})\) are constructed from \(M_{\mu,\nu}(\mathfrak{n}_{i})\) as various contractions of tensors of different ranks yielding a scalar. In addition to the energy of the configurations, the implementation of the MTP allows the calculation of the forces on atoms and the virial stresses of the configuration based on the analytical derivatives of \(E\) with respect to the positions of atoms. 
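As an illustration of Eqs. (1)-(5), the toy NumPy sketch below evaluates a few moment tensor descriptors \(M_{\mu,\nu}\) and simple rotation-invariant contractions for a single hand-made neighborhood. The radial function is a simplified stand-in (a power of the distance times the cutoff factor of Eq. (5)) rather than the Chebyshev expansion of Eq. (4), and the neighbor positions are arbitrary illustrative values, so this is not the actual MLIP package implementation.

```python
import numpy as np

R_CUT = 5.0  # cutoff radius in Angstrom (the value used for local environments in this work)

def radial(r, mu):
    # Simplified stand-in for f_mu of Eq. (4): a power of r times the smooth cutoff
    # factor (R_cut - r)^2 of Eq. (5); the real MTP uses a Chebyshev expansion instead.
    return (r ** mu) * (R_CUT - r) ** 2 if r < R_CUT else 0.0

def moment_tensor(r_neighbors, mu, nu):
    """M_{mu,nu} of Eq. (3): sum_j f_mu(|r_ij|) * (r_ij outer ... outer r_ij), nu times."""
    m = np.zeros((3,) * nu) if nu > 0 else 0.0
    for r_ij in r_neighbors:
        t = radial(np.linalg.norm(r_ij), mu)
        for _ in range(nu):
            t = np.multiply.outer(t, r_ij)   # build the rank-nu tensor r_ij^{(x)nu}
        m = m + t
    return m

# toy neighborhood: 4 neighbors of a central atom (relative positions in Angstrom)
neigh = np.array([[1.54, 0, 0], [0, 1.54, 0], [0, 0, 1.54], [-0.9, -0.9, -0.9]])

M00 = moment_tensor(neigh, mu=0, nu=0)                   # scalar descriptor
M01 = moment_tensor(neigh, mu=0, nu=1)                   # vector descriptor
M02 = moment_tensor(neigh, mu=0, nu=2)                   # rank-2 tensor descriptor
# simple rotation-invariant basis functions B_alpha obtained by full contraction:
B1 = M00 ** 2
B2 = float(M01 @ M01)
B3 = float(np.einsum("ab,ab->", M02, M02))
print(B1, B2, B3)
```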
The parameters of the radial functions \(\left\{c^{(\beta)}_{\mu,z_{i},z_{j}}\right\}\) together with the linear parameters \(\xi_{\alpha}\) form the set of parameters \(\theta\) of the MTP, which are found in the training procedure. This procedure minimizes the deviation between the energies, forces, and stresses computed by DFT and by the MTP over a set of configurations (training set): \[\sum_{k=1}^{K}\Bigl{[}w_{\mathrm{e}}\left(E^{\mathrm{mtp}}(\mathrm{cfg}_{k};\theta)-E^{\mathrm{dft}}(\mathrm{cfg}_{k})\right)^{2}+w_{\mathrm{f}}\sum_{i=1}^{N_{k}}\left|\mathbf{f}_{i}^{\mathrm{mtp}}(\mathrm{cfg}_{k};\theta)-\mathbf{f}_{i}^{\mathrm{dft}}(\mathrm{cfg}_{k})\right|^{2} \tag{6}\] \[+w_{\mathrm{s}}\bigl{|}\sigma^{\mathrm{mtp}}(\mathrm{cfg}_{k};\theta)-\sigma^{\mathrm{dft}}(\mathrm{cfg}_{k})\bigr{|}^{2}\Bigr{]}\to\min_{\theta}. \tag{7}\] A second-order Newton method is used as the minimization algorithm. ### Active learning on-the-fly with local atomistic environments Probably the main difficulty in using MLIPs is related to their transferability. Since the calculation of energies and forces by the MLIP can be seen as an interpolation of these quantities over the training set, it is important that the training set covers the domain of the configuration space where the energies and forces are calculated. Otherwise, extrapolation will occur and such predictions are likely to have very low accuracy. For example, an MLIP trained only on bulk configurations will extrapolate when calculating the energies and forces of a free surface. Therefore, when using an MLIP, it is important to have a mechanism for recognizing extrapolation (without performing first-principles calculations), which is also referred to as an active learning method. When an extrapolation is recognized, the corresponding configuration can be computed with DFT and then added to the training set. The training domain expands, and the MLIP will no longer extrapolate on that configuration. It should be noted that during MD simulations the trajectory can go beyond the training set, even if there was no extrapolation in the initial part of the MD trajectory. Therefore, one of the most efficient ways of using an MLIP is to run MD simulations with extrapolation control while learning the potential on-the-fly. Different MLIPs have their own methods allowing the recognition of extrapolation. For example, MLIPs based on Gaussian processes use the estimation of the prediction variance as such a mechanism [39]. Neural network-based MLIPs detect extrapolation based on monitoring the disagreement of a model committee [40]. For MTPs, the degree of extrapolation follows from the principle of maximum volume of the domain in configuration space spanned by the training set and is calculated with the MaxVol algorithm [41]. The degree of extrapolation can be estimated for the whole configuration, as well as for the atomistic neighborhoods \(\mathfrak{n}\) of individual atoms [42]. The second method allows one to detect local fragments of the configuration with potentially low accuracy of the force calculations. This is especially in demand when working with configurations with a large number of atoms. However, a problem arises here with obtaining the _ab initio_ data, due to the practical impossibility of calculating large configurations with DFT. This problem can be solved by cutting the extrapolating fragments out of the large configuration, with a number of atoms suitable for DFT calculations (in practice, usually not more than a couple of hundred atoms). 
In recent papers [36; 43] the extrapolative atomistic environments were simply cut out and further computed as non-periodic atomic clusters. Such an approach is reasonable when we deal with free surfaces in the simulated system. However, in our work only bulk configurations are treated, and training the potential on fragments with free surfaces would lead to an unreasonable expansion of the training domain into non-relevant areas with a subsequent decrease in the accuracy. Therefore, in this paper we implemented another approach based on the construction of periodic configurations from the cut fragments. Namely, this is done as follows. 1. We identify atomistic environments \(\mathfrak{n}\) on which the MLIP extrapolates (step (1) in Fig. 1). 2. From the whole configuration, we cut the atoms inside the cube containing the cutoff sphere with the extrapolative environment \(\mathfrak{n}\). The size of the cube may be slightly larger than \(2R_{\mathrm{cut}}\) (step (2) in Fig. 1). 3. Next we construct a periodic supercell from this cube, having cell parameters \(0.5\) Å larger on each side than the cut cube, to avoid the appearance of extremely short interatomic distances after applying periodicity (step (3) in Fig. 1). 4. In the resulting periodic configuration we relax the lattice vectors and the positions of all atoms outside the extrapolation sphere. The atoms inside the extrapolation sphere remain fixed and do not change their positions, which guarantees that the extrapolative environment does not change during relaxation. The relaxation in the last step is carried out in two stages: (1) a pairwise repulsive potential is used to remove too short interatomic distances, and (2) DFT is used for the calculations of energies, forces, and stresses. This essentially constructs a periodic joint similar to a regular intergranular boundary in the cell, and eliminates the formation of irrelevant atomistic fragments at it. ## III Computation details ### Generation of polycrystalline structure samples The very first step in the elastic moduli calculation is the generation of periodic polycrystalline samples. For this purpose we use the Voronoi tessellation method [44; 45; 46] as implemented in Atomsk [47]. The method splits a given periodic domain into a specified number of grains with random shapes and orientations. The computational domain had a cubic shape with the size \(4\times 4\times 4\) nm. By varying the number of grains we generated several diamond polycrystals with different grain sizes. For example, there are 4 grains in a polycrystal with 16 nm\({}^{3}\) average grain size and only 1 grain in a polycrystal with 64 nm\({}^{3}\) average grain size. To study the dependence of the mechanical properties on the grain size we generated polycrystalline samples with average grain volumes of 16, 21, 30, 40, 50 and 64 nm\({}^{3}\), see Fig. 2. In addition we have tested the convergence of the mechanical properties with respect to the size of the simulation box for the same average grain size (namely \(2\times 2\times 2\), \(4\times 4\times 4\), and \(8\times 8\times 8\) nm). ### _Ab initio_ calculations We used Density Functional Theory (DFT) as a first-principles method for training the MLIPs and for validation of the results. DFT calculations were performed with the projector augmented-wave density functional theory (PAW-DFT) [48; 14] as implemented in the VASP package [49; 50; 51; 52]. The generalized gradient approximation with the Perdew-Burke-Ernzerhof (GGA-PBE) [53] parametrization of the exchange-correlation functional was used. 
For each considered single crystal the PAW potentials were used according to the corresponding number of valence electrons to describe the electron-ion interactions. A plane-wave energy cutoff of 500 eV and Methfessel-Paxton [54] smearing of the electronic occupations ensured the convergence of the total energies. A \(\Gamma\)-centered \(k\)-point mesh of \(8\times 8\times 8\) was used for Brillouin zone sampling. For the potential energy minimization we used a built-in conjugate gradient method with a maximum net force tolerance of less than 0.01 eV/Å. For the initial training of the MLIP we actively selected atomistic configurations from _ab initio_ molecular dynamics (AIMD). The timestep for AIMD was chosen to be equal to 1 fs. The total time of each simulation was 2 ps. The plane-wave energy cutoff of 500 eV, the Methfessel-Paxton smearing [54] of the electronic occupations, and \(\Gamma\)-centered \(k\)-point meshes with a resolution of \(2\pi\times 0.04\) Å\({}^{-1}\) for the Brillouin zone sampling were used as implemented in VASP [49; 50; 51; 52]. This ensures the convergence of the energy differences and stress tensors. For more details about the training procedure and the calculation of MTP forces, readers are encouraged to check Ref. [55].

Figure 1: Schematic illustration of learning on the local atomistic environment. The region highlighted by red (1) contains the atoms with the highest extrapolative grade, which are then cut from the structure (2) and used to build the periodic configuration (3) for further DFT calculations of energy, forces, and stresses.

### Elastic moduli calculation The independent elastic constants for polycrystals were calculated following the standard atomistic simulation methodology as described in Ref. [56]. This methodology involves 5 steps. 1. Structure relaxation. 2. Applying a finite (about 1%) positive and negative strain to the structure in all nonequivalent directions. 3. Relaxation of the strained structure (with the fixed shape of the supercell). 4. Calculation of the stresses for the strained structures. 5. Calculation of the elastic constants from the stresses by finite differences. The elastic constants \(C\) relate the strain \(\epsilon\) and the stress \(\sigma\) in a linear fashion: \[\sigma_{ij}=\sum_{kl}C_{ijkl}\epsilon_{kl} \tag{8}\] For the elastic tensor calculation the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) package was used [57]. The values of the elastic moduli have been calculated for different polycrystalline samples with the same average grain size and averaged. It should be noted that the generated samples are typically not isotropic and \(C_{11}\)\(\neq\)\(C_{22}\)\(\neq\)\(C_{33}\), \(C_{12}\)\(\neq\)\(C_{13}\)\(\neq\)\(C_{23}\), \(C_{44}\)\(\neq\)\(C_{55}\)\(\neq\)\(C_{66}\). At the same time, polycrystalline diamond can be considered isotropic at a large scale. This fact allows us to consider \(C_{22}\) and \(C_{33}\) calculated for the same polycrystalline structure as additional sampled values for \(C_{11}\). Similarly \(C_{13}\) and \(C_{23}\) are sampled values for \(C_{12}\), and \(C_{55}\), \(C_{66}\) are sampled values of \(C_{44}\). Thus the elastic constants calculated for one polycrystalline structure yield 3 sampled values each of \(C_{11}\), \(C_{12}\) and \(C_{44}\). To control the statistical error we used the k-means method with k=8. The statistical accumulation continued as long as the sample variance of the k-means clusters was larger than 5% of the average value. 
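The five-step procedure can be condensed into a small finite-difference sketch. Below, the stress response is produced by a toy linear-elastic model with a known stiffness matrix, standing in for the stress evaluations of the relaxed, strained cells done with the MTP in LAMMPS; the recovered constants then simply reproduce the input matrix, which makes the bookkeeping of the ±1% strains and the central differences easy to verify. All numerical values are illustrative.

```python
import numpy as np

# toy stiffness matrix (Voigt notation, GPa) standing in for the true material response;
# values are illustrative and only loosely inspired by cubic diamond
C_true = np.zeros((6, 6))
C11, C12, C44 = 1050.0, 130.0, 560.0
C_true[:3, :3] = C12
np.fill_diagonal(C_true[:3, :3], C11)
np.fill_diagonal(C_true[3:, 3:], C44)

def stress_voigt(strain_voigt):
    """Stand-in for step 4 (stress of the relaxed, strained cell computed with the MTP)."""
    return C_true @ strain_voigt

def elastic_constants(delta=0.01):
    """Steps 2-5: apply +/-delta strain along each Voigt direction and take finite differences."""
    C = np.zeros((6, 6))
    for j in range(6):
        eps = np.zeros(6)
        eps[j] = delta
        sig_plus = stress_voigt(eps)      # +1% strain
        sig_minus = stress_voigt(-eps)    # -1% strain
        C[:, j] = (sig_plus - sig_minus) / (2.0 * delta)
    return C

C_fd = elastic_constants()
print(np.allclose(C_fd, C_true))          # True: the finite differences recover the input constants
```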
### MTP construction via active learning on-the-fly Statistically reliable values of the elastic moduli are averaged from the values calculated for dozens or even hundreds of samples. At the same time, the calculation of the elastic moduli for one sample assumes a number of deformations and the application of a relaxation procedure to the sample structure.

Figure 2: Crystal structure of polycrystals with different grain volumes of a) 16, b) 21, c) 30, d) 40, e) 50, and f) 64 nm\({}^{3}\) generated and considered in our work. The carbon atoms in the amorphous and diamond structure are shown by orange and violet colors, respectively.

Thus, the energy, forces and stresses are evaluated with the MTP for atomistic configurations at each step of the deformation-relaxation procedure. Some of these configurations may have local fragments where the MTP extrapolates. In our scheme of active learning on-the-fly we evaluate the degree of extrapolation for each atomic environment in each configuration. If the extrapolation degree exceeds some critical value, the extrapolative fragment is processed with DFT and learned. This procedure is schematically shown in Figure 3. Active learning on-the-fly of the MTP from scratch, however, is not computationally efficient. Therefore we pre-trained our MTP in a passive manner with the atomistic configurations sampled from _ab initio_ molecular dynamics trajectories (step 1 in Fig. 3). For this purpose we performed _ab initio_ molecular dynamics with DFT of 64 atoms of a two-grain diamond over 1 ps (1000 timesteps). After the MTP training (step 2 in Fig. 3) we started the calculation of the elastic tensor of the studied system (steps 3-7 in Fig. 3). This was performed with the active selection of extrapolative configurations, and the so-called one-extrapolation-threshold scheme (\(\mathcal{Y}_{break}\)) was used [55]. Exceeding \(\mathcal{Y}_{break}\) indicates a very high extrapolation degree and possibly low accuracy in the prediction of the energy, forces and stresses. Therefore we terminate the elastic tensor calculation in order to retrain the MLIP. The value of \(\mathcal{Y}_{break}\) was chosen to be 11, as providing, according to our experience, the optimal balance between the accuracy of the MLIP and the frequency of retraining. A detailed description of the scheme is given in Ref. [55]. After reaching the termination condition (\(\mathcal{Y}\geq\mathcal{Y}_{break}\)) we select the configurations to be added to the training set among all extrapolative configurations for which the extrapolation was detected (step 4 in Fig. 3). The selection procedure is necessary to construct a new active set from the pool of extrapolative configurations. In step 5 we extract from the large configuration a cubic box containing the local atomic environment causing the extrapolation. The extrapolative atomic environment includes the central atom and its neighborhood within the cutoff sphere, which was taken as 5 Å. The local atomic structure extracted from the polycrystal typically contains around 100 atoms. At the next step (step 6) this atomic configuration is expanded to a periodic structure, with DFT relaxation of the atoms outside the extrapolative environment in order to minimize the energy of the periodic interface. At the same step the DFT calculations of the energy, forces, and stresses are performed. The configuration is then added to the initial training set with subsequent retraining of the MTP (steps 7 and 2 in Fig. 3). By using this scheme there is no need to consider the entire polycrystalline structure in the DFT calculations in order to actively learn the MTP. Thus, as soon as the first iteration of active learning of the MTP is finished, the MD simulation for the elastic tensor can be continued with the updated actively learned MTP until the critical value of extrapolation is reached again or the calculation of all configurations is finished. Each iteration of this scheme expands the training domain and improves the transferability of the MTP (i.e., the amount of extrapolation and the extrapolation degree are reduced). As was discussed above, a similar approach was recently used in Ref. [36] for the simulation of the nanohardness of single crystal compounds and in an MD run for copper [43]. The MTPs for single crystals were applied to diamond, Si, SiC, WC, and CrB\({}_{4}\). Detailed information about the obtained results for the studied single crystals is shown in Tables S1-S6 in the Supporting Information. ## IV Results and Discussion ### MTP for polycrystalline diamond The accuracy of the obtained MTP for single crystal diamond, as the base for learning the MTP for polycrystals, was estimated. The total energies, forces, and stresses calculated by DFT and their fitted values from the MTP for single crystal diamond are presented in Fig. 4. All metrics are presented for every configuration in the training set. For the calculated and fitted energies (Fig. 4a) the maximal absolute difference is \(6.7\times 10^{-2}\) eV, the average absolute difference is \(4.1\times 10^{-3}\) eV, and the RMS absolute difference is \(7.1\times 10^{-3}\) eV. The error distribution shows the relation between the calculated and fitted energies. It is highly symmetrical around zero and might be considered to be of Gaussian type. From this fact we can conclude that the MTP has no systematic bias towards overestimation or underestimation of the results.

Figure 3: Developed active learning bootstrapping iteration scheme for calculations of the mechanical properties of crystalline and non-crystalline solids.

For the calculated and fitted forces (see Fig. 4b) the maximal absolute difference, average absolute difference, and RMS absolute difference are \(1.3\) eV/Å, \(2\times 10^{-2}\) eV/Å, and \(2.2\times 10^{-2}\) eV/Å, respectively. 
Thus, as the first iteration of active learning of MTP was finished and MD simulation of elastic tensor can be continued with updated actively learned MTP until the critical value of extrapolation is reached again or calculation of all configuration is finished. Each iteration of this scheme expands the training domain and improves the transferability of the MTP (i.e., the amount of extrapolations and the extrapolation degree is reduced). As was discussed above, similar approach was recently used in Ref. [36] for simulation of nanohardness of single crystal compounds and in MD run for copper[43]. The MTPs for single crystals was applied for diamond, Si, SiC, WC, and CrB\({}_{4}\). Detailed information about obtained results for studied single crystals is shown in Tables S1-S6 in Supporting Information. ## IV 3. Results and Discussion ### MTP for polycrystalline diamond The accuracy of obtained MTP for single crystal diamond, as the base for learning of MTP for polycrystals was estimated. Calculated by DFT total energies, forces, stresses and their fitted quantities by MTP for single crystal diamond are presented in Fig. 4. All metrics are presented for every configuration in training set. For calculated and fitted energies (Fig. 4a) the maximal absolute difference is \(6.7\times 10^{-2}\) eV, average absolute difference is \(4.1\times 10^{-3}\) eV, and RMS absolute difference is \(7.1\times 10^{-3}\) eV. Error distribution shows the relation between calculated and fitted energies. It is highly symmetrical around zero and might be considered as Gaussian type. From this fact we can conclude that MTPs have no systematic bias towards the overestimation and underestimation of results. Figure 3: Developed active learning bootstrapping iteration scheme for calculations of mechanical properties of crystalline and non-crystalline solids. For calculated and fitted forces (see Fig. 4b) maximal absolute difference, average absolute difference, and RMS absolute difference are \(1.3\) eV/A, \(2\times 10^{-2}\) eV/A, and \(2.2\times 10^{-2}\) eV/Arespectively. For stresses we obtained the following values of \(2.5\), \(0.7\) and \(0.7\) kBar for maximal absolute difference, average absolute difference, and RMS absolute difference, respectively, see Fig. 4c. All obtained trend lines and calculated absolute differences for energies, forces and stresses can interpret an accurate predictive power of used MLIP. The accuracy of actively learnt MTP on local atomistic environment for polycrystalline diamond was also estimated. Calculated by DFT total energies, forces, stresses and fitted by MLIP only for local configurations extracted from polycrystal are presented in Fig. 5. For calculated and fitted energies (Fig. 5a) the maximal absolute difference is \(7.6\times 10^{-2}\) eV, average absolute difference is \(1.9\times 10^{-3}\) eV, and RMS absolute difference is \(7.9\times 10^{-3}\) eV. Error distribution shows the relation between DFT and MTP energies. For stresses we obtained the values of \(10.7\), \(2.7\) and \(3.1\) kBar for maximal absolute difference, average absolute difference, and RMS absolute difference, respectively, see Fig. 5b. All obtained trend lines and calculated absolute differences for energies, forces and stresses demonstrate an accurate predictive power of used MLIP. 
### Mechanical properties of polycrystalline diamond

Polycrystals can be considered as orthotropic materials, for which 9 independent second-order elastic constants have to be calculated, namely \(C_{11}\), \(C_{22}\), \(C_{33}\), \(C_{44}\), \(C_{55}\), \(C_{66}\), \(C_{12}\), \(C_{13}\), and \(C_{23}\). From the combination of these components of the elastic tensor the elastic moduli were determined via Voigt-Reuss-Hill averaging (a minimal numerical sketch of this averaging is given at the end of this subsection). The results of the calculations of the elastic moduli of polycrystalline diamond with different grain sizes, obtained with the MTP actively learned on local environments, are shown in Fig. 6. One can see that the bulk modulus of polycrystalline diamond increases with increasing average grain size, tending to the bulk modulus of single crystal diamond as the limiting case (horizontal dashed purple line in Fig. 6). For each grain size a number of structures (from \(23\) to \(100\)) were generated, which explains the spread of the calculated bulk modulus. For each grain size the sample variance \(S\) and sample mean \(M\) were calculated, and we continued generating structures and calculating elastic moduli until the statistical error became less than \(1\%\). The average grain size for which the diamond polycrystals were generated was selected by using a Gaussian process (GP) with the radial basis function (RBF) kernel. Initially we simulated \(2\) diamond polycrystals with average grain sizes of 16 and 64 \(nm^{3}\) and calculated the elastic constants for them according to our setup (Fig. 3). The results for the bulk modulus of these two polycrystals are shown in Fig. S2 in the Supporting Information. Based on these results we determined the confidence parameters in the GP that define the grain sizes of further polycrystals, chosen so as to minimize the confidence parameters, see Fig. S2 in the Supporting Information. The other sizes, namely 40, 30, 50, and 22 \(nm^{3}\) (in this order), were then added for consideration to minimize the confidence parameter in the GP (grey area in Figure 6).

Figure 4: Calculated by DFT and fitted by MLIP values of (a) total energy with error distribution, (b) forces, and (c) stresses, obtained for the MTP for single crystal diamond.

The obtained bulk moduli of the considered polycrystals show monotonic growth from 400 GPa (close to the amorphous carbon structure) to 480 GPa for the structure with an average grain volume of 64 nm\({}^{3}\), see Figure 6. The average value of the bulk modulus for the polycrystal with the largest grain size is about 480 GPa, which is below the calculated value for single crystal diamond of 550 GPa and is within the confidence interval of our calculations. To understand how the grain size influences the ductile and brittle behavior of the polycrystals we calculated the Pugh-Pettifor [58] criterion as shown in Fig. 6b. The correlation between (C\({}_{12}\)-C\({}_{44}\))/B and G/B allows us to determine the ductility and brittleness of the polycrystals. One can see that polycrystals with small grain sizes (16, 22 \(nm^{3}\)) are more ductile compared to those with larger grain sizes, see Fig. 6b. As the grain size increases the polycrystals become more brittle. The average G/B ratio for polycrystals with grains of 64 \(nm^{3}\) is 0.775 and the maximum value is about 0.82, see Fig. 6b. According to these data the mechanical stiffness of the considered polycrystals does not exceed that of single crystal diamond (G/B of 0.81).
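As referenced above, the Voigt-Reuss-Hill averaging and the Pugh ratio follow directly from the 6×6 stiffness matrix. The sketch below illustrates this calculation; the numerical values of the stiffness matrix are purely illustrative and are not the constants computed in this work.

```python
import numpy as np

def vrh_moduli(C):
    """Voigt-Reuss-Hill bulk and shear moduli from a 6x6 stiffness matrix C (GPa)."""
    S = np.linalg.inv(C)  # compliance matrix
    B_V = (C[0, 0] + C[1, 1] + C[2, 2] + 2 * (C[0, 1] + C[0, 2] + C[1, 2])) / 9.0
    G_V = (C[0, 0] + C[1, 1] + C[2, 2] - (C[0, 1] + C[0, 2] + C[1, 2])
           + 3 * (C[3, 3] + C[4, 4] + C[5, 5])) / 15.0
    B_R = 1.0 / (S[0, 0] + S[1, 1] + S[2, 2] + 2 * (S[0, 1] + S[0, 2] + S[1, 2]))
    G_R = 15.0 / (4 * (S[0, 0] + S[1, 1] + S[2, 2]) - 4 * (S[0, 1] + S[0, 2] + S[1, 2])
                  + 3 * (S[3, 3] + S[4, 4] + S[5, 5]))
    return 0.5 * (B_V + B_R), 0.5 * (G_V + G_R)   # Hill averages

# Illustrative (hypothetical) stiffness matrix in GPa, cubic-like for simplicity.
C = np.zeros((6, 6))
C[:3, :3] = 230.0
np.fill_diagonal(C[:3, :3], 850.0)
C[3, 3] = C[4, 4] = C[5, 5] = 370.0

B, G = vrh_moduli(C)
print(f"B = {B:.1f} GPa, G = {G:.1f} GPa, Pugh ratio G/B = {G / B:.2f}")
```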
Thus, according to the Pugh-Pettifor [58] criterion, all considered polycrystals with various grain sizes are less brittle than single crystal diamond.

Figure 5: Calculated by DFT and fitted by MLIP values of (a) total energy with error distribution and (b) stresses, for the local configurations extracted from the polycrystal for active learning.

## V Conclusion

We have developed an active learning bootstrapping iteration scheme for calculations of the elastic tensor of complex solids, namely composites, polycrystals, and multiphase systems, by using machine learning interatomic potentials with active learning on local atomic environments. Our scheme allows one to achieve high accuracy in simulating the elastic properties of complex solids. The proposed scheme was used to calculate the elastic tensor and elastic moduli both for single crystals with various structures and compositions and for a polycrystalline structure. To evaluate our approach, diamond polycrystals were assessed, and the resulting elastic properties were compared to existing reference data, demonstrating excellent conformity and precision. The developed approach allows one to study the mechanical properties of materials as they are usually synthesized and used in experiments, i.e. noncrystalline materials. This enables comprehensive investigations into the mechanical properties of complex materials, such as polycrystals and composites, bringing the obtained data closer to that found in experiments.

## VI Competing Interests

The Authors declare no Competing Financial or Non-Financial Interests.

###### Acknowledgements.

This work was carried out using the Oleg supercomputer of the Computational Materials Discovery Laboratory and the _ElGatito_ and _LaGatita_ supercomputers of the Industry-Oriented Computational Discovery group at the Skoltech Project Center for Energy Transition and ESG.
2309.06182
Quantum algorithms for the simulation of perturbative QCD processes
Quantum computers are expected to give major speed-ups for the simulation of quantum systems. In these conference proceedings, we discuss quantum algorithms for the simulation of perturbative Quantum Chromodynamics (QCD) processes. In particular, we describe quantum circuits for simulating the colour part of the interactions of quarks and gluons. We implement our circuits on a simulated noiseless quantum computer and validate them by calculating colour factors for various examples of Feynman diagrams.
Herschel A. Chawdhry, Mathieu Pellen
2023-09-12T12:50:37Z
http://arxiv.org/abs/2309.06182v1
# Quantum algorithms for the simulation of perturbative QCD processes ###### Abstract: Quantum computers are expected to give major speed-ups for the simulation of quantum systems. In these conference proceedings, we discuss quantum algorithms for the simulation of perturbative Quantum Chromodynamics (QCD) processes. In particular, we describe quantum circuits for simulating the colour part of the interactions of quarks and gluons. We implement our circuits on a simulated noiseless quantum computer and validate them by calculating colour factors for various examples of Feynman diagrams. Pre-print numbers: FR-PHENO-2023-10, OUTP-23-09P Introduction Perturbative Quantum Chromodynamics (QCD) calculations provide high-precision predictions of the scattering of fundamental particles, especially at hadron colliders, and are therefore a vital part of the Large Hadron Collider (LHC) physics program. Calculational complexity presents a key limiting factor in producing these predictions and so the development of new computational techniques is central to advancing the state of the art. In this conference proceedings paper, based on our article [1], we explore whether future quantum computers could help perform perturbative QCD calculations. In particular, as a first step towards this goal, we focus here on the simulation of colour in perturbative QCD using a quantum computer. Quantum computing was first proposed 4 decades ago [2, 3] and has been of great interest over the years because for certain problems it promises large speed-ups. In particular, it promises exponential speed-ups for prime factorisation [4] and quadratic speed-ups for generic unstructured search problems [5] (of which Monte Carlo integration is an example). A further application is the simulation of quantum systems: since quantum computers perform calculations by manipulating the quantum states of a system, it is natural to use a quantum computer to simulate other quantum systems. In particular, active fields of research exist studying methods to use quantum computers to perform simulations of quantum chemistry [6, 7], condensed matter systems [8, 9], and lattice QCD [10, 11]. In contrast to the many proposals in recent years for the quantum simulation of lattice QCD, the quantum simulation of perturbative QCD has largely remained unexplored, with the exception of some work on parton showers [12, 13, 14, 15]. In this work we take the first steps towards the simulation of generic perturbative QCD processes by presenting algorithms for the quantum simulation of colour. Compared to the kinematic components of QCD calculations, colour is relatively simple but it is still a good starting point since it presents some of the general challenges of using a quantum computer to simulate perturbative QCD. The colour parts of calculations therefore provide a useful simplified setup in which to develop general techniques, while allowing the results to be verified against analytic expectations. One should note, however, that for a sufficiently complicated QCD process, even the colour part would become non-trivial to calculate analytically, and in those cases a quantum simulation of colour could be a valuable standalone result. Research on this topic is timely. Although the idea of quantum computer has been around for 30-40 years with steady incremental progress on the hardware and software sides, recent years have seen notable commercial interest and increased prospects for the emergence of practical machines in the coming years. 
In particular, IBM has since 2019 produced a series of quantum computers with several hundred qubits, albeit subject to hardware noise and without full connectivity, and over the next few years the company aims to increase this to several thousand qubits and implement error correction. Other companies such as Google and Microsoft have also invested in this area and are aiming to produce an error-corrected general-purpose quantum computer within a decade. In light of this, there have been various applications proposed in the experimental and theoretical branches of high-energy physics [1, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37]. There are several specific motivations for applying quantum computing to simulate perturbative QCD. One reason is that perturbative QCD requires the quantum-coherent summation of many contributions (e.g. from many Feynman diagrams), and this is something that quantum computers are naturally suited to do because they are inherently designed to manipulate quantum superpositions of states. It also follows that QCD processes with high-multiplicity final states, which are currently described by parton showers, could be well-suited to these quantum computing techniques. More generally, if perturbative QCD processes can be simulated on a quantum computer, then one could subsequently use existing quantum algorithms that are known to give quantum speed-ups, for example quadratic speed-ups when using quantum computers for Monte-Carlo integration. ## 2 Quantum circuits for colour In this section we will describe quantum circuits for simulating colour in perturbative QCD. We will work in the quantum circuit model of quantum computing, which is one of the most widely used models. It is based on the concept of qubits, i.e. two-state quantum systems like spin-half particles, which are represented on a quantum circuit diagram as horizontal lines. The operations performed on them are called gates, analogously to the and or gates used in classical computing. Since quantum mechanical operations are linear, they are represented as matrices. A single-qubit operation is represented by a 2-by-2 matrix, and in general an operation acting on \(n\) qubits at the same time is represented by a \(2^{n}\)-by-\(2^{n}\) matrix acting on the \(2^{n}\) basis states of those \(n\) qubits. The matrices must be unitary, since quantum mechanical operators are always unitary. Let us start by briefly recalling how colour is calculated in QCD. Given a Feynman diagram, the corresponding term in the amplitude contains a factor \(T_{ij}^{a}\) for each quark-gluon vertex, and a factor \(f^{abc}\) for each triple-gluon vertex, where \(T_{ij}^{a}\) are the generators of \(\mathfrak{su}(3)\) in the defining representation and \(f^{abc}\) are the structure constants of \(\mathfrak{su}(3)\). For example, the quark self-energy diagram shown on the left of Fig. 1 has colour factor \[\mathcal{C}=\sum_{\begin{subarray}{c}a\in\{1,\ldots,8\}\\ i,j,k\in\{1,2,3\}\end{subarray}}T_{ij}^{a}T_{jk}^{a}\delta_{ik}, \tag{1}\] where the Feynman rules require us to sum over intermediate states \(j\in\{1,2,3\}\) and \(a\in\{1,\ldots,8\}\), and in this case we have further opted to trace over the initial colour \(i\) and final colour \(k\) of the quark line. 
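Because the colour factor in eq. (1) involves only the \(\mathfrak{su}(3)\) generators, it can be cross-checked classically before any quantum simulation. The following sketch (an illustration added here, not code from ref. [1]) evaluates eq. (1) and one structure constant directly from the Gell-Mann matrices:

```python
import numpy as np

# The eight Gell-Mann matrices lambda^a; the generators are T^a = lambda^a / 2.
l = np.zeros((8, 3, 3), dtype=complex)
l[0, 0, 1] = l[0, 1, 0] = 1
l[1, 0, 1], l[1, 1, 0] = -1j, 1j
l[2, 0, 0], l[2, 1, 1] = 1, -1
l[3, 0, 2] = l[3, 2, 0] = 1
l[4, 0, 2], l[4, 2, 0] = -1j, 1j
l[5, 1, 2] = l[5, 2, 1] = 1
l[6, 1, 2], l[6, 2, 1] = -1j, 1j
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T = l / 2

# Colour factor of eq. (1): sum_a Tr(T^a T^a) = N_c * C_F = 3 * 4/3 = 4.
C = sum(np.trace(T[a] @ T[a]) for a in range(8))
print("colour factor:", C.real)           # expected 4.0

# Structure constants from f^{abc} = -2i Tr([T^a, T^b] T^c); e.g. f^{123} = 1.
def f(a, b, c):
    comm = T[a] @ T[b] - T[b] @ T[a]
    return (-2j * np.trace(comm @ T[c])).real

print("f^{123} =", f(0, 1, 2))            # expected 1.0
```

Such classical evaluations are what the quantum circuit outputs are later validated against in sec. 5.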
Noting that the generators \(T_{ij}^{a}\) are linear operators (and are by convention written in terms of the Gell-Mann matrices \(\lambda^{a}\) by defining \(T^{a}=\frac{1}{2}\lambda^{a}\)) and that quantum gates are linear operators, it is natural to ask whether the \(T_{ij}^{a}\) can be implemented as quantum gates and hence be used to simulate the colour part quark-gluon interactions. We will find that the short answer is yes, but there are complications. One relatively minor complication is that the matrices are not of the form \(2^{n}\)-by-\(2^{n}\), required for the reasons stated above. A second, more important complication is that the Gell-Mann matrices are not unitary (but are instead Hermitian), as can be immediately seen by observing that most of them contain a row that is entirely zero. Details on the resolution of these issues can be found in our article [1]. The key results of this work are two quantum gates, \(Q\) and \(G\), which simulate the colour parts of the quark-gluon and triple-gluon interactions respectively. Our intention is that these gates can then be composed together, matching the factors appearing in a Feynman diagram calculation, and hence simulate the colour part of the perturbative calculation of a scattering process. In this conference proceeding we will only give a high-level overview of how these gates are used, and refer the interested reader to our article [1] for the detailed designs of these gates. Since each gluon has 8 basis colour states, it can be represented by the \(2^{3}=8\) basis states of 3 qubits. The 3 basis colour states of a quark are represented by 3 of the \(2^{2}=4\) basis states of 2 qubits, the \(4^{\text{th}}\) state remaining unused. The \(Q\) gate acts on 3 qubits representing a gluon, 2 qubits representing a quark line, and some extra qubits \(\mathcal{U}\) (whose purpose will be described later). If the gluon qubits are in a colour basis state \(\ket{a}_{g}\), where \(a\in\{1,\ldots,8\}\), and the quark qubits are in a colour basis state \(\ket{k}_{q}\), where \(k\in\{1,2,3\}\), and if the qubits \(\mathcal{U}\) are in a special reference state \(\ket{\Omega}_{\mathcal{U}}\equiv\ket{0\ldots 0}_{\mathcal{U}}\), then \(Q\) acts in the following way: \[Q\ket{a}_{g}\ket{k}_{q}\ket{\Omega}_{\mathcal{U}}=\sum_{j=1}^{3}T_{jk}^{a}\ket{ a}_{g}\ket{j}_{q}\ket{\Omega}_{\mathcal{U}}+\left(\text{terms orthogonal to }\ket{\Omega}_{\mathcal{U}}\right). \tag{2}\] Since quantum gates are linear operators, if the quark qubits (or gluon qubits, or both) are in superpositions of colour basis states, possibly entangled with other qubits in the circuit, then \(Q\) acts linearly on each basis component in the superposition. For the triple-gluon interaction, we have designed a gate \(G\) acting on 3 registers, each of which comprises 3 qubits to represent the colour of a gluon as before. Given 3 gluon registers \(g_{1}\), \(g_{2}\), \(g_{3}\) in colour basis states \(\ket{a}_{g_{1}}\), \(\ket{b}_{g_{2}}\), \(\ket{c}_{g_{3}}\), the \(G\) gate acts in the following way: \[G\ket{a}_{g_{1}}\ket{b}_{g_{2}}\ket{c}_{g_{3}}\ket{\Omega}_{\mathcal{U}}=f^{ abc}\ket{a}_{g_{1}}\ket{b}_{g_{2}}\ket{c}_{g_{3}}\ket{\Omega}_{\mathcal{U}}+ \left(\text{terms orthogonal to }\ket{\Omega}_{\mathcal{U}}\right), \tag{3}\] where we include the same extra qubits \(\mathcal{U}\) as above. 
The reason for including \(\mathcal{U}\) can now be seen: while multiplying by \(f^{abc}\) is not a unitary operation, the inclusion of extra qubits \(\mathcal{U}\) allows a unitary operation (3) to be defined. We call these extra qubits the _unitarisation register_ and in our article [1] we give a detailed description of its usage. For now we just mention that the number of extra qubits is very small (logarithmic in the number of vertices in the Feynman diagram). We can then interpret eq. (3) to mean that when projected onto the special reference state \(\ket{\Omega}_{\mathcal{U}}\) of the unitarisation register, the equation simulates the colour interaction of 3 gluons.

## 3 Illustrative example

We will now work through a simple example in order to illustrate how the \(Q\) and \(G\) gates can be used. A generalisation to arbitrarily complicated cases will be given in sec. 4. Consider the Feynman diagram shown in Fig. 1. It has one quark and one gluon. As mentioned above, we use 2 qubits to represent the colour of the quark and 3 qubits to represent the colour of the gluon.

Figure 1: Example Feynman diagram (left) and a graphical representation of its corresponding circuit (right).

There is a complication: in order to be able to compute the trace, we introduce for each quark line a pair of 2-qubit registers \(q\) and \(\tilde{q}\), rather than just a single 2-qubit register. The extra register \(\tilde{q}\) is not affected by the simulation gates \(Q\) or \(G\), but instead exists solely to allow the computation of the trace, as will be seen shortly. We start the circuit in a reference state \(\ket{\Omega}_{g}\ket{\Omega}_{q}\ket{\Omega}_{\tilde{q}}\ket{\Omega}_{\mathcal{U}}\), where \(\ket{\Omega}_{r}\) indicates that each qubit of a register \(r\) is in the state \(\ket{0}\). We then apply a gate \(R_{g}\) to the gluon register to rotate it into an equal superposition of all 8 basis colour states: \[R_{g}\ket{\Omega}_{g}=\sum_{a=1}^{8}\frac{1}{\sqrt{8}}\ket{a}_{g}. \tag{4}\] The explicit form of \(R_{g}\) can be found in the Appendix of ref. [1]. The gate \(R_{q}\) (also defined in the Appendix of ref. [1]) is now applied to the quark registers to place them into the following equal superposition of states: \[R_{q}\ket{\Omega}_{q}\ket{\Omega}_{\tilde{q}}=\sum_{k=1}^{3}\frac{1}{\sqrt{3}}\ket{k}_{q}\ket{k}_{\tilde{q}}, \tag{5}\] where it should be observed that the \(q\) and \(\tilde{q}\) registers are entangled. Thus, after applying the \(R_{g}\) and \(R_{q}\) gates, the quantum computer is in the state \[\frac{1}{\sqrt{24}}\sum_{a=1}^{8}\sum_{k=1}^{3}\ket{a}_{g}\ket{k}_{q}\ket{k}_{\tilde{q}}\ket{\Omega}_{\mathcal{U}}. \tag{6}\] We now perform the key simulation steps, where we apply two \(Q\) gates corresponding to the two interaction vertices in the Feynman diagram in Fig. 1. We emphasise that \(Q\) does not act on the \(\tilde{q}\) register. We see from eq.
(2) that after applying the \(Q\) gate once, the state of the quantum computer becomes \[\frac{1}{\sqrt{24}}\sum_{\begin{subarray}{c}a\in\{1,\ldots,8\}\\ j,k\in\{1,2,3\}\end{subarray}}T_{jk}^{a}\ket{a}_{g}\ket{j}_{q}\ket{k}_{\tilde{q }}\ket{\Omega}_{\mathcal{U}}+\left(\text{terms orthogonal to }\ket{\Omega}_{ \mathcal{U}}\right) \tag{7}\] and after applying the second \(Q\) gate, the state becomes \[\frac{1}{\sqrt{24}}\sum_{\begin{subarray}{c}a\in\{1,\ldots,8\}\\ i,j,k\in\{1,2,3\}\end{subarray}}T_{ij}^{a}T_{jk}^{a}\ket{a}_{g}\ket{i}_{q} \ket{k}_{\tilde{q}}\ket{\Omega}_{\mathcal{U}}+\left(\text{terms orthogonal to }\ket{\Omega}_{ \mathcal{U}}\right). \tag{8}\] This looks somewhat like the desired colour factor, but it is not immediately accessible. In particular, the state contains a sum over \(a\) but each term \(T_{ij}^{a}T_{jk}^{a}\) multiplies a distinct state \(\ket{a}_{g}\) of the gluon register, which means that the desired summation \(\sum_{a}T_{ij}^{a}T_{jk}^{a}\) has not yet been performed. In order to perform the sum, we first observe by inverting eq. (4) that \(R_{g}^{-1}\) acting on any state \(\sum_{a=1}^{8}c_{a}\ket{a}_{g}\) of the gluon register would produce the state \[R_{g}^{-1}\sum_{a=1}^{8}c_{a}\ket{a}_{g}=\left(\frac{1}{\sqrt{8}}\sum_{a=1}^{8 }c_{a}\right)\ket{\Omega}_{g}+\left(\text{terms orthogonal to }\ket{\Omega}_{g}\right), \tag{9}\] effectively averaging over the coefficients of the 8 colour states \(\ket{a}_{g}\). Similarly, it can be seen by inverting eq. (5) that \(R_{q}^{-1}\) acting on any state \(\sum_{i,k\in\{1,2,3\}}c_{ik}\ket{i}_{q}\ket{k}_{\tilde{q}}\) of the \(q\) and \(\tilde{q}\) registers would produce the state \[R_{q}^{-1}\sum_{i,k\in\{1,2,3\}}c_{ik}\ket{i}_{q}\ket{k}_{\tilde{q}}=\left( \frac{1}{\sqrt{3}}\sum_{i=1}^{3}c_{ii}\right)\ket{\Omega}_{q}\ket{\Omega}_{ \tilde{q}}+\left(\text{terms orthogonal to }\ket{\Omega}_{q}\ket{\Omega}_{ \tilde{q}}\right), \tag{10}\] effectively performing a trace over quark colours. Note that tracing over external colours is not essential, but we have chosen to do so in order to allow each Feynman diagram to be validated by comparing a single number to the output of our quantum circuits. Thus, after applying the \(R_{g}^{-1}\) and \(R_{q}^{-1}\) gates to the state produced in eq. (8), we obtain the state \[\frac{1}{24}\left(\sum_{\begin{subarray}{c}a\in\{1,\ldots,8\}\\ i,j\in\{1,2,3\}\end{subarray}}T_{ij}^{a}T_{ji}^{a}\right)\ket{\Omega}_{g}\ket{ \Omega}_{q}\ket{\Omega}_{\tilde{q}}\ket{\Omega}_{\tilde{q}}+\left(\text{terms orthogonal to }\ket{\Omega}_{g}\ket{\Omega}_{q}\ket{\Omega}_{\tilde{q}}\ket{\Omega}_{ \tilde{q}}\ket{\Omega}_{\tilde{q}}\right). \tag{11}\] It can be observed in this state that the coefficient of the original reference state \(\ket{\Omega}_{g}\ket{\Omega}_{q}\ket{\Omega}_{\tilde{q}}\ket{\Omega}_{\tilde{ q}}\) encodes the colour factor (1) of the diagram. This result can be generalised to arbitrarily more complicated diagrams by adding more qubits and more \(Q\) and \(G\) gates, as will be explained in the next section. We note that the procedure described in this example can be applied at the level of either an unsquared diagram or a squared diagram, since for colour calculations the Feynman rules remain the same in both cases. ## 4 Calculating the colour factor of arbitrary Feynman diagrams It is straight-forward to generalise the illustrative example from sec. 3 to now calculate colour factors for Feynman diagrams with arbitrary numbers of quarks and gluons. 
Given an arbitrary Feynman diagram with \(N_{q}\) quark lines and \(N_{g}\) gluons, the procedure is as follows:

1. Create a quantum circuit with a 3-qubit gluon register \(g\) for each gluon, a pair of 2-qubit quark registers \(q\), \(\tilde{q}\) for each quark line, and a single unitarisation register \(\mathcal{U}\).
2. Initialise each register \(r\) to a reference state \(\ket{\Omega}_{r}\) in which each qubit is in the state \(\ket{0}\).
3. For each gluon, apply \(R_{g}\) to the corresponding register \(g\).
4. For each quark line, apply \(R_{q}\) to the corresponding pair of registers \(q\), \(\tilde{q}\).
5. For each quark-gluon vertex, apply a \(Q\) gate to the corresponding registers \(g\) and \(q\).
6. For each triple-gluon vertex, apply a \(G\) gate to the 3 corresponding \(g\) registers.
7. For each gluon, apply \(R_{g}^{-1}\) to the corresponding gluon register.
8. For each quark line, apply \(R_{q}^{-1}\) to the corresponding pair of quark registers \(q\), \(\tilde{q}\).

Just as with the illustrative example in sec. 3, the colour factor \(\mathcal{C}\) for the diagram is now found encoded in the final state of the quantum computer, which is \[\frac{1}{\mathcal{N}}\mathcal{C}\left|\Omega\right\rangle_{all}+\left(\text{terms orthogonal to }\left|\Omega\right\rangle_{all}\right)\,, \tag{12}\] where \(\mathcal{N}=N_{c}^{n_{q}}\left(N_{c}^{2}-1\right)^{n_{g}}\) and \[\left|\Omega\right\rangle_{all}=\left(\prod_{m=1}^{n_{g}}\left|\Omega\right\rangle_{g_{m}}\right)\left(\prod_{l=1}^{n_{q}}\left|\Omega\right\rangle_{q_{l}}\left|\Omega\right\rangle_{\tilde{q}_{l}}\right)\left|\Omega\right\rangle_{\mathcal{U}}\,. \tag{13}\]

## 5 Validation

To validate our methods, we implemented our circuits in Python using the IBM Qiskit framework. We used this to run our circuits on a simulated noiseless quantum computer. To verify that the state (12) is indeed being correctly produced, we ran each simulation \(10^{8}\) times, measuring the final state each time, and inferred the colour factor \(\mathcal{C}\) from the fraction of times that the output state was measured to be \(\left|\Omega\right\rangle_{all}\). While this is a simple and transparent way to verify the state (12), it is not the most efficient way and we emphasise that more sophisticated measurement schemes are possible such as quantum amplitude estimation [38, 39, 40, 41], which offers a quadratic speed-up. Nonetheless, it can be seen from the results in Table 1 that the measurements are fully consistent with the analytical expectation of the colour factors.

## 6 Summary and Outlook

In these proceedings, based on our article [1] in which full details can be found, we have designed quantum circuits to simulate the colour parts of perturbative QCD. As an example application, we have shown how they can be used for calculating the colour factors of arbitrary Feynman diagrams. This is a first step towards a full quantum simulation of perturbative QCD processes. Our work opens up several natural avenues for further exploration. Firstly, there is the interference of multiple Feynman diagrams. Quantum computers are naturally well-suited to this task, due to their ability to coherently manipulate quantum states, and we therefore believe that this extension should be straight-forward. Secondly, one can try to implement the kinematic parts of Feynman diagrams. This could re-use some of the ideas from this work, particularly the unitarisation register for implementing non-unitary operations.
However, one will also require methods to handle the much larger Hilbert space associated with kinematics. Thirdly, one can eventually seek to combine these components into a quantum computer-based Monte Carlo simulation of cross-sections in order to obtain a quadratic speed-up over classical Monte Carlo simulations.

Table 1: Analytical versus numerically measured colour factors for the example Feynman diagrams considered (columns: Diagram, Analytical, Numerical; the diagrams and numerical entries are not reproduced here).

## Acknowledgements

The authors are grateful to Fabrizio Caola, Stefano Gogioso, Michele Grossi, and Joseph Tooby-Smith for helpful discussions. The research of H.C. is supported by ERC Starting Grant 804394 hipQCD. M.P. acknowledges support by the German Research Foundation (DFG) through the Research Training Group RTG2044. H.C. is grateful to the Galileo Galilei Institute for hospitality and support during the scientific program on "Theory Challenges in the Precision Era of the Large Hadron Collider," where part of this proceedings paper was written.
2309.15698
Deep Model Fusion: A Survey
Deep model fusion/merging is an emerging technique that merges the parameters or predictions of multiple deep learning models into a single one. It combines the abilities of different models to make up for the biases and errors of a single model to achieve better performance. However, deep model fusion on large-scale deep learning models (e.g., LLMs and foundation models) faces several challenges, including high computational cost, high-dimensional parameter space, interference between different heterogeneous models, etc. Although model fusion has attracted widespread attention due to its potential to solve complex real-world tasks, there is still a lack of complete and detailed survey research on this technique. Accordingly, in order to understand the model fusion method better and promote its development, we present a comprehensive survey to summarize the recent progress. Specifically, we categorize existing deep model fusion methods as four-fold: (1) "Mode connectivity", which connects the solutions in weight space via a path of non-increasing loss, in order to obtain better initialization for model fusion; (2) "Alignment" matches units between neural networks to create better conditions for fusion; (3) "Weight average", a classical model fusion method, averages the weights of multiple models to obtain more accurate results closer to the optimal solution; (4) "Ensemble learning" combines the outputs of diverse models, which is a foundational technique for improving the accuracy and robustness of the final model. In addition, we analyze the challenges faced by deep model fusion and propose possible research directions for model fusion in the future. Our review is helpful in deeply understanding the correlation between different model fusion methods and practical application methods, which can enlighten the research in the field of deep model fusion.
Weishi Li, Yong Peng, Miao Zhang, Liang Ding, Han Hu, Li Shen
2023-09-27T14:40:12Z
http://arxiv.org/abs/2309.15698v1
# Deep Model Fusion: A Survey

###### Abstract

Deep model fusion/merging is an emerging technique that merges the parameters or predictions of multiple deep learning models into a single one. It combines the abilities of different models to make up for the biases and errors of a single model to achieve better performance. However, deep model fusion on large-scale deep learning models (e.g., LLMs and foundation models) faces several challenges, including high computational cost, high-dimensional parameter space, interference between different heterogeneous models, etc. Although model fusion has attracted widespread attention due to its potential to solve complex real-world tasks, there is still a lack of complete and detailed survey research on this technique. Accordingly, in order to understand the model fusion method better and promote its development, we present a comprehensive survey to summarize the recent progress. Specifically, we categorize existing deep model fusion methods as four-fold: (1) "Mode connectivity", which connects the solutions in weight space via a path of non-increasing loss, in order to obtain better initialization for model fusion; (2) "Alignment" matches units between neural networks to create better conditions for fusion; (3) "Weight average", a classical model fusion method, averages the weights of multiple models to obtain more accurate results closer to the optimal solution; (4) "Ensemble learning" combines the outputs of diverse models, which is a foundational technique for improving the accuracy and robustness of the final model. In addition, we analyze the challenges faced by deep model fusion and propose possible research directions for model fusion in the future. Our review is helpful in deeply understanding the correlation between different model fusion methods and practical application methods, which can enlighten the research in the field of deep model fusion.

## 1 Introduction

In recent years, deep neural networks (DNNs) [129] have developed remarkably and are widely used in computer vision (CV) [175], natural language processing (NLP) [30] and other fields. Generally speaking, a single deep learning model often has certain limitations and cannot fully capture all the underlying information behind complex networks [195]. Therefore, classic ensemble learning [193, 15, 198] combines the outputs of multiple models to improve the final performance in deep learning (DL). However, it suffers from the high cost of storing and running multiple models at test time [65, 204], especially as the complexity and size of models increase: for example, GPT-3 [172] has billions of parameters, and PaLM [31] even reaches 540 billion parameters and 780 billion tokens. In addition, from the perspective of the loss landscape of DNNs [134, 196], gradient-optimized solutions usually converge to points near the boundary of a wide flat region instead of its central point [99]. This means that a trained network is generally not the solution with the minimum test error, and solutions near the relative optimum need to be fused for a better result. This inspires researchers not to limit the fusion scope to predictions only (e.g., logits, etc.), but also to include the fusion of model parameters, without accessing the training data or maintaining all individual models [110].
Accordingly, deep model fusion [111, 159] aims at fusing several DNNs into a single network that preserves their original capabilities and can even outperform multi-task training [3, 135]. In addition, deep model fusion can reduce the tendency of a single model to overfit particular samples or noise, so as to improve the accuracy, diversity and robustness of predictions [207, 223]. Deep model fusion has attracted increasing interest due to data privacy and practical resource-saving issues. Although the development of deep model fusion has brought many technical breakthroughs, it also produces a series of challenges, such as high computational load, model heterogeneity, and the slow speed of alignment via combinatorial optimization [133, 204], etc. Some approaches are limited to specific scenarios [227, 254], which inspires researchers to investigate the principles of model fusion in different cases. Nevertheless, there is currently a lack of comprehensive reviews that summarize these approaches and explain the internal mechanisms of deep model fusion. Some work only focuses on model fusion from a single perspective (e.g., feature fusion, etc.) [45, 195] or a specific scenario [213], or on the fusion of information from different modalities (multi-modal fusion [1, 103]) rather than the fusion of parameters. In order to give developers insight into deep model fusion, we analyze its principles and methodologies. In addition, we review recent progress and representative applications, such as federated learning (FL) [160] and fine-tuning [29], etc. Our survey aims to illustrate the latest trends and potential directions in deep model fusion and to provide a guideline for researchers to enhance performance and reduce costs. Accordingly, we group the approaches into four categories according to their internal mechanisms and purposes, as shown in Figure 1. For models trained independently that are not in the vicinity of each other, "mode connectivity" and "alignment" bring the solutions closer so as to obtain better initial conditions for averaging. For similar models with certain differences in weight space, "weight average (WA)" tends to average the models directly and obtain solutions closer to the optimal point in the region of the parameter space where the value of the loss function is low [118]. Furthermore, for the predictions of existing models, "ensemble learning" integrates different forms of predictions of the models to get better results.

Figure 1: Schematic diagram of the overall model fusion process, as well as the classification of, and connections between, the various methods.

Specifically, the four categories are as follows:

* **Mode connectivity** [61, 162]. The solutions obtained by gradient-based optimization can be connected in weight space by a path (connector) with no obstacles, which is referred to as mode connectivity [46, 50]. Along the low-loss path we can obtain other models that are more suitable for model fusion. According to the mathematical form of the path and the space where the connector is located, we divide this section into three parts: "linear mode connectivity (LMC) [66]", "non-linear mode connectivity" and "mode connectivity in subspace". Mode connectivity can alleviate local optimization problems during training. The geometric relationships between paths of mode connectivity [61, 162] could also be used to improve the convergence, stability and accuracy of optimization procedures like stochastic gradient descent (SGD).
In a word, mode connectivity provides a new perspective for interpreting and understanding the behaviors of model fusion [66]. However, the difficulties of computational complexity and parameter tuning still need to be addressed, especially when training models on large datasets.

* **Alignment.** Alignment [140, 218] matches the units of multiple models and then averages the models to obtain the final model. After alignment, specific mathematical metrics (e.g., the Euclidean distance [218]) between different models become smaller, which reduces the differences between the models and thus enhances the effect of deep model fusion. Alignment can be divided into "activation matching" and "weight matching" depending on whether the data distribution needs to be considered. Moreover, Re-basin [3] is introduced based on alignment; it explores the mechanism by which solutions can be transported into a single basin (i.e., a flat area of the parameter space with relatively low loss [61, 96]) via permutation invariance [50]. However, it often faces the obstacles of heavy computation, the slow speed of combinatorial optimization, and architecture differences, which make it difficult to extend to other scenarios with different objectives. For example, the memory burden that comes with graph matching [142, 230] limits the application of deep model fusion.

* **Weight average.** WA [227] is the most direct and efficient way to fuse several parent networks into a single network [159, 204]. Compared to mode connectivity and alignment, WA does not require additional computational complexity or training to find a superior starting point, and it performs well on models that share a certain degree of similarity. According to the space of aggregation, WA can be classified into two parts, "weight average" and "average in subspace". In addition, the typical approaches "model soup", "model arithmetic" and "stochastic weight averaging (SWA)" also provide significant improvements over existing methods. Furthermore, some bias may be introduced when the parameters are normalized and merged in the case of large differences in model structure or number of parameters. Nonetheless, WA is still the mainstream method of deep model fusion because of its simplicity and efficiency.

* **Ensemble Learning.** The outputs of several different models are combined to improve prediction performance and robustness, which is referred to as "ensemble learning" [195]. In this review, we focus on ensemble learning in DL. Based on ensemble learning, "model reuse" provides specifications for each model so that useful models can be identified and merged from a pool of models when given new learning tasks [177, 266]. Ensemble learning has various frameworks with convenient interfaces and is often used in practical areas such as object detection [20], etc. Although ensemble learning requires maintaining multiple trained models and running each of them at test time [204], it is still one of the powerful techniques that have been widely adopted in DL.

* **Applications of Model Fusion.** As a technology to improve the accuracy and robustness of deep models, model fusion promotes progress in many application fields. "Federated learning [160]", an application that aggregates clients' models on a central server, makes it possible for various parties to contribute data to the computation of functions (e.g., various statistics, classifiers [177]) without the risk of privacy disclosure.
"fine-tuning" makes small adjustments to pre-trained models, which combined with model fusion to reduce training costs and adapt to the needs of a specific task or domain. Model fusion is also involved in "distillation". That is, combine soft target knowledge from multiple complex models (teachers) to train a small model for specific requirements. "model fusion on foundation/LLMs" includes the work on large foundation models or large language models (LLMs), such as vision transformers (ViT) [79] and GPT [17], etc. The applications of model fusion help developers adapt to the needs of various tasks and domains and promote the development of DL. In brief, our survey reviews deep model fusion techniques. In the first three sections "mode connectivity", "alignment" and "weight average", we mainly conduct a comprehensive study from the perspective of the fusion of model parameters. In the "ensemble learning", we mainly investigate the issue from the perspective of model outputs aggregation. The main contributions of this work are summarized as: * We propose a new deep model fusion classification method from the perspectives of "mode connectivity", "alignment", "weight average" and "ensemble learning", which covers the theoretical synthesis approaches of model fusion, and provides guidance for the realization of high generalization and accuracy training of DNNs. * We compare the advantages and disadvantages of fusion approaches, and explain the mechanism and relationship between them, which provides inspiration for designing advanced model fusion methods in the future. * We summarize extensive application of deep model fusion. We also discuss current research trends so as to attract more attention and reflection in the future. Moreover, the remainder of the paper is organized as follows: In Section 2 to Section 5, we introduce the approaches of deep model fusion according to the four perspectives "mode connectivity", "alignment", "weight average" and "ensemble learning". Section 6 introduces the applications of deep model fusion "federated learning", "fine-tuning", "distillation" and "model fusion on foundation/LLMs". Finally, in Section 7, we summarize the deep model fusion and discuss the challenges and potential directions in the future. In addition, we illustrate the notations and their corresponding definitions in the full text. \(\mathbf{W}_{i}\) is the \(i_{th}\) neural network with weights \(W_{i}\in\mathbb{R}^{d}(i=1,2,...k)\) and bias term \(\mathbf{b}\). \(\lambda\) denotes weighted parameters. \(\sigma\) denotes a non-linear neuron activation function. \(\mathcal{L}\) is loss function that quantify the discrepancy between the predicted and actual values. ## 2 Mode Connectivity In this section, we introduce the definition, principles and related methods of mode connectivity. When training neural networks, the solutions trained by gradient-based optimization algorithms (e.g., SGD, etc.) can be merged without superior results [46, 61]. It is discovered that solutions can be connected via continuous paths (connectors) in the network weight space without increasing loss, which is referred to as mode connectivity [50, 66]. The models on the low-loss path can be fused to leverage the advantages of multiple models by mode connectivity, which is of great significance to produce a better aggregation model. \begin{table} \begin{tabular}{l l l l} \hline \hline Mode & & & \\ connectivity & The form of path & Ref. & Eq. 
\begin{table} \begin{tabular}{l l l l} \hline \hline Mode connectivity & The form of path & Ref. & Eq. \\ \hline Linear path & segment & [54, 58] & \(\phi(t)=\left(1-t\right)w_{1}+tw_{2}\) \\ & polygonal chain & [66, 69] & \(\phi(t)=\left\{\begin{array}{ll}2\left(tw+(0.5-t)w_{1}\right),&0\leq t\leq 0.5\\ 2\left((t-0.5)w_{2}+(1-t)w\right),&0.5\leq t\leq 1\end{array}\right.\) \\ \hline Non-linear path & quadratic Bezier curve & [52, 152] & \(\phi(t)=\left(1-t\right)^{2}w_{1}+2t(1-t)w+t^{2}w_{2},\quad 0\leq t\leq 1\) \\ & Fourier series & [234] & \(\hat{\phi}(t)=\frac{\beta_{0}}{2}+\sum_{i=1}^{n}\beta_{i}\cos\left(w_{i}t+\zeta_{i}\right)\) \\ \hline \hline \end{tabular} \end{table} Table 1: The summary of standard training pipelines of LMC and non-linear mode connectivity.

First, we explain the principles of mode connectivity. In a representative DL training process, a minimum is usually described as a point at the bottom of a convex valley, and the network parameters are determined by the location of the minimum [85, 116, 118]. The traditional view is that the number of local minima and saddle points is large [71, 228], and that different local minima converge to different isolated regions in the parameter space [10, 27, 39]. Recent work [125, 196] demonstrates that the minima obtained by gradient-based optimizers are not walled off in isolated valleys [61]. Gotmare et al. [72] explore the potential relationship between the minima found by different training processes. Other work [33, 46, 169, 182] shows that neural network solutions form a connected manifold (i.e., solutions in the loss landscape are connected by pipelines in weight space). In contrast to mode connectivity, a direct linear path connecting two such independently trained networks usually leaves the low-loss manifold, which creates a high loss barrier at points on the linear path. For example, the error at the midpoint of the line segment directly connecting two points is close to 90\(\%\) (VGG-16 on CIFAR-10 [66]). The above work establishes the existence and effect of mode connectivity. Second, some work [50, 59, 66] quantifies the pipelines of mode connectivity. Let \(\mathcal{L}\left(tw_{1}+(1-t)w_{2}\right)\) for \(t\in(0,1)\) be the loss (train or test error) of a neural network created by linearly interpolating between \(\mathbf{W}_{1}\) and \(\mathbf{W}_{2}\). The random data augmentations in each epoch can be seen as noise when using SGD with the initialization and hyperparameters fixed. To determine whether the result of a trained network is stable to SGD noise, the loss barrier (error barrier) \(B\left(w_{1},w_{2}\right)\) [60] is defined as the maximum difference between the loss along the linear interpolation and the linear interpolation of the losses of the two endpoints [50], as shown in Eq. (1): \[B\left(w_{1},w_{2}\right)=\sup_{t}\left\{\mathcal{L}\left(tw_{1}+(1-t)w_{2}\right)-\left[t\mathcal{L}\left(w_{1}\right)+(1-t)\mathcal{L}\left(w_{2}\right)\right]\right\}. \tag{1}\] The loss barrier indicates whether the error stays constant or increases when we move along the path between \(\mathbf{W}_{1}\) and \(\mathbf{W}_{2}\) in the loss landscape [56, 61]. If there is a tunnel between two networks with a barrier approximately equal to 0, this is equivalent to mode connectivity [46, 59, 60]. That is to say, the local minima obtained by SGD can be connected by a path \(\phi\) with the lowest maximum loss, as shown in Eq.
(2): \[\phi\left(w_{1},w_{2}\right)=\operatorname*{argmin}_{\phi\text{ from }\mathbf{W}_{1}\text{ to }\mathbf{W}_{2}}\left\{\max_{w\in\phi}\mathcal{L}(w)\right\}, \tag{2}\] which means that the loss is low along the pathway and the network is stable to SGD noise [46], as shown in Figure 2. There are two steps to conduct mode connectivity: first, determine the form of the tunnel (e.g., polygonal chain, Bezier curve [66], etc.), as in Table 1; then, find the optimal low-loss pathway connecting different solutions, as shown in Table 2. According to the form of the path and the space in which it is located, this section introduces "Linear mode connectivity", "Non-linear mode connectivity" and "Mode connectivity in subspace".

### Linear Mode Connectivity

In order to connect two points along an optimized low-loss path, we first need to determine the form of the tunnel. If the optimal path \(\phi^{*}\) is linear, it is called LMC. Common linear paths are the linear segment and the polygonal chain, as in Eq. (3): \[\phi_{w}(t)=\left\{\begin{array}{ll}2\left(tw+(0.5-t)w_{1}\right),&0\leq t\leq 0.5\\ 2\left((t-0.5)w_{2}+(1-t)w\right),&0.5\leq t\leq 1\end{array}\right., \tag{3}\] Here \(w_{1}\) and \(w_{2}\) are trained using the same hyperparameters but from different random initializations, and \(\phi_{w}(0)=w_{1},\phi_{w}(1)=w_{2}\). After deciding on the mathematical form of the tunnel, its specific parameters need to be determined. Garipov et al. [66] suggest minimizing the expectation of the loss \(\ell(w)\) over a uniform distribution, as in Eq. (4): \[\min_{w}\ell(w)=\min_{w}\mathbb{E}_{t\sim U(0,1)}\left[\mathcal{L}\left(\phi_{w}(t)\right)\right], \tag{4}\] In addition, the tunnel found in this way is not unique. Nevertheless, vanilla mode connectivity is not robust enough to withstand various types of adversarial attacks. Robust mode connectivity (RMC) [229] uses adversarial training (AT) [156] to find tunnels between neural networks that exhibit robustness to different types of adversarial attacks, as in Eq. (5): \[\min_{w}\ell(w)=\min_{w}\mathbb{E}_{t\sim U(0,1)}\sum_{i}\max_{\mathrm{Dist}_{i}(\mathbf{x}^{\prime},\mathbf{x})\leq\delta_{i}}\mathcal{L}\left(\phi_{w}(t);(x^{\prime},y)\right), \tag{5}\] where \(\delta_{i}\) are small thresholds and \(\mathrm{Dist}_{i}\) denotes a distance measurement function. The RMC path in the parameter space improves robustness to different types of attack. Some work complements LMC from a global connectivity perspective. Nguyen et al. [168] prove that when the number of neurons in a hidden layer is larger than a certain number of training samples, the loss function has no so-called bad local valleys, and all the global minima are connected in a large global valley. Shevchenko et al. [202] demonstrate that as the number of neurons increases (over-parameterization), the landscape of the multi-layer network becomes connected, which is more conducive to LMC. Although previous studies speculate that interconnected local minima in over-parameterized networks imply mode connectivity of the loss function, this does not always hold true (e.g., for over-parameterized two-layer networks [125]). Kuditipudi et al. [125] explain mode connectivity by noise stability [8, 60], which is somewhat equivalent to dropout stability. In other words, all noise-stable solutions can be connected in a sufficiently over-parameterized network. As for the practical application of LMC, Zhao et al. [263] suggest using LMC to repair backdoored or error-injected models. Neyshabur et al. [167] show the application of LMC to pre-trained visual models. Qin et al.
[186] explore the relationship between different downstream configurations and the mode connectivity of language models.

Figure 2: Mode connectivity schematic diagram in a two-dimensional loss landscape and in a subspace of another dimension. **Left:** Linear interpolation of the minima in the two basins results in high-loss barriers [46]. The lower two optima follow a path of near constant low loss (e.g., Bezier curve, polygonal chain, etc.) [66]. \(\pi(W_{2})\) is the equivalent model of \(W_{2}\) under permutation symmetry, which is located in the same basin as \(W_{1}\). Re-Basin merges models by delivering solutions to individual basins [3]. **Right:** Low-loss paths connect multiple minima in subspace (e.g., a low-loss manifold composed of \(d\)-dim wedges [56], etc.).

### Non-linear Mode Connectivity

In this subsection, we focus on non-linear pathways connecting solutions in weight space, which is known as non-linear mode connectivity [112, 186]. The Bezier curve is one of the representative forms of a non-linear path, as in Eq. (6): \[\phi_{w}(t)=(1-t)^{2}w_{1}+2t(1-t)w+t^{2}w_{2},\quad 0\leq t\leq 1. \tag{6}\] Convex combinations (LMC) of minima within a loss basin remain in the same basin; in contrast, minima connected only by non-linear paths are not located in the same basin, which means that LMC is not available in some cases. Recent work [46, 125, 152] shows that different independently trained networks can be connected by non-linear pathways that remain in the low-loss manifold in weight space. Qin et al. [186] speculate that there may be multiple loss basins connected by low-loss non-linear paths. Yun et al. [253] indicate that, in the Bridge network, the output can be obtained by connecting the Bezier curves of the two network parameters in the absence of an actual forward-passing network. Gotmare et al. [72] show that non-linear mode connectivity applies widely to networks trained with different optimizers, data augmentation strategies and learning rate schedules. Furthermore, Lubana et al. [152] explain the principle of mode connectivity by mechanistic similarity: two models are mechanistically similar if they make predictions using the same properties (e.g., shape or background) of the input. The mechanistic similarity of the induced models is related to the LMC of two minimizers (minima). There is no LMC between mechanistically dissimilar minimizers, but mode connections can be made via relatively non-linear paths. The representative approach for finding a non-linear path [66] is similar to that of LMC, as in Eq. (7): \[\min_{w}\ell(w)=\min_{w}\mathbb{E}_{t\sim q_{w}(t)}\left[\mathcal{L}\left(\phi_{w}(t)\right)\right], \tag{7}\] where \(q_{w}(t)\) is the distribution for sampling the models along the path. Moreover, Draxler et al. [46] use AutoNEB [122] and a minimum spanning tree (MST) to generate the approximation of \(\phi^{*}\) connecting the minima of networks on CIFAR-10 and CIFAR-100. AutoNEB connects two solutions and updates the pivot after each iteration until the AutoNEB tunnel approaches the optimal low-loss path \(\phi^{*}\). Nevertheless, the approximation of \(\phi^{*}\) may fall into a local-minimum tunnel with unreasonably high saddle-point losses. To sum up, both linear and non-linear paths can result in low test errors. While linearly connected pathways are simple, they can have certain limitations.
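As a toy illustration of the curve-finding recipe of Eqs. (6)-(7) (not taken from any of the cited works), the sketch below trains the control point of a quadratic Bezier curve on a synthetic two-parameter loss whose minima form a ring, so that the straight line between the two endpoint solutions crosses a high-loss region while the learned curve does not:

```python
import numpy as np

# Toy loss with a ring of minima: the straight line between the two
# solutions below crosses a high-loss region at the origin.
def loss(theta):
    return (np.sum(theta ** 2) - 1.0) ** 2

w1 = np.array([1.0, 0.0])    # one "trained" solution
w2 = np.array([-1.0, 0.0])   # another solution elsewhere on the ring

def bezier(t, w):            # quadratic Bezier curve, Eq. (6)
    return (1 - t) ** 2 * w1 + 2 * t * (1 - t) * w + t ** 2 * w2

# Minimise E_{t~U(0,1)}[L(phi_w(t))] (Eq. (7) with a uniform q) over the control point w.
rng = np.random.default_rng(0)
w = np.array([0.0, 0.1])     # control point initialised near the straight line
lr = 0.05
for _ in range(5000):
    t = rng.uniform()
    phi = bezier(t, w)
    dL_dphi = 4.0 * (np.sum(phi ** 2) - 1.0) * phi   # gradient of the toy loss
    dphi_dw = 2 * t * (1 - t)                        # scalar, from Eq. (6)
    w -= lr * dL_dphi * dphi_dw

ts = np.linspace(0, 1, 11)
print("max loss on straight line:", max(loss((1 - t) * w1 + t * w2) for t in ts))
print("max loss on learned curve:", max(loss(bezier(t, w)) for t in ts))
```

The learned curve typically stays at much lower loss than the straight segment, mirroring the behaviour reported for real networks in the works cited above.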
As for non-linear mode connectivity, it is difficult to calculate the gradient on some non-linear path such as Bezier curve. ### Mode Connectivity in Subspace Previous work of mode connectivity [54, 56, 66] focuses on low-loss tunnels in weight space without explicitly addressing other dimensional structure. This subsection explores the mode connectivity and model training in subspace of another dimension rather than in a native parameter space. Subspace in machine learning typically describe linear structures generated by vectors in the initial vector space. There are also concepts of non-linear subspace, such as nonlinear dimensionality reduction and manifold learning [98]. Standard neural network training is performed on a full parameter space \(\mathbb{R}^{D}\). Limiting the optimization to a random low-dimensional affine subspace (e.g., low-dimensional hyperplanes and hyperspheres, etc.) also leads to the similar results as full-space optimization in some cases [57, 132], which lay the foundation for mode connectivity in subspace. Definitely, mode connectivity in oriented subspace constrain the representation ability of the model and the value range of the weights, so as to overcome the over-fitting problem of model fusion. Recent work attempts to implement mode connectivity in different subspace. Fort et al. [56] extend the concept of low-loss connectors (tunnels) between solutions to \(m\)-dimensional connectors (\(m\) is smaller than the dimension of full parameter space). Randomly initialized points that are not on the same wedge (i.e., a union of m-dimensional manifolds) can always pass through the intersection of their wedges, thus building a low-loss path between the different minima, as shown in Figure 2.Based on the speculation, the \(m\)-dimensional hyperplanes are constructed on the piece-wise linear interpolation between the points, in which the low-loss connectors can be found. Benton et al. [11] propose simplicial point-wise random optimization (SPRO) to connect models through a multi-dimensional manifold.\(\mathcal{K}\left(S_{(w_{0},\varepsilon_{0})},S_{(w_{1},\varepsilon_{0})}\right)\) denote simplicial complex composed of disjoint \(0\)-simplexes. SPRO adds the join points \(\varepsilon_{i}\) to connect 0-simplexes in the complex iteratively so as to keep the loss low within the simplicial complex. It obtains a complex \(\mathcal{K}\) by sharing multiple \(\varepsilon_{i}\). When a join point \(\varepsilon_{k}\) connects the two modes, the pathway of complex \(\mathcal{K}\left(S_{(w_{0},\varepsilon_{0})},...,S_{(w_{n},\varepsilon_{0})}\right)\) can be found by previous method [66]. When some joint points connects multiple modes, the solution to \(\mathcal{K}\left(S_{(w_{0},\varepsilon_{0},\varepsilon_{1},\varepsilon_{2})},...,S_{(w_{n},\varepsilon_{0},\varepsilon_{1},\varepsilon_{2})}\right)\) is similar to the above work [56]. For narrow architectures of networks, geodesic optimization [215] finds a low-loss pathway connecting the solutions where general tunnels of mode connectivity can not pass through a region of high loss. The mode connectivity pathways in weight space is associated to the geodesics \(\gamma\) (i.e., shortest paths in the space of parameterized distributions, which is regarded as a Riemannian manifold with fisher information matrix \(f_{ij}\)). 
The geodesics \(\gamma\) is obtained by minimizing the loss \(\mathcal{L}(\gamma)\), which is equivalent to the integral of the square root Jensen-Shannon Divergence (JSD) [35] as Eq.(8): \[\mathcal{L}(\gamma)=\int_{t}\sqrt{\frac{d\gamma^{i}}{dt}f_{ij}\frac{d\gamma^{ j}}{dt}}dt=\sqrt{8}\int_{\gamma}\sqrt{d\mathrm{JSD}} \tag{8}\] Further, the mode connectivity in subspace is affected by the properties of the subspace, such as the relationship between dimension of the plane and the inherent dimension specific to the problem, the radius in the weight space, the dimensions of the hyperplane [132], etc. Moreover, Fort et al. [55] explore training tracks and subspace sampling (e.g., dropout, diagonal Gaussian, low-rank Gaussian and random subspace), which further complement relevant work of mode connectivity in subspace. In addition, recent work [42] inspires us to explore the mode connectivity in Pareto manifold to be applied to multi-task learning. In sum, the trained solutions can be found in both the full parameter space and the random low-dimensional hyperplane, as long as the points are distributed densely enough in most cases. ### Discussion In summary, mode connectivity provides a more novel and flexible perspective for deep model fusion. The training of neural networks tends to fall into local optima, which leads to degradation of performance. On the basis of model connectivity, we can find other models with better performance and use that as a starting point for further optimization and fusion. We can use the already trained model to move in the parameter space to reach the new target model, which can save time and computing overhead, and is suitable for situations where data is limited. Nevertheless, additional complexity and flexibility may be introduced \begin{table} \begin{tabular}{l l l l} \hline \hline Connectors & Methods & Ref. & Introduction \\ \hline 2-dim path & line segment & [71] & produce big error \\ & GDSS & [61] & approximate the geodesic \\ & AutoNEB & [46, 122] & path via GDSS \\ & minimize the & [46, 122] & minimize MST to obtain approximation of \(\phi^{*}\) \\ & expectation & [66] & representative approach that connects solutions in a simple way \\ & RMC & [229] & enhance the robustness of DNNs \\ & & against different perturbations \\ \hline N-dim space & MPO & [36, 206] & obtain substantial memory savings \\ & N-dimensional connectors & [56] & connect low-dimensional wedges \\ & train parametric subspace & [238] & learn the parameters of \\ & & lines, curves and simplexes \\ & SPRO, ESPRO & [11] & find simplexes and simplicial \\ & geodesic optimization & [215] & complexes to seek connectors \\ & & speculate the geodesics in \\ & & the curved distribution space \\ \hline \hline \end{tabular} \end{table} Table 2: The methods of finding tunnels between different local minima. to increasing the risk of overfitting when connecting different models. Therefore, the relevant hyperparameters and degree of variation should be carefully controlled. Also, mode connectivity requires fine-tuning or parameter changes, which can increase training time and resource consumption. In summary, model connectivity has many advantages in model fusion, including helping to overcome local optimal problems, providing new perspectives to explain network behavior, etc. In the future, mode connectivity is expected to help understand the inner mechanism of neural networks and provides guidance for more efficient deep model fusion designs in the future. 
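As a complement to the path-construction methods in Table 2, the barrier that LMC requires to be small can be estimated directly by evaluating the loss along the straight line between two solutions; the sketch below is a minimal illustration, assuming `loss_fn` returns a scalar loss for a flattened weight vector.

```python
import torch

def linear_barrier(w1, w2, loss_fn, num_points=11):
    """Max over t of L((1-t) w1 + t w2) minus the linearly interpolated endpoint losses."""
    l1, l2 = float(loss_fn(w1)), float(loss_fn(w2))
    barrier = 0.0
    for t in torch.linspace(0.0, 1.0, num_points):
        interp_loss = float(loss_fn((1 - t) * w1 + t * w2))
        baseline = (1 - t.item()) * l1 + t.item() * l2   # straight line between endpoint losses
        barrier = max(barrier, interp_loss - baseline)
    return barrier
```

A barrier close to zero indicates that the two solutions are approximately linearly mode connected; a large barrier suggests that a non-linear path or a prior re-basin step (Section 3) is needed.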
## 3 Alignment Due to the randomness of channels and components from diverse networks, the active components of the networks interfere with each other [204]. So unaligned weighted averages could ignore correspondence between units from diverse models and damage useful information. For example, there is a relationship between two neurons in different models that could be completely different but functionally similar. Alignment matches the units of different models so as to obtain better initial conditions for deep model fusion. It aims to make multiple models have smaller differences and, thus enhancing the deep model fusion effects. Also, alignment can be regarded as a combinatorial optimization issue in essence. In this section, we introduce a representative mechanism "Re-basin", which delivers solutions to individual basins so as to merge models with better original conditions. Following this, we divide the alignment into two types "Activation matching" and "Weight matching" depending on whether the aligned target is data-driven as Table 3. ### Re-basin Before introducing the specifics, we illustrate the permutation symmetry and Re-basin, which is the basic premise of alignment. Generally speaking, the number of saddle points and local optima can increase exponentially with the number of parameters even for shallow neural networks [10, 66]. It is discovered that there are invariances in training that leads to the same representation of some points among these local optima [22, 81, 140]. Specifically, the function of the network will not change if the units of hidden layer are exchanged by permutation, which is referred to as permutation symmetry [43, 50]. Formally, a \(\ell\)-layer function of DNN \(f^{(\ell)}(x,w)=\sigma(W^{(\ell-1)}f^{(\ell-1)}+b^{(\ell-1)})\) can be described as Eq.(9) [3]: \[f^{(\ell)}(x,w)=\mathbf{P}^{T}\sigma\left(\mathbf{P}W^{(\ell-1)}f^{(\ell-1)}+\mathbf{P}b^ {(\ell-1)}\right), \tag{9}\] \begin{table} \begin{tabular}{l l l l} \hline \hline & Alignment & Methods & Ref. \\ \hline Activation & metrics & coefficient of correlation & [140, 218] \\ matching & & mutual information & [140] \\ & & \(\ell\)2 distance & [3, 204, 218] \\ & pre \(\&\) post & pre-activation & [204, 218] \\ & activation & post-activation & [140, 218] \\ \hline Weight & metrics & Wassertain distance & [4, 204, 232] \\ matching & & Euclidean distance & [3, 178] \\ & graph matching & bipartite matching & [127, 140] \\ & other alignment & graph matching & [142] \\ & & Bayesian & [227, 254] \\ & & Sinkhorn Re-basin & [178] \\ & & SA & [50] \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of representative alignment methods. where \(\mathbf{P}\) denotes the permutation matrix. We can obtain the functional equivalent model \(f(x;w)=f\left(x;\pi(w)\right)\) by rearranging the input. On the basis of permutation symmetry, solutions from diverse area in weight space can generate equivalent solutions. A equivalent solution is located in a same region as the original solution with low-loss barrier (basin), as shown in Figure 2, which is referred to as "Re-basin" [3] as Eq.(10): \[\text{Re-basin: }f^{(\ell)}(x,w)=\sigma\left(P^{(\ell)}W^{(\ell)}(P^{(\ell-1)})^ {T}f^{(\ell)}+P^{(\ell)}b^{(\ell)}\right) \tag{10}\] Once the optimal permutation matrix \(\mathbf{P}^{*}\) is obtained, it is theoretically possible to implement model fusion: \(W=\lambda_{1}W_{1}^{(\ell)}+\lambda_{2}P^{(\ell)}W_{2}^{(\ell)}(P^{(\ell-1)})^ {T}\). 
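The permutation symmetry of Eqs.(9)-(10) can be verified numerically in a few lines; the following NumPy sketch permutes the hidden units of a small ReLU MLP (and un-permutes the next layer) and checks that the function is unchanged. The layer sizes and names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)   # input -> hidden
W2, b2 = rng.normal(size=(4, 16)), rng.normal(size=4)    # hidden -> output

def f(x, W1, b1, W2, b2):
    h = np.maximum(W1 @ x + b1, 0.0)                      # ReLU hidden layer
    return W2 @ h + b2

P = np.eye(16)[rng.permutation(16)]                       # random permutation matrix

# "Re-basin" the hidden layer: P W1, P b1 and W2 P^T, as in Eq. (10).
W1p, b1p, W2p = P @ W1, P @ b1, W2 @ P.T

x = rng.normal(size=8)
assert np.allclose(f(x, W1, b1, W2, b2), f(x, W1p, b1p, W2p, b2))  # same function
```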
Compared with mode connectivity, Re-basin tends to transport the points into a basin by permutation instead of low-loss tunnels. At present, alignment is a representative approach of Re-basin[3, 178]. However, how to efficiently search for all possibilities of permutation symmetry so that all solutions point to the same basin is a current challenge. Permutation symmetries imposed by these invariances help us understand the structure of loss landscapes better [22, 66]. The invariances also can be seen as the source of saddle points in loss landscapes [14]. Godfrey et al. [68] investigate the algebraic structure of symmetries in neural networks and how this structure manifests itself in loss landscape geometry. Brea et al. [14] introduce permutation point in high-dimensional plateaus, at which the neurons can be exchanged without increasing losses or parameter jumps as Figure 3. Conduct gradient descent on the loss and adjust the parameter vectors \(\vartheta_{m}\) and \(\vartheta_{n}\) of neuron \(m\) and \(n\), until the vectors reach the permutation point. At this time, the parameter configuration is called permutation point, and the parameter vectors and function of the two neurons are the same. Furthermore, Tatro et al. [218] explore the permutation symmetry of the nonlinear mode connectivity. Benzing et al. [12] speculate that two random initialization of a network after permutation can lead to a good performance. Furthermore, the alignment method does not always generate good low-loss connections between solutions due to variance collapse of activations. REnormalizing Permuted Activations for Interpolation Repair (REPAIR) [111] mitigates the variance collapse by rescaling the preactivation of networks, which eliminate the 90\(\%\) barrier for ResNet-18 on CIFAR-10 after alignment. ### Activation Matching In this subsection, based on permutation symmetry, we focus on the matching of activation values. The initial models for fusion can be improved by reducing the differences in activation. Minimizing the cost functions between activations is a representative way to calculate \(\mathbf{P}^{*}\), which can be transformed into assignment Figure 3: **Left:** general alignment process. Model \(A\) is transformed into model \(A_{p}\) by reference to model \(B\). Then the linear combination of \(A_{p}\) and \(B\) produces C. **Right:** adjust the parameter vectors of the two neurons \(\vartheta_{m}\),\(\vartheta_{n}\) in different hidden layers are close to the replacement point. At the replacement point, [14], \(\vartheta^{\prime}_{m}=\vartheta^{\prime}_{n}\), and the two neurons compute the same function, which means that two neurons can be exchanged. problems, such as linear assignment problem (LAP) or quadratic allocation problem (QAP), etc. They can be solved by Hungarian algorithm or Sinkhorn algorithm. The common cost functions \(\mathcal{C}\) used in alignment are cross-correlation [218] as Eq.(11), mutual information (information entropy) [140] as Eq.(12), \(\ell 2\) distance [3] as Eq.(13), KL divergence, Wasserstein distance, etc. 
\[\mathcal{C}(A_{m},A_{n})_{cor}=\mathbb{E}\left[\left(A_{m}-\mathbb{E}\left[A_{ m}\right]\right)\left(A_{n}-\mathbb{E}\left[A_{n}\right]\right)\right]/\xi_{m} \xi_{n}, \tag{11}\] \[\mathcal{C}(A_{m},A_{n})_{info}=\sum_{a\in A_{m}^{(\mathbf{W}_{1})}}\sum_{b\in A_ {n}^{(\mathbf{W}_{2})}}p(a,b)\log\left(\frac{p(a,b)}{p(a)p(b)}\right), \tag{12}\] \[\mathcal{C}(A_{m},A_{n})_{\ell 2}=\left\|A_{m}^{(\mathbf{W}_{1})}-\mathbf{P}A_{n}^{(\mathbf{ W}_{2})}\right\|^{2}, \tag{13}\] where \(A_{m}\) denotes the activation of unit \(m\) with standard deviation \(\xi\). \(p(a)\) denotes marginal probability distributions. In addition, it is discovered that using post-activation is better than using pre-activation in some cases [218]. Besides the cost functions, Singh et al. [204] use the optimal transport (OT) and Wasserstein barycenter to match the activations of different neural networks. The transport map \(\mathbf{T}\in\mathbb{R}^{(n\times m)}\) transports neurons of \(\mathbf{W}_{1}\) optimally to neurons of \(\mathbf{W}_{2}\) in the same layer. The permutation matrix and \(\mathbf{T}\) have a similar function, which can be obtained as Eq.(14): \[\mathbf{T}\leftarrow\mathrm{OT}\left(\mu,\nu,d_{s}\right), \tag{14}\] where \(d_{s}\) denotes the support measure (reflect the \(\ell 2\) distance between activations here). \(\nu\) and \(\mu\) are the probability measure. This kind of methods based on OT lay the foundation for some recent work [3, 4, 178]. Nevertheless, if the alignment problem is simply defined as linear problems, the second-order proximity of weights and the abundant edge information between channels could be ignored [142]. ### Weight Matching Instead of matching activation, we could alternatively align the models based on weight without data distribution. First, the basic approaches of weight matching is also based on minimizing the cost function to obtain \(\mathbf{P}^{\star}\). Singh et al. [204] use the weights of the incoming edges to calculate support and probability measures to obtain the transport map \(\mathbf{T}\) as Eq.(14). Ainsworth et al. [3] arrange the rows and columns of the modes to minimize the \(\ell 2\) distance between the weight vectors (restricted by ordinary least squares) as Eq.(15): \[\mathcal{C}(w_{1},w_{2})_{\ell 2}=\left\|\mathrm{vec}\left(w_{1}\right)- \mathrm{vec}\left(\pi\left(w_{2}\right)\right)\right\|^{2}. \tag{15}\] It results in the sum of bilinear linear assignment problem (SOBLAP), which can be divided into sub-problems and solved by LAP. Different from activation matching, weight matching is not affected by data distribution. It means that all \(\mathbf{P}\) need to be obtained by LAP, which is a complicated issue in essence. And it is difficult to leverage the gradient-based optimization. Pena et al. [178] extend the scope of cost function to all differentiable objectives, such as a midpoint as Eq.(16) and random point between \(w_{1}\) and \(w_{2}\) as Eq.(17): \[\mathcal{C}_{mid}\left(w_{1},w_{2}\right)=\mathcal{C}\left(\frac{w_{1}+\pi \left(w_{2}\right)}{2}\right), \tag{16}\] \[\mathcal{C}_{random}\left(w_{1},w_{2}\right)=\mathcal{C}\left[(1-\alpha)w_{1} +\alpha\pi\left(w_{2}\right)\right], \tag{17}\] where \(\alpha\sim U(0,1)\). Moreover, Sinkhorn operator \(S_{\tau}\) is added to the LAP process and Sinkhorn Re-basin is shown as Eq.(18): \[f^{(\ell)}(x,w)=\sigma\left[S_{\tau}\left(P^{(\ell)}\right)W^{(\ell)}S_{\tau }\left(\left(P^{(\ell-1)}\right)^{T}\right)f^{(\ell-1)}+S_{\tau}\left(P^{(\ell )}\right)b^{(\ell)}\right]. 
\tag{18}\] It solves non-differentiable problems and can be applied to more scenarios, such as FL [160]. Based on Beta-Bernoulli Process (BBP) [219], Yurochkin et al. [254] max the posterior of random variables \(p_{i}\) that match neurons at any batch and the global neurons. Hungarian algorithm can be used to solve this problem to obtained \(P_{i}\). In addition to minimizing the cost function, Wang et al. [227] regard the units of the model as a random permutation of global nodes based on the Beta-Bernoulli Process (BBP) [219] The permutation matrix can be obtained by BBP-MAP [254]. A simulated annealing (SA)-based method [50] searches for the valid permutations in the weight space Re-basin. Due to the high cost, it unrealistic to be applied, especially for large models. Stoica et al. [211] calculate merge matrix \(P_{i}\) and unmerge matrix \(\bar{P}_{i}\) to fuse the models and unmerge operations, which can be applied within the model or across the models. Instead of calculating the optimal matrix, Ainsworth et al. [3] optimize the approximate equivalent model \(\bar{w_{2}}\) iteratively and keep looking for the closest equivalent model until convergence, which minimizes \(\mathcal{L}\) as Eq.(19): \[\min_{\bar{w}_{2}}\mathcal{L}\left(\frac{1}{2}\left(w_{1}+\mathrm{proj}\left( \bar{w}_{2}\right)\right)\right), \tag{19}\] where projection operations can be solved by straight-through estimator (STE), which is expensive in practice. Based on Gromov-Wasserstein barycenter (GWB) [179], Akash et al. [4] update the coupling matrix \(\Pi\) and \(W\) alternately to optimize Gromov-Wasserstein barycenter distance until convergence. Let \(k\) be the number of nodes, the final aligned model can be obtained as Eq.(20): \[W^{\ell}\gets k^{\ell}k^{\ell-1}\frac{1}{|\!\!|\mathcal{L}_{k^{-1}}|\!\!| \mathcal{L}_{k^{\ell}}^{T}}\frac{1}{n}\sum_{i=1}^{n}\Pi_{i}^{\ell}W_{i}^{\ell }\left(\Pi_{i}^{\ell-1}\right)^{\star T} \tag{20}\] Moreover, recent research [227, 254] proposes to alternate for a number of iterations between finding an alignment and retraining to minimize the loss barriers between SGD minimas. Furthermore, another significant approach of alignment is graph matching (GM) [150], which aims to match nodes in the graph using structural characteristics in the graph. Since network channels and weight can be treated as nodes and edges, the alignment issues could be turned into GM [142, 247]. General approaches could use Bipartite semi-matching or Bipartite matching [127, 140] to solve GM. Liu et al. [142] propose graduated assignment model fusion (GAMF) [230] uses second-order similarity of model weights to align neurons build on gradient assignment as Eq.(21): \[\max_{P}=\sum_{i=0}^{d_{\Sigma}-1}\sum_{j=0}^{d_{\Sigma}-1}\sum_{a=0}^{d_{ \Sigma}-1}\sum_{b=0}^{d_{\Sigma}-1}\mathbf{P}_{[i,j]}\mathbf{K}_{[i,j,a,b]}\mathbf{P}_{[a, b]}, \tag{21}\] where \(d_{\sum}\) denotes the sum of dimensions,\(\mathbf{K}\) denotes affinity tensor that calculate the affinity between the edges \((i,a)\) and \((j,b)\). The problem can be transformed into QAP by unifying the relationships of nodes and edges into a incidence matrix. In contrast, multi-graph matching (MGM) [108, 130, 246] ensures that the matching of two graphs is not affected by another graph, and it applies to the alignment of multiple models. Further, Uriot et al. [222] explore merging models that take into account more possible permutations. 
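Combining the ideas above, a minimal sketch of alignment-then-averaging for a single hidden layer is given below: the hidden units of model B are matched to model A with the \(\ell 2\) activation cost of Eq.(13), solved as a linear assignment problem with the Hungarian algorithm, and the aligned weights are then averaged in the merged form \(W=\lambda W_{1}+(1-\lambda)PW_{2}P^{T}\) discussed earlier. The shapes, names and use of pre-computed activations are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_units(acts_a, acts_b):
    """acts_* are (num_samples, num_hidden) activations of the same layer.
    Returns perm with perm[i] = hidden unit of model B matched to unit i of model A."""
    cost = ((acts_a[:, :, None] - acts_b[:, None, :]) ** 2).sum(axis=0)  # Eq. (13) per unit pair
    _, perm = linear_sum_assignment(cost)                                # Hungarian algorithm
    return perm

def merge_aligned(params_a, params_b, perm, lam=0.5):
    """Permute B's hidden units to match A (re-basin), then average the aligned weights."""
    W1a, b1a, W2a = params_a
    W1b, b1b, W2b = params_b
    W1b, b1b, W2b = W1b[perm], b1b[perm], W2b[:, perm]   # apply P W1, P b1, W2 P^T
    return (lam * W1a + (1 - lam) * W1b,
            lam * b1a + (1 - lam) * b1b,
            lam * W2a + (1 - lam) * W2b)
```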
### Discussion Alignment makes the models more similar by adjusting the parameters of the models, which can improve the information sharing between the models, and thus improve the generalization ability of the fused model. In addition, alignment helps improve the performance and robustness of the model on complex tasks. However, alignment methods face the problems of slow combinatorial optimization. Alignment requires additional computational overhead to adjust the model's parameters, which can lead to a more complex and time-consuming training process, especially in large depth models [142, 204]. In summary, alignment can improve the consistency and overall effect between different models. With the diversification of DL application scenarios, alignment will become one of the key methods to optimize deep model fusion, improve generalization ability. In the future, alignment could play a role in areas such as transfer learning, domain adaptive [63], knowledge distillation, etc. For example, alignment can reduce the differences between source and target domains in transfer learning, improve the learning on new domains. ## 4 Weight Average "Weight average" combines multiple weights of networks for the final model with better performance, robustness and generalization. It is also known as vanilla average [204], weight summation [131], as shown in Eq.(22): \[\sum\lambda_{i}W_{i}, \tag{22}\] where each model is assigned a weighted parameter \(\lambda_{i}\) that controls how much it contributes to the fused model. However, different from alignment or mode connectivity, the pre-conditions of WA are relatively strict. For example, the original models must share part of the training trajectory or located in the same basin [99, 133], etc. It means that the final model can benefit from all models when the weights are similar enough but have certain differences [110]. In a flat basin, the solutions tend to demonstrate good performance. Conversely, points in narrow regions are easily accessible to energy-barriers, resulting in increased losses [167]. Previous sections focus on transporting solutions from different regions to the same basin through mode connectivity or alignment. This section will focus on the fusion of convex combinations of solutions in the same basin, which makes the merged solution closer to the midpoint (optima) of the basin with better generalization performance than endpoints, such as SWA [99], model soup [239], etc. The models discussed in this section includes the following cases: * Multiple similar models with certain differences. * Multiple models after appropriate fine-tuning on foundation models (e.g., model soup, model arithmetic, etc.). * Multiple checkpoints from networks with the same architectures and sharing part of the training trajectory (e.g. SWA [99], tail average [166], etc.). Accordingly, in this section, we review two-fold approaches of weight average "Weight average" and "Average in subspace". Next, we introduce representative approaches of WA "Model soup", "Model arithmetic" and "SWA". The representative approaches are listed in Table 4. ### Weight Average Because of the high redundancy of neural network parameters, there is usually no one-to-one correspondence between weights of different neural networks. Accordingly, there is usually no guarantee that WA will perform well by default. For trained networks with widely varying weights, the vanilla average performs poorly [204]. 
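Vanilla weight averaging as in Eq.(22) is simply a per-parameter weighted sum; a minimal PyTorch-style sketch over `state_dict`s (assuming identical architectures and floating-point parameters) is shown below.

```python
import torch

def weighted_average(state_dicts, lambdas=None):
    """Vanilla WA of Eq. (22): W = sum_i lambda_i * W_i, applied key by key."""
    n = len(state_dicts)
    lambdas = lambdas if lambdas is not None else [1.0 / n] * n
    return {k: sum(lam * sd[k].float() for lam, sd in zip(lambdas, state_dicts))
            for k in state_dicts[0]}

# Example with two toy "models" represented directly as state_dicts.
sd1 = {"w": torch.tensor([1.0, 2.0]), "b": torch.tensor([0.0])}
sd2 = {"w": torch.tensor([3.0, 0.0]), "b": torch.tensor([1.0])}
print(weighted_average([sd1, sd2]))   # {'w': tensor([2., 1.]), 'b': tensor([0.5000])}
```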
From a statistical point of view, WA allows the individual model parameters in the model to be controlled, which reduces the variance of the final model, resulting in a reliable effect on regularization properties and output result [77, 166]. First, the weights of neural networks could be merged directly. Generally speaking, the linear interpolation of two well-trained model in different regions does not necessarily generate a well-performing model because of the nonlinear structure of neural networks [167]. However, for the solutions before and after fine-tuning are usually within a basin [95, 240], the linear interpolation of the solutions could improve he accuracy of fused model and the robustness of the distribution shift as Eq.(23): \[W=(1-t)\cdot W_{0}+t\cdot W_{ft}. \tag{23}\] In addition to simple linear interpolation, the fusion of weights could be transformed into another mathematical form of aggregation. Matena et al. [159] propose Fisher merging, which regards model fusion as a approximately maximization of the joint likelihood of the posterior distribution over parameters. It use the \begin{table} \begin{tabular}{l l l l} \hline \hline Method & Method & Ref. & Introduction \\ \hline choose the best & [239] & \(\operatorname*{argmax}_{i}ValAcc(W_{i}))\) & simple but without \\ vanilla average & [204] & \(W=\sum\lambda_{i}W_{i}\) & often have bad performance \\ Fisher & [159] & \(W=\frac{\sum\lambda_{i}f_{i}w_{i}}{\sum\lambda_{i}f_{i}}\) & maximize joint likelihood \\ & & of the posterior distribution \\ RegMean & [109] & \(W=\left(\sum X_{i}^{T}X_{i}\right)^{-1}\sum\left(X_{i}^{T}X_{i}W_{i}\right)\) & minimize differences \\ MLP fusion & [232] & \(W=\sum\left[\sigma\left(\mathbf{X}W_{1,\cdot,i}+\mathbf{b}_{1,i}\mathbf{1}\right)W_{2,i, \cdot}\right]+\mathbf{1}\mathbf{b}_{2}^{T}\) & cluster the sub MLPs via NTK \\ BTM & [135] & \(W=\sum\lambda_{i}W_{i}\) & \\ PAPA & [110] & \(W\leftarrow\) Averaging \((W,N)\) & \\ ratatouille & [188] & \(W=\sum\lambda_{i}\left(w_{i},\phi_{featurizer}\right)\) & \\ Lookahead & [259] & \(W_{slow,t+1}=ema(W_{fast})+(1-\alpha)^{t}W_{slow,0}\) & \\ SMA & [9] & \(W=\frac{t-t_{0}}{t-t_{0}+1}\cdot W_{t-1}+\frac{1}{t-t_{0}+1}\cdot W_{t}\) & \\ WiSE-FT & [240] & \(W=(1-\lambda)\cdot W_{0}+\lambda\cdot W_{ft}\) & \\ EWC & [131] & \(W=\frac{H_{1}W_{1}+H_{2}W_{2}}{H_{1},+H_{2}}\) & \\ gradient & [65] & \(W=\sum\lambda_{i}W_{i}-\frac{1}{i}\nabla X_{gradient}\) & \\ PAINT & [95] & \(W_{\text{patch}}=\left(1-\sum\lambda_{i}\right)W_{0}+\sum\lambda_{i}W_{\text{ ft}}\) & \\ HiPro & [147] & \(w_{i}=\frac{\sum w(\mathbf{p}_{i})\mathbb{I}(\tau_{i}\in\mathcal{T}_{i})}{\sum \mathbb{I}(\tau_{i}\in\mathcal{T}_{i})}\) & \\ EWR & [37] & \(W=\frac{\lambda_{0}\cdot f_{W_{0}}\cdot W_{0}-\lambda_{1}\cdot f_{\tau_{1}} \cdot\tau_{1}+\lambda_{2}\cdot f_{\tau_{2}}\cdot\tau_{2}}{\lambda_{0}\cdot f_ {W_{0}}+\lambda_{1}\cdot f_{\tau_{1}}+\lambda_{2}\cdot f_{\tau_{2}}}\) & \\ experts merging & [102] & \(W=W_{\text{pre}}+(\sum\lambda_{i}\tau_{i})\) & \\ \hline \hline \end{tabular} \end{table} Table 4: Summary of representative methods and formulas of weight average. Fisher information \(F_{i}\) of the model as the posterior precision matrix to perform a Laplacian approximation, so as to obtain the Gaussian approximation \(\log p\left(w\mid w_{i},F_{i}\right)\) of the posterior distribution as Eq.(24): \[\max_{w}\sum\lambda_{scale}\log p\left(w\mid w_{i},F_{i}\right), \tag{24}\] where \(\lambda_{scale}\) denotes model scalar hyperparameters. Jin et al. 
[109] tend to minimize the \(\ell 2\) distance between the merged model and other multiple models trained on different datasets \(\left\langle X_{i},Y_{i}\right\rangle\), which is called Regression Mean (RegMean). Accordingly, the optimization problem can be converted into linear regression problem as Eq.(25): \[\min_{W}\left\|W^{T}X_{1}-W_{1}^{T}X_{1}\right\|^{2}+\left\|W^{T}X_{2}-W_{2}^{T }X_{2}\right\|^{2}. \tag{25}\] Compared with Fisher average [159], RegMean obtain the inner product matrix of the linear layer input in the forward pass process, which improves the efficiency of the operation. Besides, Wei et al. [232] regard each layer of multi-layer perceptrons (MLPs) as the distribution of corresponding weights. The sub-MLPs can be clustered by neural tangent kernel (NTK) approximating, which can be solved with GWB [179]. Moreover, other works choose to average the weights of multiple experts[135] or leverage Bayesian algorithm [254] to improve the generalization and efficiency. Also, some recent work focuses on increasing the diversity of models with well-behaved and varieties of weights. PopulAtion Parameter Averaging (PAPA) [110] start at the same initialization and train each models on a slightly different data set (e.g., data orderings, augmentations, regularizations, etc.), averaging these models every few epochs. It is equivalent to training a larger batch size, helping to improve the generalization of the model [86]. Further, another possible interpretation is that PAPA fuse the models under better initial conditions by improving the cosine similarity between networks (29\(\%\)-61\(\%\) to 95\(\%\)-99\(\%\)), which is similar to some work on alignment [3]. Based on the idea of maximizing the diversity of weights, Rame et al. [188] fine-tune the base model for multiple times on different auxiliary tasks and re-fine-tune these auxiliary weights so as to obtain a variety of weights. Gao et al. [65] utilize development data and softmax normalized logarithm with temperature to adjust the parameters. The models are re-parameterized and updated iteratively to ensure normalization, which could reduce overfitting and increase robustness. In addition, the mean of gradient information \(\nabla X_{gradient}\) could be used to optimize the WA [65]. Let \(\eta\) be step size. The merged model is shown as Eq.(26): \[W=\sum\lambda_{i}W_{i}-\eta\nabla X_{gradient}. \tag{26}\] Next, from the perspective of iterative averaging, we can average the weights at different times during the training process of the same or architecturally identical model [149, 65, 131]. It reduces the variance and updates the model more smoothly but need to share a portion of the training history [207]. Early iterative average has the problem of convergence rate [183, 194], especially for high-dimensional problems. Then, geometric Polyak-Ruppert [166] use the weight average instead of uniform average, and its weights decay geometrically. It uses regularization properties (control deviation characteristics of corresponding SGD estimators) to produce stable fusion results. Geometric Polyak-Ruppert helps to capture the overall trend of the gradient when training conditions are poor. In contrast, tail average [101] is more appropriate when data conditions are good. Tail average average the weights of each iteration during the last period of the training, which can prevent large fluctuations of parameter in the late stage. 
When the model is close to convergence, the tail of the trajectory may contain information closer to the real gradient. Moreover, many factors (e.g., decaying step size [183], constant step size [165], form of linear interpolation, etc.) in iterative averaging affect the final result. Further, checkpoint average [226, 225, 91, 149] uses checkpoints from the same training run. Nevertheless, simple coordinate-wise weight average may result in poor performance. Hierarchical aggregation improves model performance by combining parameters from multiple models at different layers or structures. The network architecture suitable for a specific aggregation approach has certain limitations [159, 254], so recursively processing layers with matching averages may affect the final performance. Wang et al. [227] propose a hierarchical aggregation scheme: the server obtains the first-layer weights of the model and broadcasts them to the clients, which continue to train all the layers with the matched layers frozen; the procedure is repeated layer by layer until the last layer is aggregated. Hierarchical Prompt learning (HiPro) [147] constructs a hierarchical task tree and averages the classifier weights generated from the global prompt and individual prompts \(\mathbf{p}_{i}\). The averaged classifier weights on the \(i\)-th task \(\tau_{i}\) are shown in Eq.(27): \[W_{i}=\frac{\sum W\left(\mathbf{p}_{i}\right)\mathbb{I}\left(\tau_{i}\in\mathcal{T}_{i}\right)}{\sum\mathbb{I}\left(\tau_{i}\in\mathcal{T}_{i}\right)}, \tag{27}\] where \(\mathbb{I}\) is the indicator function. Its layer-wise structure helps to gain knowledge of diverse granularity. Some other work [186, 203] proposes layer-wise, module-wise and matrix-wise structures of parameter division, which reduce the cost of calculation and storage and inspire more directions for WA. Further, WA is often used in weight scaling rules, which average the predictions over the distribution of the weights [164, 209]. To ensure the efficiency of model averaging, Akhlaghi et al. [5] propose that activation functions should restrict postsynaptic activity to a limited range (e.g., sigmoid, hyperbolic tangent, etc.). Leontev et al. [131] propose other constraints: the network should generate presynaptic activity in the presence of native features, and the mean of the weights' probability distribution should be zero [13]. In addition, heterogeneous models can be handled approximately by introducing additional zero-valued weights [131]. ### SWA Inspired by Fast Geometric Ensembling (FGE) [66] and checkpoint average [149], Izmailov et al. [99] utilize a constant or cyclical learning rate to average multiple points along the SGD trajectory, which is known as SWA. SWA improves training on a series of important baselines and provides better time scalability. Instead of training a set of collected models like vanilla fusion, SWA trains a single model to find smoother solutions than SGD. In Table 5, we list the approaches related to SWA. Also, SWA can be applied to any architecture or dataset and demonstrates better performance than snapshot ensembles (SSE) [91] and FGE. At the end of each cycle, the SWA model \(W_{SWA}\) is updated by averaging the newly obtained weights with the existing average, as shown in Eq.(28): \[W_{\text{SWA}}\,\leftarrow\,\frac{W_{\text{SWA}}\cdot n+W}{n+1}. \tag{28}\] Nevertheless, SWA can only average points near a local optimum and ultimately obtains a relatively low value rather than accurately approximating the optimum.
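The running average in Eq.(28) can be maintained incrementally; the sketch below (with illustrative names) updates the SWA weights each time a new snapshot is collected, e.g., at the end of a learning-rate cycle. Recent PyTorch releases also provide `torch.optim.swa_utils.AveragedModel` for the same purpose.

```python
import torch

class SWAAverage:
    """Maintains W_SWA <- (W_SWA * n + W) / (n + 1) over collected snapshots (Eq. (28))."""
    def __init__(self):
        self.n = 0
        self.avg = None

    def update(self, state_dict):
        if self.avg is None:
            self.avg = {k: v.detach().clone().float() for k, v in state_dict.items()}
        else:
            for k, v in state_dict.items():
                self.avg[k] = (self.avg[k] * self.n + v.float()) / (self.n + 1)
        self.n += 1
        return self.avg
```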
Also, the final input sample deviation \begin{table} \begin{tabular}{l l l} \hline \hline Method & Ref. & Introduction \\ \hline SWA & [99] & weight can be manually weighted after training \\ EMA & [115, 242] & smoothing model weights \\ EWA & [93] & improve the performance without increasing inference delay and weights \\ SWAG & [155] & approximate Bayesian model averaging in Bayesian DLand achieves the \\ SWAG & [155] & state-of-the-art uncertainty calibration results in various settings \\ SWALP & [248] & match the performance of SGD training with quantized parameters \\ SWAP & [78] & speed up the training of NN by using large batch size \\ SWAD & [21] & improve the OOD generalization performance of DNNs \\ LAWA & [113] & record up-to-date checkpoints at the end of each epoch \\ HWA & [75] & combine online WA and offline WA \\ PSWA & [77] & find high-quality local optima quickly \\ TWA & [137] & conduct subspace training to implicitly adjust \\ & & the averaging coefficients and approach better to the minima \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison of characteristics of Weight Average methods based on SWA could be large or insufficient due to some factors (e.g., poor convergence at early stage, large learning rate, fast weight change rate, etc.), which results in bad overall effect. There is a good deal of work tends to change the sampling schedule of SWA. For example, SWA-Densely (SWAD) [21] uses more dense sampling points to solve the problem of insufficient random weights. Periodic-SWA (PSWA) [77] is initialized during the early stage of the operation of SGD instead of in the late convergence phase like SGD. Latest weight averaging (LAWA) [113] averages only the checkpoints collected at the end of each epoch given the large weight variation during the initial training phase. In Figure 4, we summarize several ways to optimize SWA with different sampling schedules. Some work based on SWA optimizes the polymerization process to gain competitive outcome. SWA in Low-Precision (SWALP) [248] tends to reduce the influence of quantization noise and low learning rate so as to converge to the optima. SWA-Gaussian (SWAG) [155] obtains Gaussian distribution from the points of SWA, then average the Bayesian models sampled from the distribution. Trainable Weight Averaging (TWA) [137] adjusts the fuse solution to better approximate the minimum by projecting the gradient onto the subspace as Eq.(29): \[W_{\mathrm{TWA}}\gets W_{\mathrm{TWA}}-\eta_{l}\mathbf{B}\left(\mathbf{B}^{\top} g\right), \tag{29}\] where \(\mathbf{B}\) denotes the matrix of a set of base vectors. \(\eta_{l}\) is the learning rate. \(g\) is the gradient. TWA could eliminate errors caused by static averaging in full parameters space. Different from the above approaches, Hierarchical Weighted Average (HWA) [75] combines online and offline WA into a common training framework, Online WA is designed to speed up convergence, offline WA tends to increase generalization performance. HWA tends to combines the advantages of both. Similar to SWA, Exponential Moving Average (EMA) [184, 214] is often used to smooth the model weights in order to reduce the noise and volatility of update on weights as Eq.(30): \[W_{EMA}\leftarrow\lambda_{d}W_{EMA}+(1-\lambda_{d})W, \tag{30}\] Figure 4: **Comparison of sampling and learning rate schedule of different SWA related methods. (a) SWA: constant learning rates. (b)SWA: cyclical learning rates c. (c)SWAD: sample densely. 
(d) HWA: leverages both online and offline WA, sampled at different synchronization cycles with a sliding window of length \(h\), i.e. \(\overline{w_{i}}=\frac{\sum_{t=i-h+1}^{i}\overline{w_{t}}}{h}\).** where \(\lambda_{d}\) denotes the decay rate (\(\approx 0.99\)). Some recent work [19] combines KD with EMA, using the EMA weights (e.g., of student models [217] or branches [242]) as teacher models to transfer knowledge. Huang et al. [93] replace the networks with Mixture-of-Experts (MoEs) [201] and perform the EMA on the MoEs at the end of each iteration, which improves generalization on a variety of 2D and 3D vision tasks with ViT architectures. Arpit et al. [9] propose the simple moving average (SMA), which conducts a moving average in the later stages of training (after \(t_{0}\) rounds of iteration) to improve out-of-domain performance as Eq.(31): \[\hat{W}_{t}=\left\{\begin{array}{ll}W_{t}&t\leq t_{0}\\ \frac{t-t_{0}}{t-t_{0}+1}\hat{W}_{t-1}+\frac{1}{t-t_{0}+1}W_{t}&t>t_{0}\end{array}\right.. \tag{31}\] The Lookahead algorithm [259] linearly interpolates fast and slow weights along the optimization trajectory, as Eq.(32): \[w_{slow,t+1}=\alpha\left[w_{fast,t}+(1-\alpha)w_{fast,t-1}+\ldots+(1-\alpha)^{t-1}w_{fast,0}\right]+(1-\alpha)^{t}w_{slow,0}, \tag{32}\] where \(\alpha\) is the slow-weights step size. The trajectories of the fast weights \(w_{fast,t}\) are updated quickly by EMA in the direction of low curvature. The slow weights \(w_{slow,t}\) smooth the oscillations by interpolating the parameters. Lookahead reduces variance, speeds up convergence and brings the results closer to regions with high test accuracy. ### Model Soup Model soup [239] refers to the method of averaging models fine-tuned with different hyperparameters. It is simple but effective, achieving an accuracy of 90.94\(\%\) on ImageNet-1K, which surpasses the previous work on CoAtNet-7 (90.88\(\%\)) [38] and ViT-G (90.45\(\%\)) [255]. In Table 6, we summarize the different soups. Model soup reduces the inference time required by ensemble learning \(\frac{1}{n}\sum_{i=1}^{n}f\left(x,W_{i}\right)\) [195] and includes three variants. The uniform soup averages all the model weights directly, \(f\left(x,\frac{1}{n}\sum_{i=1}^{n}W_{i}\right)\). The greedy soup adds the models to the soup in sequence, keeping a model in the soup only if the validation accuracy does not decrease; it performs the best of the three soups, as Eq.(33): \[\text{ingredients}\leftarrow\text{ingredients}\cup\{W_{i}\}\text{ if }Acc(Avg(\text{ingredients}\cup\{W_{i}\}))\geq Acc(Avg(\text{ingredients})). \tag{33}\] Greedy soups [239] can be regarded as another form of SWA [99], which takes a subset of weights as the input samples of SWA. The learned soup removes the ordering rule of the greedy soup, learns the mixing coefficients \(\lambda_{mix}\) and temperature scaling parameter \(\lambda_{temp}\) for each component on the validation set, and optimizes the soup by gradient-based optimization as Eq.(34): \[\operatorname*{arg\,min}_{\lambda_{mix}\in\mathbb{R}^{k},\lambda_{temp}\in\mathbb{R}}\sum_{j=1}^{n}\ell\left(\lambda_{temp}\cdot f\left(x_{j},\sum_{i=1}^{n}\lambda_{mix,i}W_{i}\right),y_{j}\right). \tag{34}\] \begin{table} \begin{tabular}{l l l} \hline \hline Method & Ref.
& Introduction \\ \hline Uniform Soup & [239] & average the fine-tuned models directly \\ Greedy Soup & [239] & simple operation, good performance \\ Learned Soup & [239] & high memory cost (especially in large-scale model) \\ Sparse Soup & [269] & flexible and transparent alleviates scaling issue \\ Adversarially-robust soup & [34] & improve adversarial robustness to multiple threat models \\ Rewarded Soup & [189] & merge networks according to user preferences \\ DiWA & [190] & leverage the full potential of WA \\ Fed Soup & [26] & alleviate overfitting and seek flat minima \\ Adapter Soup & [32] & maintain performance on in-domain and new domains. \\ \hline \hline \end{tabular} \end{table} Table 6: Summary of different methods of Model Soup. The adversarially-robust model soup [34] moves the convex hull of parameters of each classifier to adjust the weights of soup, in order to balance the robustness to different threat models and adapt to potential attacks. Based on reinforcement learning from human feedback (RLHF), rewarded soup [189] fine-tunes the models according to the diverse rewards. It selects the proper interpolating coefficients \(\left\{\lambda_{i}^{j}\right\}_{i=1}^{N}\) form \(N\)-simplex that maximize the reward \(\hat{R}\) as Eq.(35): \[\operatorname{argmax}_{j=1}^{n}\hat{R}\left(\sum_{i=1}^{N}\lambda_{i}^{j}W_{i }\right). \tag{35}\] ### Model Arithmetic Different from traditional single-task learning, MTL is a kind of joint learning. The multiple tasks are learned in parallel so as to take advantage of data resources for different tasks [44, 261]. In general, MTL could be regarded as a parameter sharing, or ensemble [42], that can include major information of multiple individual tasks. In the process of MTL, participants fine-tune the latest model on the corresponding task in each iteration. The multiple fine-tuned models are merged to produce the final model or base model for the next iteration [29, 44]. The general fusion method adopted in MTL is linear combination. Patching with interpolation (PAINT) [95]combines fine-tuning and initial model so as to improve performance for specific task while also maintaining accuracy for other tasks. PAINT reduces the time of migration and adaptation between multi-tasks. HiPro [147] explore the shared information from a plenty of tasks via hierarchical structure, which adapts pre-trained vision-language models (VLMs) to multiple downstream tasks. In addition, there are some other approaches group similar tasks could together, which is conducive to obtain shared model parameters conveniently [48, 53, 158, 210]. Moreover, recent work set up metrics to measure the performance of the shared model, such as, uncertainty to weight tasks [117], loss weighting strategies [128], etc. Huang et al. [90] introduce Low-rank adaptations Hub (LoraHub), a framework that ensembles LoRA modules trained on different given tasks, which improves flexibility and scalability in MTL. In MTL, the pre-trained model and tasks vectors (i.e., \(\tau_{i}=W_{ft}-W_{pre}\), the difference between the pre-trained model and the fine-tuned model) are combined to result in better performance on all tasks. Based on this observation, task arithmetic [94] improves the performance of the model on tasks by adding and linear combination of fine-tuning task vectors, which has become a flexible and efficient method for editing pre-trained models directly as Figure 5. Ortiz et al. 
[174] fine-tune the pre-trained model in the tangent space and provide a more reliable way to edit the pre-trained model by NTK linearization [100], improving task arithmetic significantly by reducing the accuracy gap on individual tasks [205]. Similar to task arithmetic, Daheim et al. [37] propose elastic weight removal (EWR), which calculates difference vectors between the original model and expert models (fine-tuned on positive behaviours). EWR uses Fisher merging [159] to average the weights of the model and the task vectors as Eq.(36): \[W=\frac{\lambda_{0}\cdot\mathrm{f}_{W_{0}}\cdot W_{0}-\lambda_{1}\cdot\mathrm{f}_{\tau_{1}}\cdot\tau_{1}+\lambda_{2}\cdot\mathrm{f}_{\tau_{2}}\cdot\tau_{2}}{\lambda_{0}\cdot\mathrm{f}_{W_{0}}+\lambda_{1}\cdot\mathrm{f}_{\tau_{1}}+\lambda_{2}\cdot\mathrm{f}_{\tau_{2}}} \tag{36}\] It combines Fisher merging and task arithmetic to preserve positive behaviour in the model while removing negative behaviours. Jang et al. [102] add the sum of the task vectors of particular experts to pre-trained language models (LMs) so as to cover the information from multiple experts trained on diverse tasks. In sum, the essence of task arithmetic is to preserve pre-trained model behavior, thereby avoiding expensive joint fine-tuning on multiple tasks [95, 135, 240]. ### Average in Subspace Because the conventional full parameter space is high-dimensional, from tens of millions to hundreds of millions of dimensions, model fusion in subspace constrains the training trajectory to a low-dimensional subspace so as to reduce the load and difficulty [73, 132, 136]. In general, DNNs are over-parameterized. The Low-dimensional Trajectory Hypothesis [138] speculates that the intrinsic dimension required for network training is not as large as the number of parameters given. Training in a subspace reduces the number of trained parameters and the redundant information, which could accelerate convergence and improve robustness and generalization [136, 138]. Recently, Li et al. [137] demonstrate that each point in the subspace corresponds to a combination of bases, so that the linear combination of bases is equivalent to a weighted average [132]. Liu et al. [145] extract submodels by sparse training to fuse multiple local models in a low-dimensional subspace. Leontev et al. [131] propose Elastic Weight Consolidation (EWC) to average the models in multi-dimensional space as Eq.(37): \[W=\frac{H_{1}W_{1}+H_{2}W_{2}}{H_{1}+H_{2}}, \tag{37}\] where \(H_{i}=\mathbb{E}_{p(x|w)}\left[\left(\frac{\partial L}{\partial w_{i}}\right)^{2}\right]\) represents the Hessian matrix. EWC changes the weights of individual models in the direction of the minimum change in the loss function so as to prevent catastrophic forgetting [121]. But there are difficulties in the application of WA in subspace, such as the low efficiency of random bases [132] and the expensive computation cost [138]. Moreover, when working with high-dimensional or large models, the projection matrix for projecting the gradient into the subspace can be too large for a single GPU to bear. Wortsman et al. [238] provide a way to learn model subspaces in supervised learning. Gaya et al. [67] learn a convex subspace for online adaptation in reinforcement learning. In short, exploring the mechanism of vanilla averaging in subspaces, given the numerous examples of training DNNs in subspaces, remains a challenge for the future. ### Discussion WA obtains the final model by averaging the weights of different deep models without additional computational complexity or training processes [109, 159].
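As a concrete illustration of the model arithmetic described above, a task vector is the parameter-wise difference between a fine-tuned checkpoint and its pre-trained base, and editing adds a scaled combination of such vectors back to the base; the sketch below assumes matching `state_dict` keys and uses illustrative names.

```python
import torch

def task_vector(sd_pretrained, sd_finetuned):
    """tau = W_ft - W_pre, computed key by key."""
    return {k: sd_finetuned[k] - sd_pretrained[k] for k in sd_pretrained}

def apply_task_vectors(sd_pretrained, task_vectors, lambdas):
    """W = W_pre + sum_i lambda_i * tau_i."""
    merged = {k: v.clone() for k, v in sd_pretrained.items()}
    for lam, tau in zip(lambdas, task_vectors):
        for k in merged:
            merged[k] = merged[k] + lam * tau[k]
    return merged
```

A positive coefficient strengthens the corresponding task behaviour, while a negative coefficient can be used to suppress it (task negation), in line with the task arithmetic of [94].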
In general, if random models have significant differences in presentation capabilities, structure, or training data, the results of fusion may not achieve the expected performance. The linear interpolation of models from scratch using the same hyperparameter configuration but with different data orders is even less effective than stochastic models [59]. Therefore, a large number of approaches described in this section aim to optimize the WA process in other mathematical ways. Further, Figure 5: The flow chart of Task Arithmetic and LoRA Hub[90] in multi-task scenarios. when models share part of their optimized trajectories (e.g., checkpoint averaging, tail averaginhg, SWA [99, 149], etc.) or fine-tuned on the same pre-trained model (e.g., model soup [239], etc), the accuracy of interpolated models performs better [167]. Moreover, model soup [239] averages the models with different hyperparameter configurations to get the final result. In addition, selection of proper weights in model average can also be a challenge, which is often fraught with subjectivity. More complex weight selection mechanisms may need plenty of complex trials and cross-validation. WA is a promising technique in DL, which can be used as model optimization techniques in the future to reduce the weight fluctuation between different iterations, and improve the stability and convergence rate. WA can improve the aggregation stage of FL to protect privacy better and reduce communication costs in the future. Moreover, it is expected to reduce the storage space and computing overhead of the model on resource-constrained devices by implementing network compression on the terminal devices [250]. In short, WA is a promising and cost-effective DL technique, which can be applied in areas such as FL to improve performance and reduce storage overhead. ## 5 Ensemble Learning Ensemble learning, or multi-classifier system, is a technique that integrates multiple single models to generate final predictions, including voting, average [195], etc. It improves overall performance and reduces the variance of the models, addressing issues such as overfitting, instability, and limited data volume. In this section, we demonstrate "Ensemble learning" in DL and related techniques "Model reuse". ### Ensemble Learning Ensemble learning combines the outputs of networks, which surpasses than the result obtained from any model alone [225, 7, 198]. The general WA averages the model weights, that is, \(f\left(x,\frac{1}{n}\sum_{i=1}^{n}W_{i}\right)\), which ends up with only one model. In contrast, ensemble learning averages the output value after inference \(\frac{1}{n}\sum_{i=1}^{n}f\left(x,W_{i}\right)\), resulting in multiple models [239]. Ensemble learning has a long history of research. There are plenty of typical algorithms, such as Adaboost [62], Bagging [15], Stacking [236], etc. In order to make the network show better generalization ability, some previous work [80, 16] applies the ensemble learning (e.g., random forest, etc.) to DNNs, which can be used to adjust the output and take full advantages in feature selection, noise filtering. Kontschieder et al. [123] propose deep neural decision forests, which uses the random decision function in the optimization algorithm of CNN to reduce the complexity of parameters. Zhou et al. [267] introduce a decision-tree ensemble approach to demonstrates the possibility of building models without backpropagation, which needs fewer hyperparameters than a typical deep neural network. 
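The distinction drawn above between averaging weights, \(f\left(x,\frac{1}{n}\sum_{i=1}^{n}W_{i}\right)\), and averaging outputs, \(\frac{1}{n}\sum_{i=1}^{n}f\left(x,W_{i}\right)\), can be checked directly; for a non-linear model the two generally differ, as the toy sketch below illustrates (the tiny model and random data are purely illustrative).

```python
import torch

def f(x, W):
    return torch.tanh(x @ W)            # a tiny nonlinear "model" parameterized by W

torch.manual_seed(0)
x = torch.randn(4, 3)
models = [torch.randn(3, 2) for _ in range(5)]

weight_avg_out = f(x, torch.stack(models).mean(dim=0))            # fuse weights, then predict
output_ensemble = torch.stack([f(x, W) for W in models]).mean(0)   # predict, then fuse outputs

print((weight_avg_out - output_ensemble).abs().max())   # nonzero: the two fusions differ
```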
Moreover, Dropout [209] typically needs to ensemble the output of all subnets to reduce prediction errors.Nevertheless, if multiple models are too similar, the predictions of different networks will be too close to make sense of ensemble learning. To find enough diverse models, snapshot ensemble [91] uses long learning rates, combining the predictions of multiple neural networks saved at the end of each learning rate cycle to produce one final result. As an improvement on snapshot, FGE [66] uses a linear piece-wise cyclic learning rate and smaller steps to find models along the low-loss path [46], which inspires the relevant work of LMC. Similarly, Laine et al. [126] tend to ensemble the predictions over multiple previous training epochs. Arpit et al. [9] ensemble a set includes independent models and corresponding moving average models, which is referred to as ensemble of averages (EoA) as Eq.38: \[\hat{y}=\arg\max_{n}\mathrm{Softmax}\left(\sum f\left(x;\hat{W}_{i}\right) \right)_{n} \tag{38}\] WAK et al. [231] present a distributed robust optimization (DRO) framework to learn from a black box model, fusing multiple models using a distributed robust optimization approach. Hoang et al. [84] demonstrate the ensemble of black-box experts with no access to black-box architectures. Besides, there is a variety of work [126, 135] combines the ensemble learning with WA. The ensemble learning in DL achieves remarkable results and is widely used in facial recognition [233], speech recognition [40], and other practical fields. ### Model Reuse Based on existing pre-trained source models, model reuse [266] provides a required model applied to a new task without having to retrain the new model from scratch. It can save time and computing resources and provide better performance in the case of limited resources [249]. In addition, because the focus of transfer learning is to solve prediction tasks on the target domain, model reuse can be regarded as a kind of transfer learning. But transfer learning requires labeled data for both source and target, while in model reuse, only unlabeled data can be collected and data from source domain can not be used [153]. Different from multi-classifiers ensemble learning, most current approaches reuse the existing features, labels or modalities to obtain the final prediction [176, 266] without storing a large amount of training data [245]. Fixed model reuse (FMR) [249] could be regarded as features reuse essentially. Based on fixed models or features, FMR decreases the data required during training and provides privacy protection for fixed components. But it can only use one type of source feature. Jha et al. [105] present Bag of Experts (BoE) architecture to reuse annotated data from reusable slots rather than one source domain train the target model training. Pre-trained multi-model reuse (PM\({}^{2}\)R) forms the predictions from pre-trained models into matrices and obtains the final predictions based on the consistency among different modalities. But these type of methods ignore the potential information and only can be applied to limited scenarios. Another crucial challenge of model reuse is to identify useful models from a set of pre-trained models for a given learning task. Wu et al. [244] propose reduced kernel mean embedding (RKME) specification to obtain available pre-trained models in the deployment stage. Tang et al.[216] use optional calibration strategies and types of specifications, which combines the advantages of RKME and HMR. 
Using a single model for model reuse produces too much homogeneous information (e.g., a model trained in one domain may not fit data in another domain), and it is difficult to find a single pre-trained model that is perfectly suited to the target domain. In general, a set of similar models is used to produce better performance than a single model, which is referred to as Multiple Model-Reuse (MMR) [153]. Based on MMR, Xiang et al. [245] propose PM\({}^{2}\)R without training data or validation instances. Heterogeneous model reuse (HMR) [244] first reuses the local models for global predictions and then improves the local models via the multiparty multiclass margin (MPMC-margin). Instead of using the output features or labels, Lou et al. [151] improve the representation and use the hidden-layer representations of the source model to train the target deep model, which is superior to approaches that only use the limited data in the target domain. Nevertheless, some MMR methods strictly assume a linear relationship between the source and target models, which is difficult to guarantee in practice. Nonlinear multi-model reuse (NMMR) [153] improves performance significantly by introducing a manifold regularization scheme to take advantage of arbitrary nonlinear relationships between the source and target models. \begin{table} \begin{tabular}{l l l} \hline \hline Methods & Ref. & Introduction \\ \hline FMR & [249] & reuse the fixed models to reduce the cost during training \\ PM\({}^{2}\)R & [245] & utilize consistency on different modalities \\ MMR & [151] & reuse multiple source models \\ NMMR & [153] & take advantage of the nonlinear relationship \\ RKME & [244] & identify available pre-trained models by specifications in the deployment stage \\ HMR & [243] & combine and adjust the output of local models to generate a global model \\ HMR for ML & [216] & reuse of biased models trained on local datasets to construct a global model \\ RKHS & [244] & does not require calibration \\ ZhiJian & [260] & the merge module integrates the features, \\ & & weights, or predictions of the pre-trained models \\ \hline \hline \end{tabular} \end{table} Table 7: Summary of multiple model reuse methods based on model fusion. Specifically, we compare the characteristics of different reuse methods in Table 7. Briefly, model reuse can significantly reduce the amount of data required by using pre-trained models, alleviating the problem of consuming a lot of bandwidth when transferring data between different ends. Multi-model reuse also has a wide range of applications, such as speech recognition, security and privacy interaction systems, digital retina [64], etc. ### Discussion Compared with related model fusion algorithms such as federated learning [88, 89, 160], which have certain requirements on model parameters and sizes, ensemble methods use predictions to combine multiple heterogeneous weak classifiers without such limitations. In addition, for networks with different architectures, ensemble approaches have a more obvious advantage over weight averaging. Ensemble methods, however, require maintaining and running multiple trained models and running them together at test time. Given the larger scale and complexity of deep learning models, this approach is not suitable for applications with limited computational resources and costs [204]. Due to the diversity of ensemble learning frameworks, it is possible to achieve model diversity and enhance generalization.
In the future, this will be important for dealing with changes in data and adversarial attacks. Ensemble learning in DL is expected to provide confidence estimation and uncertainty measurement for model predictions, which is critical for safety and reliability in decision support systems, autonomous driving [74], medical diagnostics, etc. ## 6 Application In recent years, a plenty of new research has appeared in the field of deep model fusion, which has also promoted the development of this related application field. Based on the reviews of the development of model fusion and the current mainstream methods, this section summarizes some representative applications of the existing model fusion research "Federated Learning", "Fine-tuning", "Distillation" and "Model Fusion on Foundation Models/LLMs". In the future, more work will try to further improve the accuracy and ease of model fusion, and gradually apply the model fusion method to real-world problems. ### Federated Learning With the development of artificial intelligence, mobile devices, edge devices (e.g., IoT devices, sensors, etc.), and cloud computing platforms access to large amount of data. However, due to the restrictions of practical scenarios and network bandwidth, it is is fraught with risk to collect all data from edge devices [139, 208]. To address the challenges of security and centralization of data storage, FL [160, 170] allows many participants to collaborate to train shared global models while protecting data privacy, without the need to centralize datasets on a central server. It also could be regarded as a multi-party learning problem [177]. Particularly, aggregation is a significant procedure of FL, which incorporates model or parameter updates trained by various parties (such as devices, organizations, or individuals). In Figure 6, we demonstrate two different aggregation approaches in centralized and decentralized FL. Because of the efficient use of computing resources, low-cost nature (i.e., no need to transfer the entire datasets or maintain local parameters during training, etc.), Federated Averaging (FedAvg) [160] is the most influential FL algorithms. In the process of FedAvg, the local clients update the weights as Eq.39: \[\mathbf{w}_{i}^{(t+1)}=\mathbf{w}_{i}^{(t)}-\eta\nabla g_{i}\left(\mathbf{w}_ {i}^{(t)},\xi_{i}^{(t)}\right), \tag{39}\] where \(\nabla g_{i}\left(\mathbf{w}_{i}^{(t)},\xi^{(t)}\right)\) represents stochastic gradient on the mini-batch \(\xi_{i}^{(t)}\) at \(t_{th}\) round [114, 160]. The global model \(\mathbf{w}^{(t)}\) is updated as Eq.40: \[\mathbf{w}^{(t+1)}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{w}_{i}^{(t)}. \tag{40}\] Due to the heterogeneity of models (e.g., data distribution, bandwidth environment, network structure, permutation invariance [50], etc.), a simple aggregation of weights can adversely affect the performance of the final model and put the pressure on communication [161]. We list the common aggregation methods in Table 8. Probabilistic federated neural matching (PFNM) [254] uses the Bayesian nonparametric mechanism to adjust the global model size to accommodate the heterogeneity of data. But it can only be applied to simple architectures. FedMA [227] proposes to hierarchically match neurons of a network, which is quite difficult in practice (participant models need to have the same number of layers and structure). 
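A minimal sketch of one FedAvg round, following Eqs. (39)-(40); the client data, gradient function and learning rate below are placeholders rather than settings from [160].

```python
import numpy as np

def local_update(w, grad_fn, batches, lr):
    # Eq. (39): each client runs SGD starting from the current global weights.
    w = w.copy()
    for xb, yb in batches:
        w -= lr * grad_fn(w, xb, yb)
    return w

def fedavg_round(w_global, clients, grad_fn, lr=0.1):
    # Eq. (40): the server averages the returned client weights uniformly.
    local_weights = [local_update(w_global, grad_fn, c, lr) for c in clients]
    return np.mean(local_weights, axis=0)
```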
FedBABU [171] only aggregates the body in the aggregation phase instead of the whole network, where body is related the generality of the network and head represents personalization. It is more adaptable to adapt to the heterogeneous data of each client, and improves the representation and personalization ability of a single global model. Moreover, centralized gradient aggregation puts pressure on communication bandwidth and computing costs. In order to avoid the risk of failure of large-scale centralized fusion of local models, Hoang et al. [84] compare centralized and distributed gradient aggregation that occurs only in the local experts. Other recent work [88, 192] regards client updates as pseudo-gradients \(\Delta_{i}\), which is aggregated as Eq.(41), and the global model is updated as Eq.(42): \[\bar{\Delta}^{(t)}=\frac{1}{n}\sum_{i=1}^{n}\Delta_{i}^{(t)} \tag{41}\] \[\mathbf{w}^{(t+1)}=\mathbf{w}^{(t)}-\eta\bar{\Delta}^{(t)} \tag{42}\] Based on it, Jhunjhunwala et al. [106] propose FedExp, a dynamically varying pseudo-gradient self-adaptive method for caculating the server step size. FedExp accelerates convergence and reduces the overhead, which Figure 6: **Two aggregation modes of federated learning. Left:** Centralized federated learning transfer models or gradients between the central server and the terminals of clients, which are aggregated on the server finally. **Right:** Decentralized federated learning transfers and aggregates models between terminals of clients without a central server. uses the extrapolation to accelerate Projection Onto Convex Sets (POCS) as Eq.(43): \[\mathbf{w}_{\mathrm{POCS}}^{(t+1)}=\mathbf{w}_{\mathrm{POCS}}^{(t)}-\lambda \left(\frac{1}{n}\sum_{i=1}^{n}P_{i}\left(\mathbf{w}_{\mathrm{POCS}}^{(t)} \right)-\mathbf{w}_{\mathrm{POCS}}^{(t)}\right) \tag{43}\] Huang et al. [92] aggregate personalized sparse gradients and masks trained from local models to generate new global model as Eq.(44): \[\mathbf{w}^{(t+1)}=\mathbf{w}^{(t)}-\frac{1}{|S_{t}|}\sum\left(\tilde{\mathbf{ w}}_{0}^{(t)}-\tilde{\mathbf{w}}_{n}^{(t)}\right), \tag{44}\] where \(S_{t}\) denotes the clients. It reduces the communication overhead and solves the issues of sparse personalized FL. In addition, the application of personalized model to FL could adapt the preferences of local users and decrease the costs [43, 51, 127]. Since ensemble learning does not require averaging weights, it could be a good tool for aggregation and support heterogeneous client models. One-shot [76] utilizes ensemble learning to aggregate the local model, which achieves a relative gain of 51.5 \(\%\) over the baseline on the AUC. Similarly, there are plenty of researches that applies the ensemble learning to FL [82, 257]. Under certain conditions ( \(i_{m}<\sqrt{i_{s}}\) where \(i_{m}\) denotes machines, \(i_{s}\) is samples), the performance of the direct weight aggregation can be comparable to centralized algorithm that can access all samples in data distributed communication [262]. Nevertheless, it is not available to apply ensemble learning techniques directly in FL due to the heavy burden of keeping all the received models on the server. KD could solve these problems and regularize the size of global model and local learning using multi-teachers ensemble methods [268]. Recent work [70, 104, 141] present some novel FL framework based on ensemble distillation. 
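The pseudo-gradient view of Eqs. (41)-(42) can be sketched as a simple server-side step; with a server step size of 1 it reduces to plain weight averaging, while FedExp and FedSPA replace or reweight this step along the lines of Eqs. (43)-(44). The NumPy sketch below is only illustrative.

```python
import numpy as np

def server_update(w_global, client_weights, eta_server=1.0):
    # Eq. (41): pseudo-gradients are the differences between the global
    # weights and each client's locally updated weights.
    deltas = [w_global - w_i for w_i in client_weights]
    delta_bar = np.mean(deltas, axis=0)
    # Eq. (42): apply the averaged pseudo-gradient as a server-side step.
    # With eta_server = 1 this recovers plain averaging of client weights;
    # FedExp instead derives the step size from the pseudo-gradients (Eq. 43).
    return w_global - eta_server * delta_bar
```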
FedFTG [258] does not directly broadcast the aggregate model back to each client, but uses knowledge extracted from the local model to fine-tune this preliminary global model in the server, mitigating the performance degradation after the model is aggregated. FedDF breaks the communication barrier between heterogeneous client models [87]. FedCVAE-KD [82] uses a lightweight knowledge distillation process to aggregate the client decoders, which generates substantially samples than FedAvg. It address the statistical heterogeneity and pipeline security [265] (i.e., outside attacker who obtains transferred data cannot train a classifier) concerns. In short, the essence of the aggregation step in FL is a model fusion technique. Selecting a reasonable model fusion method can reduce the impact of specific participants or individual data on the final model, so \begin{table} \begin{tabular}{l l l l} \hline \hline Model fusion in FL & Methods & Ref. & Aggregation \\ \hline Aggregation & FedAvg & [160] & aggregate the parameters of the participants directly \\ & FedExp & [106] & determine the server step size based on pseudo-gradients \\ & FedMA & [227] & hierarchically match neurons of a network \\ & PFNM & [254] & match the neurons of the networks \\ & FedBABU & [171] & updates only the body of the models during training \\ & FedSPA & [92] & aggregate the sparse gradients and masks from local clients \\ \hline Ensemble & FedCVAE-ENS & [82] & leverage CVAE to address statistical heterogeneity \\ & one-shot & [76] & ensemble the predictions of clients in a single iteration \\ & DENSE & [257] & ensemble local models for the global model \\ \hline Distillation & FedDF & [141] & address the quality loss [87] of BN \\ & FedFTG & [258] & use a data-free KD method to fine-tune the global model \\ & FedCVAE-KD & [82] & compress the ensemble of client decoders into a decoder \\ & FedBE & [24] & use Bayesian methods and ensemble distillation \\ & FedAUX & [197] & weight the logits of local models by certainty score \\ \hline \hline \end{tabular} \end{table} Table 8: The different aggregation approaches in Federated Learning as to improve the generalization ability and adaptability of the model in the global scope. In future work, a good aggregation approach is expected to be helpful in facing a series of challenges in federated learning. In future work, a high-quality and scalable aggregation approache are expected to face a series of challenges in FL, such as client heterogeneity, non-i.i.d heterogeneous data, limited computing resources [141], etc. FL is expected to show its potential in many more areas, such as NLP, recommendation systems [146], medical image analysis [144], etc. ### Fine-tuning Fine-tuning a base mode, such as pre-trained model, is an efficient approach for adjusting models to perform downstream tasks [23, 41], which results in better generalization and more accurate output with less labeled data. Compared with random initialization, a pre-trained model is trained by a relatively set of task-specific data, which is always a better standard starting point for training. Nevertheless. the average of existing fine-tuned models [28, 29] is even a better base model than the vanilla pre-trained model for fine-tuning on the downstream tasks. Besides, there is a great deal of recent work combining WA with fine-tuning as shown in Figure 7, such as model soup [239], DiWA [190], etc. 
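In their simplest form, such weight-averaging recipes for fine-tuned models amount to averaging or interpolating state dictionaries, as in the hedged PyTorch-style sketch below; the checkpoints and the mixing coefficient are illustrative assumptions rather than the exact procedures of [239] or [240].

```python
import torch

def uniform_soup(state_dicts):
    # Uniform "model soup": average the parameters of several models
    # fine-tuned from the same pre-trained initialization.
    keys = state_dicts[0].keys()
    return {k: torch.stack([sd[k].float() for sd in state_dicts]).mean(0)
            for k in keys}

def wise_ft_interpolate(zero_shot_sd, fine_tuned_sd, alpha=0.5):
    # WiSE-FT-style interpolation between zero-shot and fine-tuned weights.
    return {k: (1 - alpha) * zero_shot_sd[k].float()
               + alpha * fine_tuned_sd[k].float()
            for k in zero_shot_sd.keys()}
```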
Fine-tuning improves the accuracy on target distribution, but often leads to a decrease in the robustness of distribution shift. WiSE-FT [240] combines the weights of the zero-shot and fine-tuned models to improve the distribution shift accuracy while retaining the high accuracy of the target distribution. Local fine-tuning (Lo-fi) [237] fine-tunes each node independently without any communication, and then averages the nodes. Lo-fi can also improve the Figure 7: **Different methods of applying Weight Average in fine-tuning scenarios.** (General Fine-tuning [173], WiSE[240], Inter-training[181]: selects the appropriate fine-tuning model on intermediate tasks as the base model. Fusing[29]:average models fine-tuned on intermediate(Source) tasks. Model Soup[239]: average models that are fine-tuned on the target task. Source task \(T_{s}\). Ratatouille[188] : recycle the multiple fine-tunings on diverse auxiliary tasks, then averages all the fine-tuned weights to get the final model. target task \(T\), Auxiliary task \(T_{aux}\). performance of distributed shifts. Collaborative Descent fusion (ColD) [44] replaces base models with fusion models that can be recycled, which can continually improve the pre-trained models on which they are based. ColD [44] is superior to RoBERTa [148] and even previous multitasking models. While these strategies for averaging the fine-tuned models may be simple, they do not take full advantage of the connections between each fine-tuned model. Therefore, training on an intermediate task before before training on a target task can explore the capabilities of the base models [180, 185, 224]. Inspired by inter-training strategies [185], Rame et al. [188] fine-tune the models on auxiliary tasks, which utilize diverse auxiliary tasks and improve the out-of-distribution (OOD) generalization. The average of fine-tuned models reduces the training time required to achieve the goal [28] and generates more accurate and better generalized models. Essentially, different ways of fine-tuning (e.g., fine-tuning with frozen layers, top-layer fine-tuning, etc.) also have a certain impact on final accuracy and distribution shift [240]. However, the combination of WA and fine-tuning is an expensive overhead, which has a certain limitation on specific application. Also, it may face a problem of explosion of preservation checkpoints, or catastrophic forgetting [121], especially applied to transfer learning. ### Distillation Knowledge distillation (KD) [83] is a significant method to ensemble multiple models, which involves the following two types of models. A teacher model denotes large and powerful model trained on large-scale Figure 8: **Two aggregation modes of distillation. Top: in contrast to standard KD, this framework incorporates multiple teacher models for distillation. Bottom: the framework incorporates multiple student models for distillation.** data and has high predictive and expressive power. A student models is a relatively smaller model with fewer parameters and computational resource [18, 199]. Using the knowledge of the teacher (e.g., the output probability distribution, hidden layer representation, etc.) to guide the training, the student could achieve the prediction ability closed to the large model with fewer resources and faster speed [2, 119, 124, 221]. Given that multiple teachers or students are expected to have a preferable performance than a single model [6], we divide KD into two categories according to the aggregated objects as Figure 8. 
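Before distinguishing the two aggregation modes, a minimal single-teacher distillation objective may be a useful reference point; the temperature and loss weighting below are common choices rather than values prescribed by [83].

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft-label term: KL divergence between the softened teacher and
    # student distributions (scaled by T^2, as is customary).
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    # Hard-label term: ordinary cross-entropy with the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```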
**The first type of approach is to merge multiple teacher models and distill the student model directly**, as shown in Table 9. Currently, recent work mainly integrates the output of teachers (e.g., logits [6, 49, 252] or feature-based knowledge [143, 241], etc.). Ensemble distillation (ED) [141, 157] distills the average output of multiple teachers to a student model, which can make up for the shortcomings of a single teacher model and provide more diversified and comprehensive information to the student model. FedDF [141] distills a collection of client-teacher models \(\left|S_{t}\right|\) into a server-student model. It averages the logit output \(f\left(\hat{\mathbf{x}}_{t}^{k}\right)\) of teachers as Eq.(45): \[\mathbf{x}_{t,j}:=\mathbf{x}_{t,j-1}-\eta\frac{\partial\mathrm{KL}\left( \sigma\left(\frac{1}{\left|\mathcal{S}_{t}\right|}\sum_{k\in\mathcal{S}_{t}}f \left(\hat{\mathbf{x}}_{t}^{k}\right)\right),\sigma\left(f\left(\mathbf{x}_{t,j-1}\right)\right)\right)}{\partial\mathbf{x}_{t,j-1}}, \tag{45}\] where \(t\) is the communication round, KL means KL divergence. The ensemble part of FedDF does not affect the overall workflows of clients and solves the loss problem of network batch normalization (BN) [87], Wu et al. [241] propose a multi-teacher adaptive distillation framework that can transfer knowledge from multiple teachers to student without the need for source domain data. Although merging multiple teachers makes up for the shortcomings of a single teacher, some of the teacher's information may be overlooked when conflict exist among teachers. **The other way is to use the teacher model to distill multiple students and then merge these student models.** Co-distillation (CD) [6] regards each client device as a student model, treats the average of the logits output of the other devices as teacher's output. However, the same training data sample should be used to synchronize the output of the teacher and the local student model. In order to solve the problem of CD, FD [104] uploads these local average logit vectors to the server periodically. Each average logits with the associated label as the current training sample will be used as the distillation regularizer for the next round of local device computation. FD improves performance and reduces communication rounds significantly. However, merging multi-students also has some problems, such as large demand of computing resources, poor interpretation and over-dependence on the original model. \begin{table} \begin{tabular}{p{56.9pt} p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline \begin{tabular}{c} Merge mode \\ of distillation \\ \end{tabular} & Methods & Ref. 
& Introduction \\ \hline \begin{tabular}{c} Merge multiple \\ teachers \\ \end{tabular} & FedDF & [141] & addresses the quality loss issue[87] of BN, break the knowledge barriers among heterogeneous client models distilling the distribution of the predictions from an ensemble instead of the average prediction regard the ensemble knowledge distillation as a multi-objective optimization problem a data-free knowledge distillation which relieves the issue of direct model aggregation \\ \hline \begin{tabular}{c} Merge multiple \\ students \\ \end{tabular} & Batch Ensemble & [235] & mini-batch friendly, parallelizable within a device, minor memory overhead improve distillation performance while capturing the uncertainty behavior of the original ensemble average a student with multiple subnetworks, giving a single student network with no additional inference cost \\ \hline \hline \end{tabular} \end{table} Table 9: Classification of KD according to the differences of the aggregated objects ### Model Fusion on Foundation Models/LLMs Foundation models show strong performance and emergent abilities when dealing with complex tasks, Large foundation models are characterized by their sheer size, containing billions of parameters that help them learn complex patterns in the data. Especially, with the emergence of new LLMs [200, 264] recently, such as GPT-3 [17, 172], T5 [187], BERT [41], Megatron-LM, the application of WA [154, 212, 256] to LLMs attracts more attention. You et al. [251] propose B-tuning using Bayesian learning to calculate posterior prediction distribution, which tunes top-K ranked pre-trained models by their transferabilities. Zoo-tuning [203] aggregates the weights of pre-trained model with aligned channels to obtain the final model adapt to downstream tasks, which improve the issue of high cost of migrating on large models. Besides, recent work [120, 256] tends to craft better framework and modules adapted to the application LLMs. Izacard et al. [97] present fusion-in-decoder (FiD), a novel framework to perform evidence fusion in the decoder only, which aims to efficiently aggregate multiple passages. Based on FiD, Ravaut et al. [191] introduce Summa Fusion to concatenate the representations of summary candidates, which further explores the effectiveness of fusion in the context of text summarization. However, their results improve little because they do not filter out poor quality candidates before using the algorithm. In contrast, Jiang et al. [107] propose an ensemble framework LLM-BLENDER, which focus on identifying subtle differences in the output of different candidates by PairRanker algorithm, and then ranking and summarizing the candidates to achieve better performance. Huang et al. [90] introduce Low-rank adaptations hub (LoRAHub), a framework to combine multiple LoRA modules trained on different tasks, which is designed to increase the adaptability of the LLMs and reduce training costs. due to the high performance and low computational resources, the application of fine-tuning to large foundation models improve obustness of distribution shifts [240]. Branch-Train-Merge (BTM) [135] reduce the large amount of multi-node synchronization required for parallel LLMs training. In addition, the negative task vector of task arithmetic [174] can reduce the number of toxic generations of LLMs. For example, it decreases the amount from 4.8 \(\%\) to 0.8 \(\%\) in GPT-2 [94]. 
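The task-arithmetic editing mentioned above reduces to simple parameter-space operations: a task vector is the difference between fine-tuned and pre-trained weights, and scaled task vectors are added to (or, with a negative coefficient, subtracted from) the base model. The sketch below is illustrative; the coefficient values are assumptions, not settings from [174].

```python
import torch

def task_vector(pretrained_sd, finetuned_sd):
    # Parameter-wise difference between a fine-tuned model and the
    # pre-trained model it was initialized from.
    return {k: finetuned_sd[k] - pretrained_sd[k] for k in pretrained_sd}

def apply_task_vectors(pretrained_sd, task_vectors, coeffs):
    # Task arithmetic: add scaled task vectors to the pre-trained weights.
    # A negative coefficient negates a task (e.g., reducing toxic generation).
    merged = {k: v.clone() for k, v in pretrained_sd.items()}
    for tv, lam in zip(task_vectors, coeffs):
        for k in merged:
            merged[k] = merged[k] + lam * tv[k]
    return merged
```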
## 7 Conclusion In this survey, we review deep model fusion techniques, which aim at improving model performance. We propose a new categorization that groups deep model fusion technologies into four perspectives: "mode connectivity", "alignment", "weight average" and "ensemble learning". The first three perspectives describe the fusion of model weights to obtain a superior final fused model. In "ensemble learning", we focus on fusing the outputs of deep models, for which a wealth of methods and a large number of ensemble frameworks are available. We summarize the common methods from the point of view of algorithm design and performance, and compare the differences, advantages and disadvantages of different approaches. Finally, we discuss the applications and engineering prospects of deep model fusion in FL, distillation, LLMs, etc. We not only summarize current deep model fusion technologies, but also point out their bottlenecks and possible breakthroughs. The survey is expected to help developers improve the performance of deep model fusion techniques and to indicate promising and valuable research directions. In the future, it is worth designing novel deep model fusion strategies based on innovative aggregation patterns, better initial conditions, diverse ensemble frameworks and other perspectives. The abundant information in the loss landscape and the potential relationships between the components of networks remain to be further exploited. In addition, better adaptive methods are expected to be applied to heterogeneous models and complex real-world scenarios, such as FL, large-scale models and transfer learning. Finally, practical effectiveness needs to be kept in focus to promote the development and application of deep model fusion technologies.
2309.14586
Speech Audio Synthesis from Tagged MRI and Non-Negative Matrix Factorization via Plastic Transformer
The tongue's intricate 3D structure, comprising localized functional units, plays a crucial role in the production of speech. When measured using tagged MRI, these functional units exhibit cohesive displacements and derived quantities that facilitate the complex process of speech production. Non-negative matrix factorization-based approaches have been shown to estimate the functional units through motion features, yielding a set of building blocks and a corresponding weighting map. Investigating the link between weighting maps and speech acoustics can offer significant insights into the intricate process of speech production. To this end, in this work, we utilize two-dimensional spectrograms as a proxy representation, and develop an end-to-end deep learning framework for translating weighting maps to their corresponding audio waveforms. Our proposed plastic light transformer (PLT) framework is based on directional product relative position bias and single-level spatial pyramid pooling, thus enabling flexible processing of weighting maps with variable size to fixed-size spectrograms, without input information loss or dimension expansion. Additionally, our PLT framework efficiently models the global correlation of wide matrix input. To improve the realism of our generated spectrograms with relatively limited training samples, we apply pair-wise utterance consistency with Maximum Mean Discrepancy constraint and adversarial training. Experimental results on a dataset of 29 subjects speaking two utterances demonstrated that our framework is able to synthesize speech audio waveforms from weighting maps, outperforming conventional convolution and transformer models.
Xiaofeng Liu, Fangxu Xing, Maureen Stone, Jiachen Zhuo, Sidney Fels, Jerry L. Prince, Georges El Fakhri, Jonghye Woo
2023-09-26T00:21:17Z
http://arxiv.org/abs/2309.14586v1
Speech Audio Synthesis from Tagged MRI and Non-Negative Matrix Factorization via Plastic Transformer ###### Abstract The tongue's intricate 3D structure, comprising localized functional units, plays a crucial role in the production of speech. When measured using tagged MRI, these functional units exhibit cohesive displacements and derived quantities that facilitate the complex process of speech production. Non-negative matrix factorization-based approaches have been shown to estimate the functional units through motion features, yielding a set of building blocks and a corresponding weighting map. Investigating the link between weighting maps and speech acoustics can offer significant insights into the intricate process of speech production. To this end, in this work, we utilize two-dimensional spectrograms as a proxy representation, and develop an end-to-end deep learning framework for translating weighting maps to their corresponding audio waveforms. Our proposed plastic light transformer (PLT) framework is based on directional product relative position bias and single-level spatial pyramid pooling, thus enabling flexible processing of weighting maps with variable size to fixed-size spectrograms, without input information loss or dimension expansion. Additionally, our PLT framework efficiently models the global correlation of wide matrix input. To improve the realism of our generated spectrograms with relatively limited training samples, we apply pair-wise utterance consistency with Maximum Mean Discrepancy constraint and adversarial training. Experimental results on a dataset of 29 subjects speaking two utterances demonstrated that our framework is able to synthesize speech audio waveforms from weighting maps, outperforming conventional convolution and transformer models. ## 1 Introduction Intelligible speech is produced by the intricate three-dimensional structure of the tongue, composed of localized functional units [26]. These functional units, when measured using tagged magnetic resonance imaging (MRI), exhibit cohesive displacements and derived quantities that serve as intermediate structures linking tongue muscle activity to tongue surface motion, which in turn facilitates the production of speech. A framework based on sparse non-negative matrix factorization (NMF) with manifold regularization can be used to estimate the functional units given input motion features, which yields a set of building blocks (or basis vectors) and a corresponding sparse weighting map (or encoding) [27]. The building blocks can form and dissolve with remarkable speed and agility, yielding highly coordinated patterns that vary depending on the specific speech task at hand. The corresponding weighting map can then be used to identify the cohesive regions and reveal the underlying functional units [25]. As such, by elucidating the relationship between the weighting map and intelligible speech, we can gain valuable insights for the development of speech motor control theories and the treatment of speech-related disorders. Despite recent advances in cross-modal speech processing, translating between varied-size of wide 2D weighting maps and high-frequency 1D audio waveforms remains a challenge. The first obstacle is the inherent heterogeneity of their respective data representations, compounded by the tendency of losing pitch information in audio [6, 1]. 
By contrast, transforming a 1D audio waveform into a 2D spectrogram provides a rich representation of the audio signal's energy distribution over the frequency domain, capturing both pitch and resonance information along the time axis [9, 12]. Second, the input sizes of the weighting maps vary between 20\(\times\)5,745 and 20\(\times\)11,938, while the output spectrogram has a fixed size for each audio section. Notably, fully connected layers used in [1] require fixed size input, while the possible fully convolution neural networks (CNN) can have varied output sizes and unstable performance [23]. Third, modeling global correlations for the long column dimension of the weighting map and the lack of spatial local neighboring relationships in the row dimension presents further difficulties for conventional CNNs that rely on deep hierarchy structure for expanding the reception field [21, 2]. Furthermore, the limited number of training pairs available hinders the large model learning process. To address the aforementioned challenges, in this work, we propose an end-to-end translator that generates 2D spectrograms from 2D weighting maps via a heterogeneous plastic light transformer (PLT) encoder and a 2D CNN decoder. The lightweight backbone of PLT can efficiently capture the global dependencies with a wide matrix input in every layer [14]. Our PLT module is designed with directional product relative position bias and single-level spatial pyramid pooling to enable flexible global modeling of weighting maps with variable sizes, producing fixed-size spectrograms without information loss or dimension expansion due to cropping, padding, or interpolation for size normalization. To deal with a limited number of training samples, we explore pair-wise utterance consistency as prior knowledge with Maximum Mean Discrepancy (MMD) [8] in a disentangled latent space as an additional optimization objective. Additionally, a generative adversarial network (GAN) [10] can be incorporated to enhance the realism of the generated spectrograms. The main contributions of this work are three-fold: \(\bullet\) To our knowledge, this is the first attempt at relating functional units with audio waveforms by means of intermediate representations, including weighting maps and spectrograms. \(\bullet\) We developed a plastic light-transformer to achieve efficient global modeling of position sensitive weighting maps with variable sizes and long dimensions. \(\bullet\) We further explored the pair-wise utterance consistency constraint with MMD minimization and adversarial training as additional supervision signals to deal with relatively limited training samples. Both quantitative and qualitative evaluation results demonstrate superior synthesis performance over comparison methods. Our framework has the potential to support clinicians and researchers in deepening their understanding of the interplay between tongue movements and speech waveforms, thereby improving treatment strategies for patients with speech-related disorders. ## 2 Methods ### Preprocessing During the training phase, we are given \(M\) pairs of synchronized tagged MRI sequences \(t_{i}\) and audio waveforms \(a_{i}\), i.e., \(\{t_{i},a_{i}\}_{i=1}^{M}\). First, we apply a non-linear transformation using librosa to convert \(a_{i}\) into mel-spectrograms, denoted as \(s_{i}\) with the function \(\mathcal{S}:a_{i}\to s_{i}\). 
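A minimal sketch of this waveform-to-mel-spectrogram conversion with librosa; the sampling rate, FFT size, hop length and number of mel bands are illustrative placeholders rather than the exact settings used in the paper.

```python
import librosa
import numpy as np

def audio_to_mel(waveform, sr=16000, n_fft=1024, hop_length=256, n_mels=64):
    # Mel-scaled spectrogram of the waveform (the map S: a_i -> s_i).
    mel = librosa.feature.melspectrogram(y=waveform, sr=sr, n_fft=n_fft,
                                         hop_length=hop_length, n_mels=n_mels)
    # Log compression keeps the dynamic range manageable for the network.
    return librosa.power_to_db(mel, ref=np.max)
```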
This transformation uses a Hz-scale to emphasize human voice frequencies ranging from 40 to 1000 Hz, while suppressing high-frequency instrument noise. Second, for each tagged MRI sequence \(t_{i}\), we use a phase-based diffeomorphic registration method [29] to track the internal motion of the tongue. This allows us to generate corresponding weighting maps denoted Figure 1: Illustration of our translation framework. Only the NMF and translator with heterogeneous PLT encoder and 2D CNN decoder are used for testing. as \(\mathbf{H}_{i}\), which are based on input motion features \(\mathbf{X}_{i}\), including the magnitude and angle of each track, by optimizing the following equation. \[\mathcal{E}=\frac{1}{2}\|\mathbf{X}_{i}-\mathbf{W}_{i}\mathbf{H}_{i}\|_{F}^{2}+ \frac{1}{2}\lambda\mathrm{Tr}(\mathbf{H}_{i}\mathbf{L}_{i}\mathbf{H}_{i}^{ \top})+\eta\left\|\mathbf{H}_{i}\right\|_{1/2}, \tag{1}\] where \(\lambda\) and \(\eta\) denote the weights associated with the manifold and sparse regularizations, respectively, and \(\mathrm{Tr}(\cdot)\) represents the trace of a matrix. The graph Laplacian is denoted by \(\mathbf{L}\). ### Encoding variable size \(\mathbf{H}_{i}\) with plastic light-transformer Directly modeling correlations among any two elements in a given weighting map \(\mathbf{H}_{i}\in\mathbb{R}^{X_{i}\times Y_{i}}\) can impose quadratic complexity of \(\mathcal{O}(X_{i}^{2}Y_{i}^{2})\). The recent efficient vision transformers (ViTs) [14; 20; 32; 20; 5] usually adopt a local patch design to compute local self-attention and correlate patches with CNNs. Specifically, the input is divided into \(N_{i}=\frac{X_{i}}{P_{x}}\times\frac{Y_{i}}{P_{y}}\) patches5, each of which is flattened to a token vector with a length of \(d=P_{x}\times P_{y}\)[7]. The local self-attention is then formulated with a complexity of \(\mathcal{O}(N_{i}d^{2}=X_{i}Y_{i}d)\) as follows: Footnote 5: The bottom-right boundary is padded with 0 to ensure \(X_{i}\%P_{x}=0\) and \(Y_{i}\%P_{y}=0\). \[\mathbf{H}_{i}^{\mathrm{local}}=\mathrm{Attn}(\mathbf{H}_{i}^{q},\mathbf{H}_{i }^{k},\mathbf{H}_{i}^{v})=\mathrm{SoftMax}(\frac{\mathbf{H}_{i}^{q}\mathbf{H} _{i}^{k}{}^{\top}}{\sqrt{d}})\mathbf{H}_{i}^{v},\in\mathbb{R}^{X_{i}\times Y_ {i}}, \tag{2}\] where vectors \(\mathbf{H}_{i}^{q}\), \(\mathbf{H}_{i}^{k}\), \(\mathbf{H}_{i}^{v}\in\mathbb{R}^{N_{i}\times d}\) are produced by the linear projections of query \((W_{q})\), key \((W_{k})\), and value \((W_{v})\) branches, respectively [7; 32; 5]. The global correlation of ViTs with CNN [32; 5; 31] or window shifting [20], however, may not be efficient for our wide matrix \(\mathbf{H}_{i}\), which lacks explicit row-wise neighboring features and may have a width that is too long for hierarchical convolution modeling. To address these challenges, we follow the lightweight ViT design [14], which uses a global embedding \(\mathbf{G}\in\mathbb{R}^{T\times d}\) with \(T\ll N_{i}\) randomly generated global tokens as the anchor for global information aggregation \(\mathbf{\hat{G}}_{i}\). The aggregation is performed with attention of \(\mathbf{G}^{q},\mathbf{H}_{i}^{k},\mathbf{H}_{i}^{v}\), which is then broadcasted with attention of \(\mathbf{H}_{i}^{q},\mathbf{\hat{G}}_{i}^{k},\mathbf{\hat{G}}_{i}^{v}\) to leverage global contextual information [14]. While LightViT backbones have been shown to achieve wide global modeling within each layer [14], they are not well-suited for our variable size input and fixed size output translation. 
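A hedged sketch of the patch-wise local self-attention of Eq. (2), written as a single-head scaled dot-product attention over flattened patch tokens; the projection layers and shapes are illustrative.

```python
import torch
import torch.nn as nn

class LocalPatchAttention(nn.Module):
    def __init__(self, d):
        super().__init__()
        # Linear projections for the query, key and value branches.
        self.q = nn.Linear(d, d)
        self.k = nn.Linear(d, d)
        self.v = nn.Linear(d, d)
        self.d = d

    def forward(self, tokens):
        # tokens: (batch, N_i, d) flattened patch tokens of H_i.
        q, k, v = self.q(tokens), self.k(tokens), self.v(tokens)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d ** 0.5, dim=-1)
        return attn @ v  # Eq. (2): SoftMax(Q K^T / sqrt(d)) V
```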
Although the self-attention scheme used in ViTs does not constrain the number of tokens, the absolute patch-position encoding in conventional ViTs [7] can only be applied to a fixed \(N_{i}\)[32], and the attention module will keep the same size of input and output. Notably, the number of tokens \(N_{i}\) will change depending on the size of \(X_{i}\times Y_{i}\). As such, in this work, we resort to the directional product relative position bias [28] to add \(\mathbf{R}_{i}\in\mathbb{R}^{N_{i}\times N_{i}}\), where element \(r_{a,b}=\mathbf{p}_{a,b}^{x},\delta_{a,b}^{y}\) is a trainable scalar, indicating the relative position weight between the patches \(a\) and \(b\)6. We set the offset of patch position in \(x\) and \(y\) directions \(\delta^{x}_{a,b}=x_{a}-x_{b}+P_{x},\delta^{y}_{a,b}=y_{a}-y_{b}+M_{y}\) as the index in \(\mathbf{p}\). Furthermore, the product relative position bias utilized in this work can distinguish between vertical or horizontal offsets, whereas the popular cross relative position bias [28] in computer vision tasks does not need to differentiate between time and spatial neighboring relationships in two dimensions. Therefore, for global attention, we can aggregate the information of local tokens by modeling their global dependencies with \[\mathbf{\hat{G}}_{i}=\mathrm{Attn}(\mathbf{G}^{q},\mathbf{H}^{k}_{i},\mathbf{H }^{v}_{i})=\mathrm{SoftMax}(\frac{\mathbf{G}^{q}\mathbf{H}^{k\top}_{i}+\mathbf{ R}_{i}}{\sqrt{d}})\mathbf{H}^{v}_{i},\in\mathbb{R}^{X_{i}\times Y_{i}}. \tag{3}\] Then, these global dependencies are broadcasted to every local token: \[\mathbf{H}^{\mathrm{global}}_{i}=\mathrm{Attn}(\mathbf{H}^{q}_{i},\mathbf{ \hat{G}}^{k}_{i},\mathbf{\hat{G}}^{v}_{i})=\mathrm{SoftMax}(\frac{\mathbf{H} ^{q}_{i}\mathbf{\hat{G}}^{k\top}_{i}+\mathbf{R}_{i}}{\sqrt{d}})\mathbf{\hat{G }}^{v}_{i},\in\mathbb{R}^{X_{i}\times Y_{i}}. \tag{4}\] By adding \(\mathbf{H}^{\mathrm{local}}_{i}\) and \(\mathbf{H}^{\mathrm{global}}_{i}\), each token can benefit from both local and global features, while maintaining linear complexity with respect to the input size. This brings noticeable improvements with negligible FLOPs increment. However, the sequentially proportional patch merging used in [14, 32, 5] still generates output sizes that vary with input sizes. Therefore, we utilize the single-level Spatial Pyramid Pooling (SSPP) [13] to extract a fixed-size feature for arbitrary input sizes. As illustrated in Fig. 1, the output of our channel-wise SSPP module with 20\(\times\)256 bins has the size of \(20\times 256\times d\), which can be a token merging scheme that adapts to the input size. Therefore, the final output of a layer is given by \[\mathbf{H}^{\prime}_{i}=\mathrm{SSPP}(\mathbf{H}^{\mathrm{local}}_{i}+\mathbf{ H}^{\mathrm{global}}_{i})\in\mathbb{R}^{X_{i}\times Y_{i}}. \tag{5}\] We cascade four PLT layers with SSPP as our encoder to extract the feature representation \(f_{i}\in\mathbb{R}^{8\times 8\times d}\). For the decoder, we adopt a simple 2D CNN with three deconvolutional layers to synthesize the spectrogram \(\tilde{s}_{i}\). ### Overall Training Protocol We utilize the intermediate pairs of \(\{\mathbf{H}_{i},s_{i}\}_{i=1}^{M}\) to train our translator \(\mathcal{T}\), which consists of a PLT encoder and a 2D CNN decoder. 
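A hedged sketch of how single-level spatial pyramid pooling can map a variable-size feature map to a fixed grid, followed by a simple three-layer deconvolutional decoder producing a 64x64 spectrogram with a sigmoid output; the channel counts, kernel sizes and the choice of average pooling are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sspp(feature_map, out_hw=(8, 8)):
    # Single-level spatial pyramid pooling: pool a (B, d, X, Y) feature map
    # of arbitrary spatial size into a fixed out_hw grid of bins.
    return F.adaptive_avg_pool2d(feature_map, out_hw)

class SpectrogramDecoder(nn.Module):
    # Three transposed convolutions upsample the 8x8 code to 64x64,
    # with a sigmoid on the output pixels.
    def __init__(self, d=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(d, 64, kernel_size=4, stride=2, padding=1),   # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, f):
        return self.net(f)
```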
The quality of the generated spectrograms \(\tilde{s}_{i}\) is evaluated using the mean square error (MSE) with respect to the ground truth spectrograms \(s_{i}\): \[\mathcal{L}_{\mathrm{MSE}}=||\tilde{s}_{i}-s_{i}||_{2}^{2}=||\mathcal{T}( \mathbf{H}_{i})-\mathcal{S}(a_{i})||_{2}^{2}. \tag{6}\] Additionally, we utilize the utterance consistency in the latent feature space as an additional optimization constraint. Specifically, we propose to disentangle \(f_{i}\) into two parts, i.e., utterance-related \(f_{i}^{u}\) and subject-related \(f_{i}^{s}\). In practice, we split the utterance/subject-related parts channel-wise using tensor slicing method. Following the idea of deep metric learning [19], we aim to minimize the discrepancy between the latent features \(f_{i}^{u}\) and \(f_{j}^{u}\) of two samples \(t_{i}\) and that belong to the same utterance. Therefore, we use MMD [8] as an efficient discrepancy loss \(\mathcal{L}_{\mathrm{MMD}}=\gamma\mathrm{MMD}(f_{i}^{u},f_{j}^{u})\), where \(\gamma=1\) or \(0\) for same or different utterance pairs, respectively. Of note, the \(f_{i}^{s}\) is implicitly encouraged to incorporate the subject-related style of the articulation other than \(f_{i}^{u}\) with a complementary constraint [18, 16] for reconstruction. Therefore, the decoder, which takes \(f_{i}^{s}\) conditioned on \(f_{i}^{u}\) can be considered as the utterance-conditioned spectrogram distribution modeling. This approach follows a divide-and-conquer strategy [3, 17] for each utterance and can be particularly efficient for relatively few utterance tasks. A GAN model can be further utilized to boost the realism of \(\tilde{s}_{i}\). A discriminator \(\mathcal{D}\) is employed to differentiate whether the mel-spectrogram is real \(s_{i}=\mathcal{S}(a_{i})\) or generated \(\tilde{s}_{i}=\mathcal{T}(\mathbf{H}_{i})\) with the following binary cross-entropy loss: \[\mathcal{L}_{\mathrm{GAN}}=\mathbb{E}_{s_{i}}\{\log(\mathcal{D}(s_{i}))\}+ \mathbb{E}_{\tilde{s}_{i}}\{\log(1-\mathcal{D}(\tilde{s}_{i}))\}. \tag{7}\] In adversarial training, the translator \(\mathcal{T}\) attempts to confuse \(\mathcal{D}\) by optimizing \(\mathcal{L}_{GAN}^{\mathcal{T}}=\mathbb{E}_{\tilde{s}_{i}}\{-\log(1-\mathcal{ D}(\tilde{s}_{i}))\}\). Of note, \(\mathcal{T}\) does not involve real spectrograms in \(\log(\mathcal{D}(s_{i}^{\prime}))\)[24]. Therefore, the overall optimization objectives of our translator \(\mathcal{T}\) and discriminator \(\mathcal{D}\) are expressed as: \[\begin{array}{c}\mathop{\mathcal{T}}\limits^{\mathrm{min}}\limits_{ \mathcal{T}}\mathcal{L}_{\mathrm{MSE}}+\beta\mathcal{L}_{\mathrm{MMD}}+\lambda \mathcal{L}_{\mathrm{GAN}}^{\mathcal{T}};\quad\mathop{\mathcal{D}}\limits^{ \mathrm{min}}\limits_{\mathcal{D}}\mathcal{L}_{\mathrm{GAN}},\end{array} \tag{8}\] where \(\beta\) and \(\lambda\) represent the weighting parameters. Notably, only \(\mathcal{T}\) is utilized in testing, and we do not need pairwise inputs for utterance consistency. Recovering audio waveform from mel-spectrogram can be achieved by the well-established Griffin-Lim algorithm [11] in the Librosa toolbox. ## 3 Experiments and Results For evaluation, we collected paired 3D tagged MRI sequences and audio waveforms from a total of 29 subjects, while performing the speech words "a souk" or "a geese," with a periodic metronome-like sound as guidance [15, 30]. The tagged-MRI sequences consisted of 26 frames, which were resized to 128\(\times\)128. 
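A hedged sketch of the translator objective in Eq. (8), combining the reconstruction, utterance-consistency and adversarial terms; the RBF kernel for the MMD estimate and the non-saturating form of the adversarial term are common choices assumed here, not specifications taken from the paper.

```python
import torch
import torch.nn.functional as F

def rbf_mmd(a, b, sigma=1.0):
    # A simple RBF-kernel estimate of MMD between two feature batches.
    def k(x, y):
        return torch.exp(-(torch.cdist(x, y) ** 2) / (2 * sigma ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

def translator_loss(s_pred, s_true, fu_i, fu_j, same_utterance, d_fake_prob,
                    beta=0.75, lam=1.0):
    # Eq. (6): reconstruction error of the generated mel-spectrogram.
    mse = F.mse_loss(s_pred, s_true)
    # Utterance-consistency term, active only for same-utterance pairs.
    mmd = rbf_mmd(fu_i, fu_j) if same_utterance else torch.zeros((), device=s_pred.device)
    # Adversarial term for the translator, written here in the standard
    # non-saturating form (the paper states its own variant).
    adv = -torch.log(d_fake_prob + 1e-8).mean()
    return mse + beta * mmd + lam * adv
```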
The resulting \(\mathbf{H}\) matrix varied in size from 20\(\times\)5,745 to 20\(\times\)11,938 (we set one dimension to a constant value of 20.) The audio waveforms had varying lengths between 21,832 to 24,175. To augment the dataset, we employed a sliding window technique on each audio, allowing us to crop sections with 21,000 time points, resulting in 100 audio waveforms. Then, we utilized the Librosa library to convert all audio waveforms into mel-spectrograms with a size of 64\(\times\)64. For our evaluation, we utilized a subject-independent leave-one-out approach. For the data augmentation of the \(\mathbf{H}\) matrix, we randomly drop the column to round \(Y_{i}\) to the nearest hundred, e.g., 9,882 to 9,800, generating 100 versions of \(\mathbf{H}\). We utilized the leave-one-out evaluation, following a subject-independent manner. In our implementation, we set \(P_{x}=1\) and \(P_{y}=20\), i.e., \(d=20\). Our encoder consisted of four PLT encoder layers with SSPP, to extract a feature \(f_{i}\) with the size of \(8\times 8\times 20\). Specifically, the first \(8\times 8\times 4\) component was set as the utterance-related factors, and the remaining 16 channels were for the subject-specific factors. Then, the three 2D de-convolutional layers were applied as our decoder to generate the \(64\times 64\) mel-spectrogram. The activation units in our model were rectified linear units (ReLU), and we normalized the final output of each pixel using the sigmoid function. The discriminator in our model consisted of three convolutional layers and two fully connected layers, and had a sigmoid output. A detailed description of the network structure is provided in the supplementary material, due to space limitations. Our model was implemented using PyTorch and trained 200 epochs for approximately 6 hours on a server equipped with an NVIDIA V100 GPU. Notably, the inference from a \(\mathbf{H}\) matrix to audio took less than 1 second, depending on the size of \(\mathbf{H}\). Also, the pairwise utterance consistency and GAN training were only applied during the training phase and did not affect inference. For our method and its ablation studies, we consistently set the learning rates of our heterogeneous translator and discriminator to \(lr^{\mathcal{T}}=10^{-3}\) and \(lr^{\mathcal{D}}=10^{-4}\), respectively, with a momentum of 0.5. The loss trade-off hyperparameters were set as \(\beta=0.75\), and we set \(\lambda=1\). It is important to note that without NMF, generating intelligible audio with a small number of subjects using video-based audio translation models, such as Lip2AudSpect [1], is not feasible. As an alternative, we pre-processed the input by cropping, padding with zeros, or using bi-cubic interpolation to obtain Figure 2: Comparisons of our PLT with CNN and LightViT using bi-cubic interpolation. We show \(\mathbf{H}_{i}^{\top}\) for compact layout. Audios are attached in supplementary. a fixed-size input **H**. We then compared the performance of our encoder module with conventional CNN or LightViT [14]. Figure 2 shows a qualitative comparison of our PLT framework with CNN and LightViT [14] using bi-cubic interpolation. We can observe that our generated spectrogram and the corresponding audio waveforms demonstrate superior alignment with the ground truth. It is worth noting that the CNN model or the CNN-based global modeling ViTs [5, 31] require deep models to achieve large receptive fields [21, 2]. 
Moreover, the interpolation process adds significant computational complexity for both CNN and LightViT, making it difficult to train on a limited dataset. In Fig. 3(a), we show that our proposed PLT framework achieves a stable performance gain along with the training and outperforms CNN with the crop, which lost the information of some functional units. Following [1], we used 2D Pearson's correlation coefficient (Corr2D) [4], and Perceptual Evaluation of Speech Quality (PESQ) [22] as our evaluation metrics to measure the synthesis quality of spectrograms in the frequency domain, and waveforms in the time domain, respectively. The numerical comparisons of different encoder structures with conventional CNN or LightViT with different crop or padding strategies and our PLT framework are provided in Table 1. The standard deviation was obtained from three independent random trials. Our framework outperformed CNN and lightViT consistently. In addition, the synthesis performance was improved by pair-wise disentangled utterance consistency MMD loss and GAN loss, as demonstrated in our ablation studies. Furthermore, it outperformed the in-directional cross relative position bias [28], since two dimensions in the weighting map indicate time and spatial relationship, respectively. Notably, even though CNN with SSPP can process varied size inputs, it suffers from limited long-term modeling capacity [21, 2] and unstable performance [23]. The sensitivity analysis of our loss weights are given in Fig. 3(b) and (c), where the performance was relatively stable for \(\beta\in[0.75,1.5]\) and \(\lambda\in[1,2]\). \begin{table} \begin{tabular}{l|c|c} \hline Encoder Models & Corr2D for spectrogram \(\uparrow\) & PESQ for waveform \(\uparrow\) \\ \hline \hline CNN (Crop) & 0.614\(\pm\)0.013 & 1.126\(\pm\)0.021 \\ CNN (Padding 0) & 0.684\(\pm\)0.010 & 1.437\(\pm\)0.018 \\ CNN (Bi-Cubic) & 0.689\(\pm\)0.012 & 1.451\(\pm\)0.020 \\ CNN+SSPP & 0.692\(\pm\)0.017 & 1.455\(\pm\)0.022 \\ \hline LightViT (Crop) & 0.635\(\pm\)0.015 & 1.208\(\pm\)0.022 \\ LightViT (Padding 0) & 0.708\(\pm\)0.011 & 1.475\(\pm\)0.015 \\ LightViT (Bi-Cubic) & 0.702\(\pm\)0.012 & 1.492\(\pm\)0.018 \\ \hline **Ours** & **0.742**\(\pm\)0.012 & **1.581**\(\pm\)0.020 \\ \hline Ours with cross embedding & 0.720\(\pm\)0.013 & 1.550\(\pm\)0.021 \\ Ours w/o Pair-wise Disentangle & 0.724\(\pm\)0.010 & 1.548\(\pm\)0.019 \\ Ours w/o GAN & 0.729\(\pm\)0.011 & 1.546\(\pm\)0.020 \\ \hline \end{tabular} \end{table} Table 1: Numerical comparisons during testing using leave-one-out evaluation ## 4 Conclusion This work aimed to explore the relationship between tongue movements and speech acoustics by translating weighting maps, which represent the functional units of the tongue, to their corresponding audio waveforms. To achieve this, we proposed a deep PLT framework that can handle variable-sized weighting maps and generated fixed-sized spectrograms, without information loss or dimension expansion. Our framework efficiently modeled global correlations in wide matrix input. To improve the realism of the generated spectrograms, we applied pairwise utterance consistency with MMD constraint and adversarial training. Our experimental results demonstrated the potential of our framework to synthesize audio waveforms from weighting maps, which can aid clinicians and researchers in better understanding the relationship between the two modalities. ## Acknowledgements This work is supported by NIH R01DC014717, R01DC018511, R01CA133015, and P41EB022544.
2309.08811
Subgroup and Coset Intersection in abelian-by-cyclic groups
We consider two decision problems in infinite groups. The first problem is Subgroup Intersection: given two finitely generated subgroups $\langle \mathcal{G} \rangle, \langle \mathcal{H} \rangle$ of a group $G$, decide whether the intersection $\langle \mathcal{G} \rangle \cap \langle \mathcal{H} \rangle$ is trivial. The second problem is Coset Intersection: given two finitely generated subgroups $\langle \mathcal{G} \rangle, \langle \mathcal{H} \rangle$ of a group $G$, as well as elements $g, h \in G$, decide whether the intersection of the two cosets $g \langle \mathcal{G} \rangle \cap h \langle \mathcal{H} \rangle$ is empty. We show that both problems are decidable in finitely generated abelian-by-cyclic groups. In particular, we reduce them to the Shifted Monomial Membership problem (whether an ideal of the Laurent polynomial ring over integers contains any element of the form $X^z - f,\; z \in \mathbb{Z} \setminus \{0\}$). We also point out some obstacles for generalizing these results from abelian-by-cyclic groups to arbitrary metabelian groups.
Ruiwen Dong
2023-09-15T23:30:58Z
http://arxiv.org/abs/2309.08811v2
# Subgroup and Coset Intersection in abelian-by-cyclic groups ###### Abstract We consider two decision problems in infinite groups. The first problem is Subgroup Intersection: given two finitely generated subgroups \(\langle\mathcal{G}\rangle,\langle\mathcal{H}\rangle\) of a group \(G\), decide whether the intersection \(\langle\mathcal{G}\rangle\cap\langle\mathcal{H}\rangle\) is trivial. The second problem is Coset Intersection: given two finitely generated subgroups \(\langle\mathcal{G}\rangle,\langle\mathcal{H}\rangle\) of a group \(G\), as well as elements \(g,h\in G\), decide whether the intersection of the two cosets \(g\langle\mathcal{G}\rangle\cap h\langle\mathcal{H}\rangle\) is empty. We show that both problems are decidable in finitely generated abelian-by-cyclic groups. In particular, we reduce them to the Shifted Monomial Membership problem (whether an ideal of the Laurent polynomial ring over integers contains any element of the form \(X^{z}-f,\ z\in\mathbb{Z}\setminus\{0\}\)). We also point out some obstacles for generalizing these results from abelian-by-cyclic groups to arbitrary metabelian groups. computational group theory, infinite groups, abelian-by-cyclic groups, metabelian groups, subgroup intersection, coset intersection
## 1 Introduction ### Metabelian and abelian-by-cyclic groups As most algorithmic problems for abelian groups are well-understood due to their relatively simple structure, much effort has focused on relaxations of the commutativity requirement. For example, the aforementioned decidability results have been successfully extended to the class of nilpotent groups [19]. Among the simplest and most well-studied extensions to abelian groups is the class of _metabelian groups_. A group \(G\) is called metabelian if it admits an abelian normal subgroup \(A\) such that the quotient group \(G/A\) is abelian. Developing a complete algorithmic theory for finitely generated metabelian groups has been the focus of intense research since the 1950s [5, 12]. Despite their simple definition, many problems in finitely generated metabelian groups are still far from being well understood. Unlike free groups, abelian groups and nilpotent groups, metabelian groups do not satisfy the Howson property [6, 13]. This makes solving intersection-type problems in metabelian groups much more difficult. Among the three problems introduced above, only the decidability of Subgroup Membership is known, thanks to a classic result of Romanovskii [25]. Subgroup Intersection has been solved only for _free_ metabelian groups [6] and the wreath products \(\mathbb{Z}^{m}\wr\mathbb{Z}^{n},m,n\geq 1\).
Unfortunately, this solution does not generalize to arbitrary metabelian groups, as explicitly stated after [6, Corollary C]. Despite Subgroup Intersection and Coset Intersection being currently out of reach for arbitrary metabelian groups, various results have been obtained for specific classes of metabelian groups. Recent results by Lohrey, Steinberg and Zetzsche [17] showed decidability of the _Rational Subset Membership_ problem (which subsumes Coset Intersection) in the wreath products \((\mathbb{Z}/p\mathbb{Z})\wr\mathbb{Z},\ p\geq 2\). This result has been extended to the _Baumslag-Solitar groups_ \(\mathsf{BS}(1,p),\ p\geq 2\), by Cadilhac, Chistikov and Zetzsche [9]. The groups \((\mathbb{Z}/p\mathbb{Z})\wr\mathbb{Z}\) and \(\mathsf{BS}(1,p)\) can be respectively represented as groups of \(2\times 2\) matrices over the Laurent polynomial ring \((\mathbb{Z}/p\mathbb{Z})\left[X,X^{-1}\right]\) and over the ring \(\mathbb{Z}[1/p]=\{\frac{a}{p^{n}}\mid a\in\mathbb{Z},n\in\mathbb{N}\}\): \[(\mathbb{Z}/p\mathbb{Z})\wr\mathbb{Z}\cong\left\{\begin{pmatrix}X^{b}&f\\ 0&1\end{pmatrix}\biggm{|}f\in(\mathbb{Z}/p\mathbb{Z})\left[X,X^{-1}\right],b\in\mathbb{Z}\right\}, \tag{1}\] \[\mathsf{BS}(1,p)\cong\left\{\begin{pmatrix}p^{b}&f\\ 0&1\end{pmatrix}\biggm{|}f\in\mathbb{Z}[1/p],b\in\mathbb{Z}\right\}. \tag{2}\] Alternatively, the element \(\begin{pmatrix}X^{b}&f\\ 0&1\end{pmatrix}\in(\mathbb{Z}/p\mathbb{Z})\wr\mathbb{Z}\) can be thought of as a Turing machine configuration whose tape cells contain letters in \(\{0,1,\ldots,p-1\}\) which correspond to the coefficients of the polynomial \(f\), while the head of the machine is positioned at the cell \(b\). Multiplication in \((\mathbb{Z}/p\mathbb{Z})\wr\mathbb{Z}\) corresponds to operating the machine by moving the head and adding integers to the cells modulo \(p\). (See [17] for a complete description.) Similarly, \(\mathsf{BS}(1,p)\) can be considered as a version of \((\mathbb{Z}/p\mathbb{Z})\wr\mathbb{Z}\) with "carrying". The element \(\begin{pmatrix}p^{b}&f\\ 0&1\end{pmatrix}\in\mathsf{BS}(1,p)\) can be seen as the base-\(p\) expansion of the rational number \(f\in\mathbb{Z}[1/p]\), along with a cursor at the \(b\)-th position. Multiplication in \(\mathsf{BS}(1,p)\) corresponds to aligning the cursors of the two elements and adding up the numbers \(f\). (See [9] for a complete description.) The Turing machine-like structure of \((\mathbb{Z}/p\mathbb{Z})\wr\mathbb{Z}\) and \(\mathsf{BS}(1,p)\) can be explained by the following fact. Both groups belong to the much broader class of groups called _abelian-by-cyclic_ groups. A group \(G\) is called abelian-by-cyclic if it admits an abelian normal subgroup \(A\) such that the quotient group \(G/A\) is isomorphic to \(\mathbb{Z}\). Intuitively, this isomorphism to \(\mathbb{Z}\) gives them the Turing machine-like structure described above, as \(\mathbb{Z}\) represents the indices of the tape. Abelian-by-cyclic groups have been extensively studied from the point of view of geometry and growth [11, 15], algorithmic problems [8], random walks [23], and group algebra isomorphism [3]. They also serve as a first step towards understanding general metabelian groups, whose definition is obtained by replacing \(\mathbb{Z}\) with an arbitrary abelian group. Figure 1 illustrates the relations between the classes of groups introduced above, as well as their known decidability results. In this paper, we show decidability of Subgroup Intersection and Coset Intersection in finitely generated abelian-by-cyclic groups.
Our approach is different from the automata-based methods [9, 17] used for \((\mathbb{Z}/p\mathbb{Z})\wr\mathbb{Z}\) and \(\mathsf{BS}(1,p)\). We reduce both Subgroup and Coset Intersection to the problem of finding an element of the form \(X^{z}-f,\ z\in\mathbb{Z}\setminus\{0\}\) in a given ideal of the Laurent polynomial ring \(\mathbb{Z}[X,X^{-1}]\). This problem has already been solved by Noskov [22]. However, Noskov's solution relies on a series of intricate arguments in commutative algebra. We propose a more direct solution using a combination of computational algebraic geometry and number theory. A natural follow-up to our work would be trying to generalize our results to arbitrary metabelian groups. This boils down to generalizing several arguments in this paper to _multivariate_ polynomial rings, which become significantly more difficult. ## 2 Preliminaries ### Laurent polynomial ring and modules A (univariate) _Laurent polynomial_ with coefficients over \(\mathbb{Z}\) is an expression of the form \[f=\sum_{i=p}^{q}a_{i}X^{i},\quad\text{where $p,q\in\mathbb{Z}$ and $a_{i}\in\mathbb{Z},i=p,p+1,\ldots,q$.}\] The set of all Laurent polynomials with coefficients over \(\mathbb{Z}\) forms a ring and is denoted by \(\mathbb{Z}[X^{\pm}]\). On the other hand, we denote by \(\mathbb{Z}[X]\) the _usual_ univariate polynomial ring over \(\mathbb{Z}\); it contains elements whose monomials have non-negative degree.
Figure 1: Inclusion relation of different classes of metabelian groups.
Let \(d\geq 1\) be a positive integer. One can similarly define the Laurent polynomial ring \[\mathbb{Z}[X^{\pm d}]\coloneqq\left\{\sum_{i=p}^{q}a_{di}X^{di}\in\mathbb{Z}[X^{\pm}]\left|\,p,q\in\mathbb{Z},a_{dp},\ldots,a_{dq}\in\mathbb{Z}\right.\right\}.\] Its elements are Laurent polynomials whose monomials have degrees divisible by \(d\). Let \(R\) be a commutative ring. An \(R\)-module is defined as an abelian group \((M,+)\) along with an operation \(\,\cdot\,:\,R\times M\to M\) satisfying \(f\cdot(a+b)=f\cdot a+f\cdot b\), \((f+g)\cdot a=f\cdot a+g\cdot a\), \(fg\cdot a=f\cdot(g\cdot a)\) and \(1\cdot a=a\). We will denote by \(\mathbf{0}\) the neutral element of an \(R\)-module \(M\). For example, for any \(d\in\mathbb{N}\), the group \(\mathbb{Z}[X^{\pm}]\) can be seen as a \(\mathbb{Z}[X^{\pm d}]\)-module by \(f\cdot g\coloneqq fg,\ f\in\mathbb{Z}[X^{\pm d}],g\in\mathbb{Z}[X^{\pm}]\). In general, in order to define a \(\mathbb{Z}[X^{\pm d}]\)-module structure on an abelian group \(M\), it suffices to define \(X^{d}\cdot m\) and \(X^{-d}\cdot m\) for all \(m\in M\). The value of \(f\cdot m,\ f\in\mathbb{Z}[X^{\pm d}],m\in M\), would then follow from the linearity of the operation \(\,\cdot\,\). An _ideal_ of \(R\) is a subset of \(R\) that is an \(R\)-module. If \(M\) is an \(R\)-module and \(m\in M\), then \(R\cdot m\coloneqq\{r\cdot m\mid r\in R\}\) is again an \(R\)-module. If \(N\) and \(N^{\prime}\) are \(R\)-submodules of \(M\), then \(N+N^{\prime}\coloneqq\{n+n^{\prime}\mid n\in N,n^{\prime}\in N^{\prime}\}\) is again an \(R\)-submodule of \(M\). ### Finite presentation of modules For any \(D\in\mathbb{N}\), \(\mathbb{Z}[X^{\pm}]^{D}\) is a \(\mathbb{Z}[X^{\pm}]\)-module by \(f\cdot(g_{1},\ldots,g_{D})\coloneqq(fg_{1},\ldots,fg_{D})\). Throughout this paper, we use the bold symbol \(\mathbf{f}\) to denote a vector \((f_{1},\ldots,f_{D})\in\mathbb{Z}[X^{\pm}]^{D}\).
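To make the objects above concrete, the following minimal Python sketch (ours, not from the paper) represents a Laurent polynomial in \(\mathbb{Z}[X^{\pm}]\) as a dictionary mapping exponents to integer coefficients; addition and multiplication are then a few lines each, and the \(\mathbb{Z}[X^{\pm d}]\)-module action is simply multiplication by an element supported on exponents divisible by \(d\).

```python
# Minimal sketch (ours): a Laurent polynomial in Z[X^±] as a dictionary
# {exponent: coefficient}; exponents may be negative.

def lp_add(f, g):
    h = dict(f)
    for e, c in g.items():
        h[e] = h.get(e, 0) + c
    return {e: c for e, c in h.items() if c != 0}

def lp_mul(f, g):
    h = {}
    for e1, c1 in f.items():
        for e2, c2 in g.items():
            h[e1 + e2] = h.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in h.items() if c != 0}

# (X + X^{-1}) · (X - X^{-1}) = X^2 - X^{-2}
assert lp_mul({1: 1, -1: 1}, {1: 1, -1: -1}) == {2: 1, -2: -1}

# The Z[X^{±2}]-module action on Z[X^±] is multiplication by an element
# supported on even exponents, e.g. (3X^2 + X^{-4}) · (X + X^{-1}):
print(lp_mul({2: 3, -4: 1}, {1: 1, -1: 1}))   # {3: 3, 1: 3, -3: 1, -5: 1}
```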
Given \(\mathbf{g}_{1},\ldots,\mathbf{g}_{m}\in\mathbb{Z}[X^{\pm}]^{D}\), we say they _generate_ the \(\mathbb{Z}[X^{\pm}]\)-module \(\sum_{i=1}^{m}\mathbb{Z}[X^{\pm}]\cdot\mathbf{g}_{i}\). A module is called _finitely generated_ if it can be generated by a finite number of elements. Given two finitely generated \(\mathbb{Z}[X^{\pm}]\)-submodules \(N,M\) of \(\mathbb{Z}[X^{\pm}]^{D}\) such that \(N\subseteq M\), we can define the quotient \(M/N\coloneqq\{\overline{\mathbf{m}}\mid\mathbf{m}\in M\}\) where \(\overline{\mathbf{m}_{1}}=\overline{\mathbf{m}_{2}}\) if and only if \(\mathbf{m}_{1}-\mathbf{m}_{2}\in N\). This quotient is also a \(\mathbb{Z}[X^{\pm}]\)-module. We say that a \(\mathbb{Z}[X^{\pm}]\)-module \(\mathcal{A}\) is _finitely presented_ if it can be written as a quotient \(M/N\) for two finitely generated submodules \(N\subseteq M\) of \(\mathbb{Z}[X^{\pm}]^{D}\) for some \(D\in\mathbb{N}\). Such a pair \((M,N)\), given by their respective generators, is called a _finite presentation_ of \(\mathcal{A}\). The element \(\overline{\mathbf{m}}\) of \(\mathcal{A}\) is effectively represented by \(\mathbf{m}\in\mathbb{Z}[X^{\pm}]^{D}\); this representation is unique modulo \(N\). Effective computation in finitely presented modules over polynomial rings is a well-studied area, with numerous algorithms developed to solve a wide range of computation problems. In particular, these algorithms have been applied to solve various other decision problems in metabelian groups; see the paper [4] by Baumslag, Cannonito and Miller for a comprehensive account on this subject. The following are some classic computational problems with effective algorithms that we will make use of. [[4, Lemma 2.1, 2.2]] Let \(\mathcal{A}\) be a \(\mathbb{Z}[X^{\pm}]\)-module with a given finite presentation. The following problems are effectively solvable: (i) (Submodule Membership) Given elements \(\mathbf{a}_{1},\ldots,\mathbf{a}_{k},\mathbf{a}\in\mathcal{A}\), decide whether \(\mathbf{a}\) is in the submodule generated by \(\mathbf{a}_{1},\ldots,\mathbf{a}_{k}\). (ii) (Computing Syzygies) Given elements \(\mathbf{a}_{1},\ldots,\mathbf{a}_{k}\in\mathcal{A}\), compute a finite set of generators for the Syzygy module \(S\subseteq\mathbb{Z}[X^{\pm}]^{k}\): \[S\coloneqq\left\{(f_{1},\ldots,f_{k})\in\mathbb{Z}[X^{\pm}]^{k}\,\left|\,\,f_{1}\cdot\mathbf{a}_{1}+\cdots+f_{k}\cdot\mathbf{a}_{k}=\mathbf{0}\right.\right\}.\] (iii) (Computing Intersection) Given the generators \(\mathbf{a}_{1},\ldots,\mathbf{a}_{k}\) of a submodule \(A\subseteq\mathcal{A}\) and the generators \(\mathbf{b}_{1},\ldots,\mathbf{b}_{m}\) of a submodule \(B\subseteq\mathcal{A}\), compute a finite set of generators for the submodule \(A\cap B\). _This effectiveness still holds if we replace the Laurent polynomial ring \(\mathbb{Z}[X^{\pm}]\) with the regular polynomial ring \(\mathbb{Z}[X]\)._ In particular, taking \(\mathcal{A}\coloneqq\mathbb{Z}[X^{\pm}]\), Lemma 1(i) becomes the well-known _Ideal Membership_ problem: given elements \(\mathbf{a}_{1},\ldots,\mathbf{a}_{k},\mathbf{a}\in\mathbb{Z}[X^{\pm}]\), decide whether \(\mathbf{a}\) is in the ideal generated by \(\mathbf{a}_{1},\ldots,\mathbf{a}_{k}\). Lemma 1(ii) states that one can compute the generators for the solution set of any homogeneous linear equation. Alternatively, Lemma 1(ii) can be understood as a procedure to compute the finite presentation \(\mathbb{Z}[X^{\pm}]^{k}/S\) of the module \(\sum_{i=1}^{k}\mathbb{Z}[X^{\pm}]\cdot\mathbf{a}_{i}\).
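As an illustration of Ideal Membership in the simplest possible setting (our own simplification, not the paper's algorithm): over the field \(\mathbb{Q}\), the univariate ring \(\mathbb{Q}[X]\) is a principal ideal domain, so membership reduces to a gcd-and-divisibility test. The paper's setting, modules over \(\mathbb{Z}[X^{\pm}]\) as in Lemma 1, requires the Gröbner-basis style algorithms of [4] instead, which are not reproduced here.

```python
# Illustration only (ours): in Q[X] every ideal is principal, so Ideal
# Membership reduces to a gcd-and-divisibility test.  This is NOT the
# algorithm behind Lemma 1, which works over Z[X^±]-modules.
from sympy import symbols, gcd, rem

X = symbols('X')

def in_ideal_QX(a, generators):
    """Decide a ∈ (a_1, ..., a_k) in Q[X]; the ideal equals (gcd(a_1, ..., a_k))."""
    g = generators[0]
    for ai in generators[1:]:
        g = gcd(g, ai)
    return rem(a, g, X) == 0

print(in_ideal_QX(X**2 - 1, [X - 1, X**2 - X]))   # True: the ideal is (X - 1)
print(in_ideal_QX(X + 2, [X - 1, X**2 - X]))      # False
```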
The following lemma shows we can effectively compute the intersection of a submodule of \(\mathbb{Z}[X^{\pm}]^{k}\) with \(\mathbb{Z}^{k}\). [[4, Corollary 2.5(2)]] Suppose we are given \(k\in\mathbb{N}\) and elements \(\mathbf{g}_{1},\ldots,\mathbf{g}_{n}\) of the \(\mathbb{Z}[X^{\pm}]\)-module \(\mathbb{Z}[X^{\pm}]^{k}\). Let \(\mathcal{M}\) denote the \(\mathbb{Z}[X^{\pm}]\)-module generated by \(\mathbf{g}_{1},\ldots,\mathbf{g}_{n}\), and define \(\Lambda\coloneqq\mathcal{M}\cap\mathbb{Z}^{k}\). Then \(\Lambda\subseteq\mathbb{Z}^{k}\) is a \(\mathbb{Z}\)-module, and a finite set of generators for \(\Lambda\) can be effectively computed. Recall that for any \(d\geq 1\), a \(\mathbb{Z}[X^{\pm}]\)-module is naturally a \(\mathbb{Z}[X^{\pm d}]\)-module. In particular, \(\mathbb{Z}[X^{\pm}]^{D}\) is isomorphic as a \(\mathbb{Z}[X^{\pm d}]\)-module to \(\mathbb{Z}[X^{\pm d}]^{Dd}\), and any finitely presented \(\mathbb{Z}[X^{\pm}]\)-module can be considered as a finitely presented \(\mathbb{Z}[X^{\pm d}]\)-module: Let \(d\geq 2\). Given a finite presentation of a \(\mathbb{Z}[X^{\pm}]\)-module \(\mathcal{A}\), one can compute a finite presentation of \(\mathcal{A}\) as a \(\mathbb{Z}[X^{\pm d}]\)-module. Furthermore, let \(\mathbf{a}\in\mathcal{A}\) be given in the finite presentation of \(\mathcal{A}\) as \(\mathbb{Z}[X^{\pm}]\)-module, then one can compute the representation of \(\mathbf{a}\) in \(\mathcal{A}\) considered as a \(\mathbb{Z}[X^{\pm d}]\)-module. ### Abelian-by-cyclic groups We now formally define abelian-by-cyclic groups, the main object of study in this paper. A group \(G\) is called _abelian-by-cyclic_ if it admits an abelian normal subgroup \(A\) such that \(G/A\cong\mathbb{Z}\). It is a classic result [8, p.17] that every finitely generated abelian-by-cyclic group \(G\) can be written as a _semidirect product_ \(\mathcal{A}\rtimes\mathbb{Z}\): \[\mathcal{A}\rtimes\mathbb{Z}\coloneqq\{(\mathbf{a},z)\mid\mathbf{a}\in\mathcal{A},z\in\mathbb{Z}\}\,, \tag{3}\] where \(\mathcal{A}\) is a finitely presented \(\mathbb{Z}[X^{\pm}]\)-module. The group law in \(\mathcal{A}\rtimes\mathbb{Z}\) is defined by \[(\mathbf{a},z)\cdot(\mathbf{a}^{\prime},z^{\prime})=(\mathbf{a}+X^{z}\cdot\mathbf{a}^{\prime},z+z^{\prime}),\quad(\mathbf{a},z)^{-1}=(-X^{-z}\cdot\mathbf{a},-z).\] The neutral element of \(\mathcal{A}\rtimes\mathbb{Z}\) is \((\mathbf{0},0)\). Intuitively, the element \((\mathbf{a},z)\) is analogous to a \(2\times 2\) matrix \(\begin{pmatrix}X^{z}&\mathbf{a}\\ 0&1\end{pmatrix}\), where group multiplication is represented by matrix multiplication. By direct computation, for all \(m\in\mathbb{Z}\), we have \[(\mathbf{a},z)^{m}=\begin{cases}\left(\frac{X^{mz}-1}{X^{z}-1}\cdot\mathbf{a},mz\right),&z\neq 0,\\ \left(m\cdot\mathbf{a},0\right),&z=0.\end{cases}\] We naturally identify \(\mathcal{A}\) with the subgroup \(\{(\mathbf{a},0)\mid\mathbf{a}\in\mathcal{A}\}\) of \(\mathcal{A}\rtimes\mathbb{Z}\). In particular, the quotient \(\left(\mathcal{A}\rtimes\mathbb{Z}\right)/\mathcal{A}\) is isomorphic to \(\mathbb{Z}\), so \(\mathcal{A}\rtimes\mathbb{Z}\) is indeed abelian-by-cyclic. If we take \(\mathcal{A}\coloneqq(\mathbb{Z}/p\mathbb{Z})[X,X^{-1}]=\mathbb{Z}[X^{\pm}]/\left(\mathbb{Z}[X^{\pm}]\cdot p\right)\), then we recover the definition (1) of the group \((\mathbb{Z}/p\mathbb{Z})\wr\mathbb{Z}\). If we take \(\mathcal{A}\coloneqq\mathbb{Z}[1/p]=\mathbb{Z}[X^{\pm}]/\left(\mathbb{Z}[X^{\pm}]\cdot(X-p)\right)\), then we recover the definition (2) of the group \(\mathsf{BS}(1,p)\).
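The group law above is easy to experiment with. The sketch below (ours, for illustration only) encodes elements of \(\mathbb{Z}[X^{\pm}]\rtimes\mathbb{Z}\) as pairs (dictionary, integer), using the same dictionary representation of Laurent polynomials as in the earlier sketch, and checks the power formula \((\mathbf{a},z)^{m}=\left(\frac{X^{mz}-1}{X^{z}-1}\cdot\mathbf{a},mz\right)\) against repeated multiplication.

```python
# Illustrative sketch (ours, not from the paper): elements of Z[X^±] ⋊ Z
# as pairs (f, z), with f a {exponent: coefficient} dictionary.

def clean(f):
    return {e: c for e, c in f.items() if c != 0}

def add(f, g):
    h = dict(f)
    for e, c in g.items():
        h[e] = h.get(e, 0) + c
    return clean(h)

def pmul(f, g):
    h = {}
    for e1, c1 in f.items():
        for e2, c2 in g.items():
            h[e1 + e2] = h.get(e1 + e2, 0) + c1 * c2
    return clean(h)

def shift(f, k):                       # multiplication by X^k
    return {e + k: c for e, c in f.items()}

def mul(u, v):                         # (a,z)·(a',z') = (a + X^z·a', z+z')
    return (add(u[0], shift(v[0], u[1])), u[1] + v[1])

def inv(u):                            # (a,z)^{-1} = (-X^{-z}·a, -z)
    return (clean({e - u[1]: -c for e, c in u[0].items()}), -u[1])

def power(u, m):                       # m-th power, m >= 0, by repeated mul
    out = ({}, 0)
    for _ in range(m):
        out = mul(out, u)
    return out

g = ({0: 1, 1: 1}, -6)                 # the element (1 + X, -6)
assert mul(g, inv(g)) == ({}, 0)       # g · g^{-1} is the neutral element

# Power formula: (a,z)^m = ((X^{mz}-1)/(X^z-1) · a, mz); the quotient equals
# the geometric sum 1 + X^z + ... + X^{(m-1)z} for m >= 1.
m = 3
geometric = {k * g[1]: 1 for k in range(m)}
assert power(g, m) == (pmul(geometric, g[0]), m * g[1])
```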
Throughout this paper, a finitely generated abelian-by-cyclic group \(G\) is always represented as the semidirect product \(\mathcal{A}\rtimes\mathbb{Z}\), where \(\mathcal{A}\) is a \(\mathbb{Z}[X^{\pm}]\)-module given by a finite presentation. ## 3 Main results and overview The main result of this paper is the following. Subgroup Intersection and Coset Intersection are decidable for finitely generated abelian-by-cyclic groups. Our proof of Theorem 3.1 is divided into two parts. The first part is to reduce both Subgroup Intersection and Coset Intersection to the _Shifted Monomial Membership_ problem: Shifted Monomial Membership is the following decision problem. Given as input a finite set of generators of an ideal \(\mathcal{I}\subseteq\mathbb{Z}[X^{\pm}]\), as well as a Laurent polynomial \(f\in\mathbb{Z}[X^{\pm}]\), decide if there exists \(z\in\mathbb{Z}\setminus\{0\}\) such that \(X^{z}-f\in\mathcal{I}\). The reduction from Subgroup Intersection and Coset Intersection to Shifted Monomial Membership combines various classic techniques from effective computation of finitely presented \(\mathbb{Z}[X^{\pm}]\)-modules, and will be shown in Section 4. The main difficulty in this part is the interaction of modules over different base rings. Our key idea is to adaptively change the base rings when combining different modules. The second part is to prove decidability of Shifted Monomial Membership: this will be shown in Section 5. As written in the introduction, we provide a more direct proof than that of Noskov [22]. We will use structural theorems to classify ideals of \(\mathbb{Z}[X]\) and consider each case separately. In some cases, we employ arguments of _height of algebraic numbers_ to produce a bound on \(|z|\) whenever \(X^{z}-f\in\mathcal{I}\). In other cases, we will show certain periodicity stemming from the finiteness of quotients or from roots of unity. As a result, in all cases it suffices to verify whether \(X^{z}-f\in\mathcal{I}\) for a finite number of \(z\). Omitted proofs can be found in Appendix A. A summary of the algorithms for deciding Subgroup Intersection, Coset Intersection, and Shifted Monomial Membership is given in Appendix B. ## 4 Reduction to Shifted Monomial Membership Let \(\mathcal{A}\) be a \(\mathbb{Z}[X^{\pm}]\)-module given by a finite presentation. Recall that we naturally identify \(\mathcal{A}\) with the subgroup \(\{(\mathbf{a},0)\mid\mathbf{a}\in\mathcal{A}\}\) of \(\mathcal{A}\rtimes\mathbb{Z}\); that is, we will sometimes write \(\mathbf{a}\) instead of \((\mathbf{a},0)\) when the context is clear. We start with a lemma that effectively describes finitely generated subgroups of \(\mathcal{A}\rtimes\mathbb{Z}\). Such a description follows from the general description of subgroups of finitely generated metabelian groups [25, proof of Theorem 1]. Here we give a systematic reformulation in the context of abelian-by-cyclic groups. [Structural theorem of abelian-by-cyclic groups, see also [25]] Let \(\langle\mathcal{G}\rangle\) be a subgroup of \(\mathcal{A}\rtimes\mathbb{Z}\) generated by the elements \(g_{1}\coloneqq(\mathbf{a}_{1},z_{1}),\ldots,g_{K}\coloneqq(\mathbf{a}_{K},z_{K})\). Then (i) If \(z_{1}=\cdots=z_{K}=0\), then \(\langle\mathcal{G}\rangle\) is contained in \(\mathcal{A}\) and it is the \(\mathbb{Z}\)-module generated by \(\mathbf{a}_{1},\ldots,\mathbf{a}_{K}\). (ii) If \(z_{1},\ldots,z_{K}\) are not all zero, then \(\langle\mathcal{G}\rangle\not\subset\mathcal{A}\).
Let \(d\in\mathbb{N}\) denote the greatest common divisor of \(z_{1},\ldots,z_{K}\). Consider the lattice \[\Lambda\coloneqq\left\{(s_{1},\ldots,s_{K})\in\mathbb{Z}^{K}\mid s_{1}z_{1}+\cdots+s_{K}z_{K}=0\right\}.\] Let \((s_{11},\ldots,s_{1K}),\ldots,(s_{T1},\ldots,s_{TK})\) be a finite set of generators for \(\Lambda\). Then \(\langle\mathcal{G}\rangle\cap\mathcal{A}\) is a \(\mathbb{Z}[X^{\pm d}]\)-submodule of \(\mathcal{A}\), generated by the set of elements \[S\coloneqq\left\{g_{i}g_{j}g_{i}^{-1}g_{j}^{-1}\mid 1\leq i<j\leq K\right\}\cup\left\{g_{1}^{s_{i1}}\cdots g_{K}^{s_{iK}}\mid i\in[1,T]\right\}.\] (4) (iii) In case (ii), let \(\mathbf{a}\in\mathcal{A}\) be any element such that \((\mathbf{a},d)\in\langle\mathcal{G}\rangle\). Then \(\langle\mathcal{G}\rangle\) is generated by \(\langle\mathcal{G}\rangle\cap\mathcal{A}\) and \((\mathbf{a},d)\) as a group. In other words, every element of \(\langle\mathcal{G}\rangle\) can be written as \((\mathbf{b},0)\cdot(\mathbf{a},d)^{m}\) for some \(\mathbf{b}\in\langle\mathcal{G}\rangle\cap\mathcal{A}\) and \(m\in\mathbb{Z}\). We point out that in case (ii), the subgroup \(\langle\mathcal{G}\rangle\cap\mathcal{A}\) is finitely generated as a \(\mathbb{Z}[X^{\pm d}]\)-module; but it is not necessarily finitely generated as a group. Let \(\mathcal{A}=\mathbb{Z}[X^{\pm}]\), considered as a \(\mathbb{Z}[X^{\pm}]\)-module. Let \(\langle\mathcal{G}\rangle\) be the subgroup of \(\mathcal{A}\rtimes\mathbb{Z}\) generated by the elements \(g_{1}=(X,4),g_{2}=(1+X,-6)\). Then \(d=2\), and \(\langle\mathcal{G}\rangle\cap\mathcal{A}\) is the \(\mathbb{Z}[X^{\pm 2}]\)-module generated by the elements \[g_{1}g_{2}g_{1}^{-1}g_{2}^{-1}=(X^{5}+X^{4}-1-X^{-5},0),\quad g_{1}^{3}g_{2}^{2}=(X^{13}+X^{12}+X^{9}+X^{7}+X^{6}+X^{5}+X,0).\] For example, consider the element \(g_{1}^{2}g_{2}g_{1}g_{2}\in\langle\mathcal{G}\rangle\). By direct computation, its second entry is zero, therefore \(g_{1}^{2}g_{2}g_{1}g_{2}\in\langle\mathcal{G}\rangle\cap\mathcal{A}\). Furthermore, \(g_{1}^{2}g_{2}g_{1}g_{2}\) can be written as \[g_{1}^{2}g_{2}g_{1}g_{2}=g_{1}^{2}(g_{2}g_{1})g_{2}=g_{1}^{2}(g_{2}g_{1}g_{2}^{-1}g_{1}^{-1})(g_{1}g_{2})g_{2}=g_{1}^{2}(g_{1}g_{2}g_{1}^{-1}g_{2}^{-1})^{-1}g_{1}g_{2}^{2}\] \[=g_{1}^{2}(g_{1}g_{2}^{2})(g_{1}g_{2}^{2})^{-1}(g_{1}g_{2}g_{1}^{-1}g_{2}^{-1})^{-1}(g_{1}g_{2}^{2})=g_{1}^{3}g_{2}^{2}\cdot(g_{1}g_{2}^{2})^{-1}(g_{1}g_{2}g_{1}^{-1}g_{2}^{-1})^{-1}(g_{1}g_{2}^{2})\] \[=(X^{13}+X^{12}+X^{9}+X^{7}+X^{6}+X^{5}+X,0)\cdot(g_{1}g_{2}^{2})^{-1}(X^{5}+X^{4}-1-X^{-5},0)^{-1}(g_{1}g_{2}^{2})\] \[\quad=(X^{13}+X^{12}+X^{9}+X^{7}+X^{6}+X^{5}+X)+X^{8}\cdot(-1)\cdot(X^{5}+X^{4}-1-X^{-5}).\] It is therefore indeed in the \(\mathbb{Z}[X^{\pm 2}]\)-module generated by \(g_{1}g_{2}g_{1}^{-1}g_{2}^{-1}=X^{5}+X^{4}-1-X^{-5}\) and \(g_{1}^{3}g_{2}^{2}=X^{13}+X^{12}+X^{9}+X^{7}+X^{6}+X^{5}+X\). Intuitively, modulo the generator \(g_{1}g_{2}g_{1}^{-1}g_{2}^{-1}\), one can permute letters in any word over \(\mathcal{G}\) (in the above example, \(g_{1}^{2}g_{2}g_{1}g_{2}\) is congruent to \(g_{1}^{3}g_{2}^{2}\)), whereas the generator \(g_{1}^{3}g_{2}^{2}\) guarantees that the second entry of the product is zero. Let \(\langle\mathcal{G}\rangle,\langle\mathcal{H}\rangle\) be finitely generated subgroups of \(\mathcal{A}\rtimes\mathbb{Z}\) given by their respective generators, and let \(h\) be an element of \(\mathcal{A}\rtimes\mathbb{Z}\).
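As a quick sanity check of the example above (our own verification script, not part of the paper), the following self-contained sketch recomputes \(g_{1}g_{2}g_{1}^{-1}g_{2}^{-1}\), \(g_{1}^{3}g_{2}^{2}\) and \(g_{1}^{2}g_{2}g_{1}g_{2}\) directly from the group law and confirms the displayed identity, with coefficient \(X^{8}\cdot(-1)\) (and \(X^{8}\in\mathbb{Z}[X^{\pm 2}]\)) on the commutator.

```python
# Self-contained sanity check of the worked example (our illustration).
# Same (dictionary, integer) encoding of Z[X^±] ⋊ Z as in the sketch above.

def clean(f): return {e: c for e, c in f.items() if c != 0}

def add(f, g):
    h = dict(f)
    for e, c in g.items():
        h[e] = h.get(e, 0) + c
    return clean(h)

def shift(f, k): return {e + k: c for e, c in f.items()}

def mul(u, v): return (add(u[0], shift(v[0], u[1])), u[1] + v[1])

def inv(u): return (clean({e - u[1]: -c for e, c in u[0].items()}), -u[1])

def prod(*elts):
    out = ({}, 0)
    for e in elts:
        out = mul(out, e)
    return out

g1, g2 = ({1: 1}, 4), ({0: 1, 1: 1}, -6)

comm = prod(g1, g2, inv(g1), inv(g2))       # g1 g2 g1^{-1} g2^{-1}
cube = prod(g1, g1, g1, g2, g2)             # g1^3 g2^2
word = prod(g1, g1, g2, g1, g2)             # g1^2 g2 g1 g2

assert comm[1] == 0 and cube[1] == 0 and word[1] == 0
# word = cube + X^8 · (-1) · comm  inside A, and X^8 lies in Z[X^{±2}].
assert word[0] == add(cube[0], {e + 8: -c for e, c in comm[0].items()})
print("example verified")
```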
We now consider Subgroup and Coset Intersection for \(\langle\mathcal{G}\rangle\) and \(\langle\mathcal{H}\rangle\). We split into three cases according to whether \(\langle\mathcal{G}\rangle\) and \(\langle\mathcal{H}\rangle\) are contained in the subgroup \(\mathcal{A}\). If at least one of \(\langle\mathcal{G}\rangle\) and \(\langle\mathcal{H}\rangle\) is contained in \(\mathcal{A}\) (Case 1 and 2 below), then the solutions to Subgroup and Coset Intersection are relatively straightforward using the standard tools introduced in Lemma 1 and 2. They do not need to be reduced to Shifted Monomial Membership. If neither \(\langle\mathcal{G}\rangle\) nor \(\langle\mathcal{H}\rangle\) is contained in the subgroup \(\mathcal{A}\) (Case 3 below), then the solution is more complicated and we reduce both Subgroup and Coset Intersection to Shifted Monomial Membership. Case 1: \(\langle\mathcal{G}\rangle\) and \(\langle\mathcal{H}\rangle\) are both contained in \(\mathcal{A}\) Suppose \(\langle\mathcal{G}\rangle\) is generated by the elements \(g_{1}=(\mathbf{a}_{1},0),\ldots,g_{K}=(\mathbf{a}_{K},0)\), and \(\langle\mathcal{H}\rangle\) is generated by the elements \(h_{1}=(\mathbf{a}^{\prime}_{1},0),\ldots,h_{M}=(\mathbf{a}^{\prime}_{M},0)\). #### Subgroup Intersection In this case, we have \[\langle\mathcal{G}\rangle=\{y_{1}\!\cdot\!\mathbf{a}_{1}\!+\!\cdots\!+\!y_{K}\! \cdot\!\mathbf{a}_{K}\mid y_{1},\ldots,y_{K}\in\mathbb{Z}\},\quad\langle\mathcal{H} \rangle=\{z_{1}\!\cdot\!\mathbf{a}^{\prime}_{1}\!+\!\cdots\!+\!z_{M}\!\cdot\!\mathbf{a}^{ \prime}_{M}\mid z_{1},\ldots,z_{M}\in\mathbb{Z}\}.\] Then \(\langle\mathcal{G}\rangle\cap\langle\mathcal{H}\rangle=\{e\}\) if and only if every element of \(\langle\mathcal{G}\rangle\cap\langle\mathcal{H}\rangle\) is equal to the neutral element. This means that every solution of \(y_{1}\cdot\mathbf{a}_{1}+\cdots+y_{K}\cdot\mathbf{a}_{K}=z_{1}\cdot\mathbf{a}^{\prime}_{1}+ \cdots+z_{M}\cdot\mathbf{a}^{\prime}_{M},\ y_{1},\ldots,y_{K},z_{1},\ldots,z_{M}\in \mathbb{Z}\) is also a solution of \(y_{1}\cdot\mathbf{a}_{1}+\cdots+y_{K}\cdot\mathbf{a}_{K}=\mathbf{0}\). Let \(\mathcal{M}\) denote the \(\mathbb{Z}[X^{\pm}]\)-module \[\mathcal{M}\coloneqq\Big{\{}(y_{1},\ldots,y_{K},z_{1},\ldots,z_{M}) \in\mathbb{Z}[X^{\pm}]^{K+M}\ \Big{|}\\ y_{1}\cdot\mathbf{a}_{1}+\cdots+y_{K}\cdot\mathbf{a}_{K}-z_{1}\cdot\mathbf{a} _{1}^{\prime}-\cdots-z_{M}\cdot\mathbf{a}_{M}^{\prime}=\mathbf{0}\Big{\}}, \tag{5}\] and let \(\mathcal{Z}\) denote the \(\mathbb{Z}[X^{\pm}]\)-module \[\mathcal{Z}\coloneqq\Big{\{}(y_{1},\ldots,y_{K},z_{1},\ldots,z_{M})\in \mathbb{Z}[X^{\pm}]^{K+M}\ \Big{|}\ y_{1}\cdot\mathbf{a}_{1}+\cdots+y_{K}\cdot\mathbf{a}_{K}=\mathbf{0}\Big{\}}. \tag{6}\] Then the statement above can be summarized as follows. We have \(\langle\mathcal{G}\rangle\cap\langle\mathcal{H}\rangle=\{e\}\) if and only if \(\mathcal{M}\cap\mathbb{Z}^{K+M}=(\mathcal{M}\cap\mathcal{Z})\cap\mathbb{Z}^{K+M}\). Indeed, the left hand side \(\mathcal{M}\cap\mathbb{Z}^{K+M}\) denotes all the integer solutions of the equation \(y_{1}\cdot\mathbf{a}_{1}+\cdots+y_{K}\cdot\mathbf{a}_{K}=z_{1}\cdot\mathbf{a}_{1}^{\prime }+\cdots+z_{M}\cdot\mathbf{a}_{M}^{\prime}\), while the right hand side \((\mathcal{M}\cap\mathcal{Z})\cap\mathbb{Z}^{K+M}\) denotes all integer solutions of \(y_{1}\cdot\mathbf{a}_{1}+\cdots+y_{K}\cdot\mathbf{a}_{K}=z_{1}\cdot\mathbf{a}_{1}^{\prime }+\cdots+z_{M}\cdot\mathbf{a}_{M}^{\prime}=\mathbf{0}\). By Observation 3.1, we can decide Subgroup Intersection in this case. 
By Lemma 3.1(ii) we can compute the generators of \(\mathcal{M}\) and \(\mathcal{Z}\), then by Lemma 3.1(iii) we can compute the generators of \(\mathcal{M}\cap\mathcal{Z}\). Next, by Lemma 3.2 we can compute the generators of \(\mathcal{M}\cap\mathbb{Z}^{K+M}\) and \((\mathcal{M}\cap\mathcal{Z})\cap\mathbb{Z}^{K+M}\). Since these are subgroups of \(\mathbb{Z}^{K+M}\), their equality can be decided by checking whether all generators of one subgroup belong to the other subgroup. ### Coset Intersection Let \(h=(\mathbf{a}_{h},z_{h})\). If \(z_{h}\neq 0\) then \(\langle\mathcal{G}\rangle\cap h\langle\mathcal{H}\rangle=\emptyset\). Therefore we only need to consider the case where \(z_{h}=0\). Then \(\langle\mathcal{G}\rangle\cap h\langle\mathcal{H}\rangle=\emptyset\) if and only if there is no solution for \(y_{1}\cdot\mathbf{a}_{1}+\cdots+y_{K}\cdot\mathbf{a}_{K}=z_{1}\cdot\mathbf{a}_{1}^{\prime }+\cdots+z_{M}\cdot\mathbf{a}_{M}^{\prime}+z\cdot\mathbf{a}_{h},\ y_{1},\ldots,y_{K},z_ {1},\ldots,z_{M}\in\mathbb{Z},z=1\). Let \(\mathcal{M}^{\prime}\) denote the \(\mathbb{Z}[X^{\pm}]\)-module \[\mathcal{M}^{\prime}\coloneqq\Big{\{}(y_{1},\ldots,y_{K},z_{1}, \ldots,z_{M},z)\in\mathbb{Z}[X^{\pm}]^{K+M+1}\ \Big{|}\\ y_{1}\cdot\mathbf{a}_{1}+\cdots+y_{K}\cdot\mathbf{a}_{K}-z_{1}\cdot \mathbf{a}_{1}^{\prime}-\cdots-z_{M}\cdot\mathbf{a}_{M}^{\prime}-z\cdot\mathbf{a}_{h}=\mathbf{ 0}\Big{\}}. \tag{7}\] We have \(\langle\mathcal{G}\rangle\cap h\langle\mathcal{H}\rangle=\emptyset\) if and only if \(\big{(}\mathcal{M}^{\prime}\cap\mathbb{Z}^{K+M+1}\big{)}\cap\big{(}\mathbb{Z}^ {K+M}\times\{1\}\big{)}=\emptyset\). Again, \(\mathcal{M}^{\prime}\cap\mathbb{Z}^{K+M+1}\) can be computed by Lemma 3.1(ii)(iii). So Coset Intersection in this case can be decided using linear algebra over \(\mathbb{Z}\). Case 2: one of \(\langle\mathcal{G}\rangle\) and \(\langle\mathcal{H}\rangle\) is contained in \(\mathcal{A}\) This case is similar to the previous case, we leave the detailed proofs in the appendix and summarize the result by the following proposition. Suppose exactly one of \(\langle\mathcal{G}\rangle\) and \(\langle\mathcal{H}\rangle\) is contained in \(\mathcal{A}\). Let \(h\in\mathcal{A}\rtimes\mathbb{Z}\). Given finite sets of generators of \(\langle\mathcal{G}\rangle\) and \(\langle\mathcal{H}\rangle\) as groups, it is decidable whether \(\langle\mathcal{G}\rangle\cap\langle\mathcal{H}\rangle=\{e\}\) and whether \(\langle\mathcal{G}\rangle\cap h\langle\mathcal{H}\rangle=\emptyset\). Case 3: neither \(\langle\mathcal{G}\rangle\) nor \(\langle\mathcal{H}\rangle\) is contained in \(\mathcal{A}\) By Lemma 3.1, suppose \(\langle\mathcal{G}\rangle\) is generated by \(\langle\mathcal{G}\rangle\cap\mathcal{A}\) and an element \((\mathbf{a}_{\mathcal{G}},d_{\mathcal{G}})\); and \(\langle\mathcal{H}\rangle\) is generated by \(\langle\mathcal{H}\rangle\cap\mathcal{A}\) and an element \((\mathbf{a}_{\mathcal{H}},d_{\mathcal{H}})\). The elements \((\mathbf{a}_{\mathcal{G}},d_{\mathcal{G}})\) and \((\mathbf{a}_{\mathcal{H}},d_{\mathcal{H}})\) can be effectively computed from the generating sets \(\mathcal{G},\mathcal{H}\) by performing the Euclidean algorithm. Furthermore, \(\langle\mathcal{G}\rangle\cap\mathcal{A}\) is a \(\mathbb{Z}[X^{\pm d_{\mathcal{G}}}]\)-module whose generators are explicitly given (by Equation (4)), and \(\langle\mathcal{H}\rangle\cap\mathcal{A}\) is a \(\mathbb{Z}[X^{\pm d_{\mathcal{H}}}]\)-module whose generators are explicitly given. 
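The Euclidean-algorithm step mentioned above can be sketched as follows (our illustration; all names are ours). Given generators \((\mathbf{a}_{i},z_{i})\) with not all \(z_{i}\) zero, it combines them into a single product of generator powers whose \(\mathbb{Z}\)-component is \(d=\gcd(z_{1},\ldots,z_{K})\), shown here for \(\mathcal{A}=\mathbb{Z}[X^{\pm}]\) with the same dictionary encoding used earlier.

```python
# Sketch (ours): combine generators (a_i, z_i) of a subgroup of Z[X^±] ⋊ Z
# into one element whose Z-component is d = gcd(z_1, ..., z_K).

def clean(f): return {e: c for e, c in f.items() if c != 0}

def add(f, g):
    h = dict(f)
    for e, c in g.items():
        h[e] = h.get(e, 0) + c
    return clean(h)

def shift(f, k): return {e + k: c for e, c in f.items()}

def mul(u, v): return (add(u[0], shift(v[0], u[1])), u[1] + v[1])

def inv(u): return (clean({e - u[1]: -c for e, c in u[0].items()}), -u[1])

def power(u, m):
    if m < 0:
        u, m = inv(u), -m
    out = ({}, 0)
    for _ in range(m):
        out = mul(out, u)
    return out

def ext_gcd(a, b):
    """Return (g, s, t) with s*a + t*b = g = gcd(a, b) >= 0."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, s, t = ext_gcd(b, a % b)
    return (g, t, s - (a // b) * t)

def element_with_gcd_exponent(gens):
    """Assumes not all z_i are zero (case (ii) of the structural theorem)."""
    h = ({}, 0)
    for g in gens:
        _, s, t = ext_gcd(h[1], g[1])
        h = mul(power(h, s), power(g, t))     # new Z-component: gcd so far
    return h

g1, g2 = ({1: 1}, 4), ({0: 1, 1: 1}, -6)
a_G, d_G = element_with_gcd_exponent([g1, g2])
assert d_G == 2                               # gcd(4, -6) = 2, as in the example
print(a_G, d_G)
```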
### Subgroup Intersection We have \(\langle\mathcal{G}\rangle\cap\langle\mathcal{H}\rangle\neq\{e\}\) if and only if the equation \((\mathbf{b},0)\cdot(\mathbf{a}_{\mathcal{G}},d_{\mathcal{G}})^{m}=(\mathbf{c},0)\cdot(\mathbf{a}_{\mathcal{H}},d_{\mathcal{H}})^{n}\) has non-trivial solutions \(\mathbf{b}\in\langle\mathcal{G}\rangle\cap\mathcal{A},\ \mathbf{c}\in\langle\mathcal{H}\rangle\cap\mathcal{A},\ m,n\in\mathbb{Z}\). Here, non-trivial means that \(\mathbf{b},\mathbf{c},m,n\) are not all zero. By direct computation, this equation is equivalent to the system \[\mathbf{b}+\frac{X^{md_{\mathcal{G}}}-1}{X^{d_{\mathcal{G}}}-1}\cdot\mathbf{a}_{\mathcal{G}}=\mathbf{c}+\frac{X^{nd_{\mathcal{H}}}-1}{X^{d_{\mathcal{H}}}-1}\cdot\mathbf{a}_{\mathcal{H}},\quad md_{\mathcal{G}}=nd_{\mathcal{H}}. \tag{8}\] An obstacle here is that \(\mathbf{b}\) and \(\mathbf{c}\) take values respectively in the \(\mathbb{Z}[X^{\pm d_{\mathcal{G}}}]\)-module \(\langle\mathcal{G}\rangle\cap\mathcal{A}\) and the \(\mathbb{Z}[X^{\pm d_{\mathcal{H}}}]\)-module \(\langle\mathcal{H}\rangle\cap\mathcal{A}\). This difference makes solving (8) complicated. To overcome this obstacle we define \(d\coloneqq\mathrm{lcm}(d_{\mathcal{G}},d_{\mathcal{H}})\); that is, \(d\) is the smallest positive integer such that \(d_{\mathcal{G}}\mid d,\ d_{\mathcal{H}}\mid d\). We can thus consider both \(\langle\mathcal{G}\rangle\cap\mathcal{A}\) and \(\langle\mathcal{H}\rangle\cap\mathcal{A}\) as \(\mathbb{Z}[X^{\pm d}]\)-modules (see Lemma 3). One can compute a finite set of generators \(S_{\mathcal{G}}\) for \(\langle\mathcal{G}\rangle\cap\mathcal{A}\) as a \(\mathbb{Z}[X^{\pm d}]\)-module. Similarly, one can compute a finite set of generators \(S_{\mathcal{H}}\) for \(\langle\mathcal{H}\rangle\cap\mathcal{A}\) as a \(\mathbb{Z}[X^{\pm d}]\)-module. Define the \(\mathbb{Z}[X^{\pm d}]\)-module \(\mathcal{M}\coloneqq(\langle\mathcal{G}\rangle\cap\mathcal{A})+(\langle\mathcal{H}\rangle\cap\mathcal{A})\); it is generated by the set \(S_{\mathcal{G}}\cup S_{\mathcal{H}}\). The intersection \(\langle\mathcal{G}\rangle\cap\langle\mathcal{H}\rangle\) is non-trivial if and only if at least one of the following two conditions is satisfied: (i) \((\langle\mathcal{G}\rangle\cap\mathcal{A})\cap(\langle\mathcal{H}\rangle\cap\mathcal{A})\neq\{\mathbf{0}\}\). (ii) The equation \[(X^{zd}-1)\cdot\mathbf{a}_{\mathcal{G},\mathcal{H}}\in(X^{d}-1)\cdot\mathcal{M}\] (9) has a solution \(z\in\mathbb{Z}\setminus\{0\}\). Here, \[\mathbf{a}_{\mathcal{G},\mathcal{H}}\coloneqq\frac{X^{d}-1}{X^{d_{\mathcal{G}}}-1}\cdot\mathbf{a}_{\mathcal{G}}-\frac{X^{d}-1}{X^{d_{\mathcal{H}}}-1}\cdot\mathbf{a}_{\mathcal{H}}.\] Proof.: Suppose \(\langle\mathcal{G}\rangle\cap\langle\mathcal{H}\rangle\) is non-trivial. Let \((\mathbf{a},z^{\prime})\in\langle\mathcal{G}\rangle\cap\langle\mathcal{H}\rangle\) be a non-trivial element; then \(d_{\mathcal{G}}\mid z^{\prime}\) and \(d_{\mathcal{H}}\mid z^{\prime}\). Therefore \(d\mid z^{\prime}\) and we can write \(z^{\prime}=zd\) for some \(z\in\mathbb{Z}\). If \(z=0\) then \(\mathbf{a}\in(\langle\mathcal{G}\rangle\cap\mathcal{A})\cap(\langle\mathcal{H}\rangle\cap\mathcal{A})\), so \((\langle\mathcal{G}\rangle\cap\mathcal{A})\cap(\langle\mathcal{H}\rangle\cap\mathcal{A})\neq\{\mathbf{0}\}\).
If \(z\neq 0\) then Equation (8) has a solution with \(md_{\mathcal{G}}=nd_{\mathcal{H}}=zd\), meaning \[\frac{X^{zd}-1}{X^{d}-1}\cdot\left(\frac{X^{d}-1}{X^{d_{\mathcal{G}}}-1}\cdot\mathbf{a}_{\mathcal{G}}-\frac{X^{d}-1}{X^{d_{\mathcal{H}}}-1}\cdot\mathbf{a}_{\mathcal{H}}\right)=\mathbf{c}-\mathbf{b}.\] In particular, we have \(\mathbf{c}-\mathbf{b}\in(\langle\mathcal{G}\rangle\cap\mathcal{A})+(\langle\mathcal{H}\rangle\cap\mathcal{A})=\mathcal{M}\). Multiplying both sides by \(X^{d}-1\) yields \((X^{zd}-1)\cdot\mathbf{a}_{\mathcal{G},\mathcal{H}}\in(X^{d}-1)\cdot\mathcal{M}\). Suppose either (i) or (ii) is satisfied. In case (i), we have \(\langle\mathcal{G}\rangle\cap\langle\mathcal{H}\rangle\supseteq(\langle\mathcal{G}\rangle\cap\mathcal{A})\cap(\langle\mathcal{H}\rangle\cap\mathcal{A})\neq\{\mathbf{0}\}\). In case (ii), we have \[\frac{X^{zd}-1}{X^{d}-1}\cdot\mathbf{a}_{\mathcal{G},\mathcal{H}}\in\mathcal{M}=(\langle\mathcal{G}\rangle\cap\mathcal{A})+(\langle\mathcal{H}\rangle\cap\mathcal{A}),\] so it can be written as \(\mathbf{c}-\mathbf{b}\) for some \(\mathbf{c}\in\langle\mathcal{H}\rangle\cap\mathcal{A},\ \mathbf{b}\in\langle\mathcal{G}\rangle\cap\mathcal{A}\). Therefore Equation (8) has non-trivial solutions by taking \(m\coloneqq zd/d_{\mathcal{G}}\), \(n\coloneqq zd/d_{\mathcal{H}}\). Suppose neither \(\langle\mathcal{G}\rangle\) nor \(\langle\mathcal{H}\rangle\) is contained in \(\mathcal{A}\). Then deciding whether \(\langle\mathcal{G}\rangle\cap\langle\mathcal{H}\rangle=\{e\}\) reduces to Shifted Monomial Membership (see Definition 7). Proof.: (See Algorithm 1 for a summary.) We use Lemma 13. Condition (i) can be decided using Lemma 1(iii), so it suffices to decide condition (ii). The set \[\mathcal{I}\coloneqq\left\{f\in\mathbb{Z}[X^{\pm d}]\ \Big{|}\ f\cdot\mathbf{a}_{\mathcal{G},\mathcal{H}}\in(X^{d}-1)\cdot\mathcal{M}\right\} \tag{10}\] is an ideal of \(\mathbb{Z}[X^{\pm d}]\). Furthermore, given a finite set of generators for \(\mathcal{M}\), one can compute a finite set of generators for \(\mathcal{I}\) using Lemma 1(ii). Condition (ii) in Lemma 13 is equivalent to "\(\mathcal{I}\) contains an element \(X^{zd}-1\) for some \(z\in\mathbb{Z}\setminus\{0\}\)". Performing the variable change \(X^{d}\to X\), we can consider \(\mathcal{I}\) as an ideal of \(\mathbb{Z}[X^{\pm}]\) and the condition becomes "\(\mathcal{I}\) contains an element \(X^{z}-1\) for some \(z\in\mathbb{Z}\setminus\{0\}\)". This is exactly the Shifted Monomial Membership problem (Definition 7) with \(f=1\). The fact that \(\mathcal{M}\coloneqq(\langle\mathcal{G}\rangle\cap\mathcal{A})+(\langle\mathcal{H}\rangle\cap\mathcal{A})\) is a finitely generated \(\mathbb{Z}[X^{\pm d}]\)-module is crucial to the reduction in Proposition 14. This argument is specific to abelian-by-cyclic groups and no longer holds in arbitrary metabelian groups. For example, let \(\mathcal{A}\) now be a finitely presented module over the bivariate Laurent polynomial ring \(\mathbb{Z}[X^{\pm},Y^{\pm}]\). We can similarly define the semidirect product \(\mathcal{A}\rtimes\mathbb{Z}^{2}\), which is metabelian but not abelian-by-cyclic. We can find subgroups \(\langle\mathcal{G}\rangle\) and \(\langle\mathcal{H}\rangle\) such that \(\langle\mathcal{G}\rangle\cap\mathcal{A}\) is a finitely generated \(\mathbb{Z}[X^{\pm}]\)-module, while \(\langle\mathcal{H}\rangle\cap\mathcal{A}\) is a finitely generated \(\mathbb{Z}[Y^{\pm}]\)-module.
In this case, if we define the sum \(\mathcal{M}\coloneqq(\langle\mathcal{G}\rangle\cap\mathcal{A})+(\langle\mathcal{H}\rangle\cap\mathcal{A})\), then \(\mathcal{M}\) is not a \(\mathbb{Z}[X^{\pm d}]\)-module, a \(\mathbb{Z}[Y^{\pm d}]\)-module, or a \(\mathbb{Z}[X^{\pm d},Y^{\pm d}]\)-module for any \(d\geq 1\). While \(\mathcal{M}\) is still a \(\mathbb{Z}\)-module (since both \(\mathbb{Z}[X^{\pm}]\)-modules and \(\mathbb{Z}[Y^{\pm}]\)-modules can be seen as \(\mathbb{Z}\)-modules), it is not finitely generated as a \(\mathbb{Z}\)-module. Since being finitely generated is essential to the effectiveness results in Lemma 1-3, this constitutes the key difficulty in generalizing our results from abelian-by-cyclic groups to arbitrary metabelian groups. The same difficulty also appears in Coset Intersection. ### Coset Intersection Let \(h=(\boldsymbol{a}_{h},z_{h})\). Then \(\langle\mathcal{G}\rangle\cap h\langle\mathcal{H}\rangle\neq\emptyset\) if and only if the equation \((\boldsymbol{b},0)\cdot(\boldsymbol{a}_{\mathcal{G}},d_{\mathcal{G}})^{m}=(\boldsymbol{a}_{h},z_{h})\cdot(\boldsymbol{c},0)\cdot(\boldsymbol{a}_{\mathcal{H}},d_{\mathcal{H}})^{n}\) has solutions \(\boldsymbol{b}\in\langle\mathcal{G}\rangle\cap\mathcal{A},\boldsymbol{c}\in\langle\mathcal{H}\rangle\cap\mathcal{A},m,n\in\mathbb{Z}\). By direct computation, this is equivalent to the system \[\boldsymbol{b}+\frac{X^{md_{\mathcal{G}}}-1}{X^{d_{\mathcal{G}}}-1}\cdot\boldsymbol{a}_{\mathcal{G}}=\boldsymbol{a}_{h}+X^{z_{h}}\cdot\boldsymbol{c}+X^{z_{h}}\cdot\frac{X^{nd_{\mathcal{H}}}-1}{X^{d_{\mathcal{H}}}-1}\cdot\boldsymbol{a}_{\mathcal{H}},\quad md_{\mathcal{G}}=nd_{\mathcal{H}}+z_{h}. \tag{11}\] Again we define \(d\coloneqq\operatorname{lcm}(d_{\mathcal{G}},d_{\mathcal{H}})\) and consider both \(\langle\mathcal{G}\rangle\cap\mathcal{A}\) and \(\langle\mathcal{H}\rangle\cap\mathcal{A}\) as \(\mathbb{Z}[X^{\pm d}]\)-modules, respectively generated by the sets \(S_{\mathcal{G}}\) and \(S_{\mathcal{H}}\). This time, we define the \(\mathbb{Z}[X^{\pm d}]\)-module \(\mathcal{M}^{\prime}\coloneqq(\langle\mathcal{G}\rangle\cap\mathcal{A})+X^{z_{h}}\cdot(\langle\mathcal{H}\rangle\cap\mathcal{A})\); it is generated by the set \(S_{\mathcal{G}}\cup(X^{z_{h}}\cdot S_{\mathcal{H}})\). If \(md_{\mathcal{G}}=nd_{\mathcal{H}}+z_{h}\) has no integer solutions \(m,n\), then \(\langle\mathcal{G}\rangle\cap h\langle\mathcal{H}\rangle=\emptyset\). Otherwise, there exist \(z_{\mathcal{G}}\coloneqq md_{\mathcal{G}},z_{\mathcal{H}}\coloneqq nd_{\mathcal{H}}\in\mathbb{Z}\) such that \(d_{\mathcal{G}}\mid z_{\mathcal{G}},\,d_{\mathcal{H}}\mid z_{\mathcal{H}}\) and \(z_{\mathcal{G}}=z_{\mathcal{H}}+z_{h}\). Then, every solution \((m,n)\in\mathbb{Z}^{2}\) of the equation \(md_{\mathcal{G}}=nd_{\mathcal{H}}+z_{h}\) is of the form \[m=(z_{\mathcal{G}}+zd)/d_{\mathcal{G}},\quad n=(z_{\mathcal{H}}+zd)/d_{\mathcal{H}},\quad z\in\mathbb{Z}.\] Similar to Lemma 13, we can show the following: Let \(z_{\mathcal{G}},z_{\mathcal{H}}\) be integers such that \(d_{\mathcal{G}}\mid z_{\mathcal{G}},\,d_{\mathcal{H}}\mid z_{\mathcal{H}}\) and \(z_{\mathcal{G}}=z_{\mathcal{H}}+z_{h}\). The intersection \(\langle\mathcal{G}\rangle\cap h\langle\mathcal{H}\rangle\) is non-empty if and only if the equation \[X^{zd}\cdot\boldsymbol{a}^{\prime}_{\mathcal{G},\mathcal{H}}-\boldsymbol{a}^{\prime\prime}_{\mathcal{G},\mathcal{H}}\in(X^{d}-1)\cdot\mathcal{M}^{\prime} \tag{12}\] has a solution \(z\in\mathbb{Z}\).
Here, \[\boldsymbol{a}^{\prime}_{\mathcal{G},\mathcal{H}}\coloneqq X^{z_{\mathcal{G}}} \cdot\frac{X^{d}-1}{X^{d_{\mathcal{G}}}-1}\cdot\boldsymbol{a}_{\mathcal{G}}-X ^{z_{\mathcal{H}}}\cdot\frac{X^{d}-1}{X^{d_{\mathcal{H}}}-1}\cdot\boldsymbol{a} _{\mathcal{H}},\] and \[\boldsymbol{a}^{\prime\prime}_{\mathcal{G},\mathcal{H}}\coloneqq\frac{X^{d}-1}{X ^{d_{\mathcal{G}}}-1}\cdot\boldsymbol{a}_{\mathcal{G}}-\frac{X^{d}-1}{X^{d_{ \mathcal{H}}}-1}\cdot\boldsymbol{a}_{\mathcal{H}}+(X^{d}-1)\cdot\boldsymbol{a} _{h}.\] We can decide if \[f\cdot\mathbf{a}^{\prime}_{\mathcal{G},\mathcal{H}}-\mathbf{a}^{\prime\prime}_{\mathcal{G },\mathcal{H}}\in(X^{d}-1)\cdot\mathcal{M}^{\prime} \tag{13}\] has solution \(f\in\mathbb{Z}[X^{\pm d}]\) by deciding membership of \(\mathbf{a}^{\prime\prime}_{\mathcal{G},\mathcal{H}}\) in the \(\mathbb{Z}[X^{\pm d}]\)-module generated by \(\mathbf{a}^{\prime}_{\mathcal{G},\mathcal{H}}\) and \(\mathcal{M}^{\prime}\) (see Lemma 1(i)). If Equation (13) does not have a solution \(f\in\mathbb{Z}[X^{\pm d}]\), then (12) cannot have a solution \(z\in\mathbb{Z}\); otherwise, we can compute a solution \(f=f_{0}\) of Equation (13). For example, \(f_{0}\) can be computed by enumerating all elements \(f\in\mathbb{Z}[X^{\pm d}]\) (there are countably many), and test for each one whether it satisfies Equation (13). Since Equation (13) has a solution, this procedure must terminate. Consider the ideal \[\mathcal{I}^{\prime}\coloneqq\left\{f\in\mathbb{Z}[X^{\pm d}]\;\middle|\;f \cdot\mathbf{a}^{\prime}_{\mathcal{G},\mathcal{H}}\in(X^{d}-1)\cdot\mathcal{M}^{ \prime}\right\} \tag{14}\] of \(\mathbb{Z}[X^{\pm d}]\), then a finite set of generators for \(\mathcal{I}^{\prime}\) can be computed by Lemma 1(ii). The solution set \[\left\{f\in\mathbb{Z}[X^{\pm d}]\;\middle|\;f\cdot\mathbf{a}^{\prime}_{\mathcal{G},\mathcal{H}}-\mathbf{a}^{\prime\prime}_{\mathcal{G},\mathcal{H}}\in(X^{d}-1) \cdot\mathcal{M}^{\prime}\right\}\] is equal to \(f_{0}+\mathcal{I}^{\prime}\coloneqq\left\{f_{0}+g\;\middle|\;g\in\mathcal{I}^{ \prime}\right\}\). Suppose neither \(\left\langle\mathcal{G}\right\rangle\) nor \(\left\langle\mathcal{H}\right\rangle\) is contained in \(\mathcal{A}\). Then deciding whether \(\left\langle\mathcal{G}\right\rangle\cap h\langle\mathcal{H}\rangle=\emptyset\) reduces to Shifted Monomial Membership. Proof.: (See Algorithm 2 for a summary.) Suppose there exist \(z_{\mathcal{G}},z_{\mathcal{H}}\in\mathbb{Z}\) such that \(d_{\mathcal{G}}\mid z_{\mathcal{G}},\;d_{\mathcal{H}}\;\middle|\;z_{\mathcal{H}}\) and \(z_{\mathcal{G}}=z_{\mathcal{H}}+z_{h}\), otherwise \(\left\langle\mathcal{G}\right\rangle\cap h\langle\mathcal{H}\rangle=\emptyset\). By Lemma 16 and 17, it suffices to decide whether there exists \(z\in\mathbb{Z}\) such that \(X^{z}\in f_{0}+\mathcal{I}^{\prime}\). This is equivalent to \(X^{z}-f_{0}\in\mathcal{I}^{\prime}\). We can decide whether \(X^{0}-f_{0}\in\mathcal{I}^{\prime}\) using ideal membership of \(1-f_{0}\) in \(\mathcal{I}^{\prime}\). Then we use Shifted Monomial Membership to decide whether there exists \(z\in\mathbb{Z}\setminus\{\mathbf{0}\}\) such that \(X^{z}-f_{0}\in\mathcal{I}^{\prime}\). ## 5 Deciding Shifted Monomial Membership In this section we show that Shifted Monomial Membership is decidable. Recall that for this problem, we are given a finite set of generators of an ideal \(\mathcal{I}\subseteq\mathbb{Z}[X^{\pm}]\), as well as a Laurent polynomial \(f\in\mathbb{Z}[X^{\pm}]\). 
We want to decide if there exists \(z\in\mathbb{Z}\setminus\{\mathbf{0}\}\) such that \(X^{z}-f\in\mathcal{I}\). The outline of the proof is as follows. In Subsection 5.1 we first simplify the problem by reducing to ideals \(\widetilde{\mathcal{I}}\) over the ring \(\mathbb{Z}[X]\) instead of \(\mathcal{I}\subseteq\mathbb{Z}[X^{\pm}]\). We then consider the greatest common divisor \(\varphi\) of the elements in \(\widetilde{\mathcal{I}}\), and divide into five cases according to \(\varphi\). Each of Subsections 5.2-5.6 treats a separate case. A common idea in each case is to give a bound on the absolute value of \(z\) whenever Shifted Monomial Membership has positive answer. See Algorithm 3 for a summary. ### Reduction to ideals of \(\mathbb{Z}[X]\) Let \(g_{1},\ldots,g_{m}\) be the given generators of the ideal \(\mathcal{I}\subseteq\mathbb{Z}[X^{\pm}]\). Without loss of generality suppose none of the \(g_{i}\) is zero. Multiplying any \(g_{i}\) with any power of \(X\) does not change the ideal they generate, because \(X\) is invertible in \(\mathbb{Z}[X^{\pm}]\). Therefore we can multiply each \(g_{i}\) with a suitable power of \(X\), and without loss of generality suppose \(g_{1},\ldots,g_{m}\) are polynomials in \(\mathbb{Z}[X]\) instead of \(\mathbb{Z}[X^{\pm}]\), and that they are not divisible by \(X\). Let \(\widetilde{\mathcal{I}}\) denote the ideal of \(\mathbb{Z}[X]\) generated by \(g_{1},\ldots,g_{m}\). **Lemma 19**.: _Let \(g\) be a polynomial in \(\mathbb{Z}[X^{\pm}]\). Then \(g\in\mathcal{I}\) if and only if for some \(c\in\mathbb{N}\), \(X^{c}\cdot g\in\widetilde{\mathcal{I}}\)._ By Lemma 19, the ideal \(\mathcal{I}\subseteq\mathbb{Z}[X^{\pm}]\) contains an element \(X^{z}-f\) for some \(z\neq 0\), if and only if \(\widetilde{\mathcal{I}}\) contains an element \(X^{a}-X^{b}f\) for some \(a,b\in\mathbb{N},a\neq b\). Furthermore, in this case, we have \(z=a-b\). Hence, Shifted Monomial Membership reduces to the following problem: **Problem 20**.: Given the generators of an ideal \(\widetilde{\mathcal{I}}\subseteq\mathbb{Z}[X]\), decide whether \(\widetilde{\mathcal{I}}\) contains any element of the form \(X^{a}-X^{b}f\), \(a,b\in\mathbb{N},a\neq b\). For a non-zero polynomial \(g\in\mathbb{Z}[X]\), its _leading coefficient_ is defined as the coefficient of its monomial of largest degree. For example, the leading coefficient of \(3X^{2}+4\) is \(3\). A _common divisor_ of a set \(S\subseteq\mathbb{Z}[X]\) is a polynomial \(g\) with positive leading coefficient, such that \(g\mid s\) for all \(s\in S\). The _greatest common divisor_ of \(S\), denoted by \(\gcd(S)\), is a polynomial that has the largest degree and largest leading coefficient among all common divisors of \(S\). The greatest common divisor is well-defined over \(\mathbb{Z}[X]\) because it is a Unique Factorization Domain [26]. In particular, as \(\widetilde{\mathcal{I}}\subseteq\mathbb{Z}[X]\) is the ideal generated by \(g_{1},\ldots,g_{m}\), the greatest common divisor \(\gcd(\widetilde{\mathcal{I}})\) is equal to \(\gcd(\{g_{1},\ldots,g_{m}\})\). Denote \(\varphi\coloneqq\gcd(\widetilde{\mathcal{I}})\). Then \(X\nmid\varphi\) because \(X\nmid g_{1}\). We say that a polynomial \(g\in\mathbb{Z}[X]\) is _primitive_ if there is no integer \(d\geq 2\) such that \(d\mid g\). A complex number \(x\) is called a _root of unity_ if \(x^{p}=1\) for some \(p\geq 1\). 
We say that a polynomial \(g\in\mathbb{Z}[X]\) has a _square divisor_ if \(\phi^{2}\mid g\) for some polynomial \(\phi\in\mathbb{Z}[X]\) with degree at least one. A polynomial is called _square-free_ if it does not have a square divisor. Since \(\varphi\neq 0\), there are only five cases regarding \(\varphi\): (i) \(\varphi=1\); (ii) \(\varphi\) is not primitive; (iii) \(\varphi\) is primitive and has a root that is not a root of unity; (iv) \(\varphi\) is primitive, all roots of \(\varphi\) are roots of unity, and \(\varphi\) has a square divisor; (v) \(\varphi\) is primitive, all roots of \(\varphi\) are roots of unity, and \(\varphi\) is square-free. Each of the following subsections deals with one case. ### Case (i): trivial GCD In this case, \(\varphi=1\). The following lemma gives the structure of the ideal \(\widetilde{\mathcal{I}}\) in this case. A polynomial in \(\mathbb{Z}[X]\) is called _monic_ if its leading coefficient is one. **Lemma 21** ([28, p.384-385]).: _Let \(\widetilde{\mathcal{I}}\) be an ideal of \(\mathbb{Z}[X]\) such that \(\gcd(\widetilde{\mathcal{I}})=1\). Then there are only two possible cases for \(\widetilde{\mathcal{I}}\): (i) either \(\widetilde{\mathcal{I}}=\mathbb{Z}[X]\), (ii) or \(\widetilde{\mathcal{I}}\) contains an integer \(c\geq 2\), as well as a monic polynomial \(g\) of degree at least one. Furthermore, given a finite set of generators for \(\widetilde{\mathcal{I}}\), one can decide which case is true. In case (ii), one can explicitly compute such \(c\) and \(g\)._ If \(\widetilde{\mathcal{I}}=\mathbb{Z}[X]\) then obviously it contains an element \(X^{a}-X^{b}f,\ a\neq b\). Suppose now that \(\widetilde{\mathcal{I}}\) contains an integer \(c\geq 2\) and a monic polynomial \(g\) of degree at least one. In particular, \(\widetilde{\mathcal{I}}\subseteq(\mathbb{Z}[X]\cdot g+\mathbb{Z}[X]\cdot c)\). **Lemma 22**.: _The quotient \(\mathbb{Z}[X]/(\mathbb{Z}[X]\cdot g+\mathbb{Z}[X]\cdot c)\) is finite._ Proof.: Let \(\deg g\) denote the degree of \(g\). Since \(g\) is monic, every \(f\in\mathbb{Z}[X]\) can be written as \(f=gh+r\) where \(h,r\in\mathbb{Z}[X]\) and \(\deg r<\deg g\). Therefore, every element in \(\mathbb{Z}[X]\) is equivalent modulo \(g\) to a polynomial with degree at most \(\deg g-1\). But there are only finitely many polynomials modulo \(c\) with degree at most \(\deg g-1\). Therefore, the quotient \(\mathbb{Z}[X]/(\mathbb{Z}[X]\cdot g+\mathbb{Z}[X]\cdot c)\) is finite. Let \(f\mapsto\overline{f}\) denote the canonical projection \(\mathbb{Z}[X]\to\mathbb{Z}[X]/(\mathbb{Z}[X]\cdot g+\mathbb{Z}[X]\cdot c)\). Consider the sequence \(\overline{1},\overline{X},\overline{X^{2}},\cdots\in\mathbb{Z}[X]/(\mathbb{Z}[X]\cdot g+\mathbb{Z}[X]\cdot c)\). Since \(\mathbb{Z}[X]/(\mathbb{Z}[X]\cdot g+\mathbb{Z}[X]\cdot c)\) is finite, there exists \(0\leq p<q\) such that \(\overline{X^{p}}=\overline{X^{q}}\). Furthermore, such integers \(p,q\) can be effectively found by incrementally testing whether \(X^{q}-X^{p}\in(\mathbb{Z}[X]\cdot g+\mathbb{Z}[X]\cdot c)\) (see Lemma 1). Then \(X^{q}-X^{p}\in(\mathbb{Z}[X]\cdot g+\mathbb{Z}[X]\cdot c)\), so \(\overline{X^{r}}=\overline{X^{r-(q-p)}}\) for every \(r\geq q\). From this, we easily obtain the following result. Suppose \(\widetilde{\mathcal{I}}\) contains an integer \(c\geq 2\), as well as a monic polynomial \(g\) of degree at least one.
Then \(\widetilde{\mathcal{I}}\) contains an element of the form \(X^{a}-X^{b}f\), \(a,b\in\mathbb{N},a\neq b\), if and only if \(\widetilde{\mathcal{I}}\) contains an element of the form \(X^{a^{\prime}}-X^{b^{\prime}}f\), \(a^{\prime},b^{\prime}\in[0,q-1]\). Proof.: Since \(\overline{X^{r}}=\overline{X^{r-(q-p)}}\) for every \(r\geq q\), every \(X^{r},\ r\in\mathbb{N}\) is equivalent modulo \((\mathbb{Z}[X]\cdot g+\mathbb{Z}[X]\cdot c)\) to \(X^{r^{\prime}}\) for some \(r^{\prime}\in[0,q-1]\). Since \((\mathbb{Z}[X]\cdot g+\mathbb{Z}[X]\cdot c)\subseteq\widetilde{\mathcal{I}}\), every \(X^{r}\) is also equivalent modulo \(\widetilde{\mathcal{I}}\) to \(X^{r^{\prime}}\) for some \(r^{\prime}\in[0,q-1]\). Therefore, if \(\widetilde{\mathcal{I}}\) contains an element of the form \(X^{a}-X^{b}f\), \(a,b\in\mathbb{N},a\neq b\), then \(\widetilde{\mathcal{I}}\) contains an element of the form \(X^{a^{\prime}}-X^{b^{\prime}}f\), \(a^{\prime},b^{\prime}\in[0,q-1]\). And if \(\widetilde{\mathcal{I}}\) contains some \(X^{a^{\prime}}-X^{b^{\prime}}f\), \(a^{\prime},b^{\prime}\in[0,q-1]\), then it also contains \(X^{a^{\prime}+q(q-p)}-X^{b^{\prime}}f\). Taking \(a\coloneqq a^{\prime}+q(q-p),b\coloneqq b^{\prime}\) we have \(X^{a}-X^{b}f\in\widetilde{\mathcal{I}}\) and \(a\neq b\). Since there are only finitely many integers in \([0,q-1]\), one can decide whether \(\widetilde{\mathcal{I}}\) contains an element of the form \(X^{a^{\prime}}-X^{b^{\prime}}f\), \(a^{\prime},b^{\prime}\in[0,q-1]\) by enumerating all such \(a^{\prime},b^{\prime}\). ### Case (ii): non-primitive GCD In this case, \(\varphi\) is not primitive. Suppose \(d\mid\varphi\) with \(d\geq 2\). Then \(d\) divides every element in \(\widetilde{\mathcal{I}}\). We show that there is an effectively computable bound on \(a-b\). Let \(d\geq 2\). If \(d\mid X^{a}-X^{b}f\), then \(0\leq a-b\leq\deg f\). Proof.: If \(a>b+\deg f\) then \(\deg X^{a}>\deg X^{b}f\), so the leading coefficient of \(X^{a}-X^{b}f\) is \(1\), a contradiction to \(d\mid X^{a}-X^{b}f\). Similarly if \(a<b\) then the coefficient of the monomial \(X^{a}\) in \(X^{a}-X^{b}f\) is one, a contradiction to \(d\mid X^{a}-X^{b}f\). Therefore \(0\leq a-b\leq\deg f\). Therefore if \(X^{a}-X^{b}f\in\widetilde{\mathcal{I}},\ a\neq b\), then \(d\mid X^{a}-X^{b}f\), and we must have \(a-b\in[1,\deg f]\). By Lemma 19, we have \(X^{a}-X^{b}f\in\widetilde{\mathcal{I}}\) if and only if \(X^{a-b}-f\in\mathcal{I}\). Therefore in this case, it suffices to decide for each \(r\in[1,\deg f]\) whether \(X^{r}-f\in\mathcal{I}\). ### Case (iii): non-root of unity In this case, \(\varphi\) has a root \(x\) that is not a root of unity. Since \(X\nmid g_{1}\) we have \(X\nmid\varphi\), so \(x\neq 0\). Let \(\mathbb{K}\) be an algebraic number field that contains \(x\). The key idea in this case is to use the _height function_ over \(\mathbb{K}\) to give a bound on \(|a-b|\). For an exact construction of the height function, see [29, Section 3.2]. In this paper we will only make use of its properties listed in the following lemma. [Height of algebraic numbers [29, Property 3.3 and Section 3.6]] Let \(\mathbb{K}\) be an algebraic number field and denote \(\mathbb{K}^{*}\coloneqq\mathbb{K}\setminus\{0\}\). There exists a map \(H\colon\mathbb{K}^{*}\to\mathbb{R}_{\geq 0}\) that satisfies to following properties. ** **(i)**: _For any_ \(n\in\mathbb{Z}\) _and_ \(y\in\mathbb{K}^{*}\)_, we have_ \(H(y^{n})=H(y)^{|n|}\)_._ **(ii)**: _For all_ \(y\in\mathbb{K}^{*}\)_, we have_ \(H(y)\geq 1\)_. 
And \(H(y)=1\) _if and only if_ \(y\) _is a root of unity._ _For any_ \(y\in\mathbb{K}^{*}\)_, the value_ \(H(y)\) _is called the_ height _of_ \(y\)_; it is an algebraic number that can be effectively computed._ Since \(x\) is not a root of unity, we have \(H(x)>1\). **Lemma 26**.: _Let \(x\neq 0\) be a root of \(\varphi\) that is not a root of unity. If \(\varphi\mid X^{a}-X^{b}f\), then \(f(x)\neq 0\) and \(|a-b|=\frac{\log H(f(x))}{\log H(x)}\)._ Proof.: Since \(\varphi\mid X^{a}-X^{b}f\) and \(x\) is a root of \(\varphi\), we have \(x^{a}-x^{b}f(x)=0\). Therefore \(x^{a-b}=f(x)\). Since \(x\neq 0\) we have \(f(x)\neq 0\). Taking the height function on both sides of \(x^{a-b}=f(x)\) yields \(H(x)^{|a-b|}=H(x^{a-b})=H(f(x))\), so \(|a-b|=\frac{\log H(f(x))}{\log H(x)}\). If \(X^{a}-X^{b}f\in\widetilde{\mathcal{I}}\) then we must have \(\varphi\mid X^{a}-X^{b}f\). By Lemma 19, we have \(X^{a}-X^{b}f\in\widetilde{\mathcal{I}}\) if and only if \(X^{a-b}-f\in\mathcal{I}\). Therefore by Lemma 26, it suffices to decide whether \(r\coloneqq\frac{\log H(f(x))}{\log H(x)}\) is a non-zero integer, and then decide whether one of \(X^{r}-f\) and \(X^{-r}-f\) is in \(\mathcal{I}\). ### Case (iv): square divisor In this case, \(\varphi\) has a square divisor. Suppose \(\phi^{2}\mid\varphi\) where \(\deg\phi\geq 1\). Let \(x\) be a root of \(\phi\); then \(x\neq 0\) since \(X\nmid\varphi\). The key here is that if \(\phi^{2}\mid g\), then \(\phi\mid g^{\prime}\) where \(g^{\prime}\) denotes the derivative of \(g\). Indeed, writing \(g=\phi^{2}h\), then \(g^{\prime}=2\phi\phi^{\prime}h+\phi^{2}h^{\prime}\) is divisible by \(\phi\). **Lemma 27**.: _Let \(x\neq 0\) be any root of \(\phi\). If \(\phi^{2}\mid X^{a}-X^{b}f\) where \(a\neq b\), then \(a-b=\frac{xf^{\prime}(x)}{f(x)}\)._ Proof.: If \(a>b\) then \(\phi^{2}\mid X^{a-b}-f\). Taking the derivative of \(X^{a-b}-f\) yields \(\phi\mid(a-b)X^{a-b-1}-f^{\prime}\). Since \(\phi(x)=0\) this yields \((a-b)x^{a-b-1}=f^{\prime}(x)\). On the other hand, \(\phi^{2}\mid X^{a}-X^{b}f\) yields \(x^{a-b}=f(x)\). Combining these two equations, we obtain \(a-b=\frac{xf^{\prime}(x)}{f(x)}\). If \(a<b\) then \(\phi^{2}\mid 1-X^{b-a}f\). Taking the derivative of \(1-X^{b-a}f\) yields \(\phi\mid(b-a)X^{b-a-1}f+X^{b-a}f^{\prime}\). Since \(\phi(x)=0\) this yields \((b-a)x^{b-a-1}f(x)+x^{b-a}f^{\prime}(x)=0\). Since \(\phi^{2}\mid X^{a}-X^{b}f\) we have \(x^{a}=x^{b}f(x)\), so \(f(x)\neq 0\). Therefore \(a-b=\frac{xf^{\prime}(x)}{f(x)}\). As in the previous cases we have \(X^{a}-X^{b}f\in\widetilde{\mathcal{I}}\) if and only if \(X^{a-b}-f\in\mathcal{I}\). Therefore, by Lemma 27, it suffices to decide whether \(r\coloneqq\frac{xf^{\prime}(x)}{f(x)}\) is a non-zero integer, and then decide whether \(X^{r}-f\in\mathcal{I}\). ### Case (v): only roots of unity In this case, \(\varphi\) is primitive, square-free, and all its roots are roots of unity. **Lemma 28**.: _Let \(\varphi\in\mathbb{Z}[X]\) be a primitive, square-free polynomial such that all its roots are roots of unity. Then there exists an effectively computable integer \(p\geq 1\) such that \(\varphi\mid X^{p}-1\)._ Proof.: Since \(\varphi\) is square-free, it has no repeated roots over the complex numbers [30]. Recall that roots of unity are of the form \(e^{\frac{2\pi is}{r}},\ r,s\in\mathbb{N}\). Let \(e^{\frac{2\pi iq_{1}}{p_{1}}},\ldots,e^{\frac{2\pi iq_{d}}{p_{d}}}\) be all the roots of \(\varphi\).
Let \(p\geq 1\) be a common multiple of \(p_{1},\ldots,p_{d}\); then these roots can be written as \(e^{\frac{2\pi iQ_{1}}{p}},\ldots,e^{\frac{2\pi iQ_{d}}{p}}\) where \(Q_{1},\ldots,Q_{d}\in[0,p-1]\) are pairwise distinct. Therefore \(\varphi\) divides \(X^{p}-1=\prod_{Q=0}^{p-1}(X-e^{\frac{2\pi iQ}{p}})\) in the ring \(\mathbb{Q}[X]\). Hence \(\varphi\) divides \(c(X^{p}-1)\) in the ring \(\mathbb{Z}[X]\) for some \(c\in\mathbb{Z}\). Without loss of generality suppose \(\varphi\neq 1\). Since \(\varphi\) is primitive, it does not divide \(c\), so it must divide \(X^{p}-1\) in the ring \(\mathbb{Z}[X]\). Let \(p\geq 1\) be such that \(\varphi\mid X^{p}-1\). Write \(\widetilde{\mathcal{I}}=\varphi\cdot\widetilde{\mathcal{J}}\) where \(\widetilde{\mathcal{J}}\) is an ideal of \(\mathbb{Z}[X]\) with \(\gcd(\widetilde{\mathcal{J}})=1\). In particular, the generators of \(\widetilde{\mathcal{J}}\) are \(\frac{g_{1}}{\varphi},\ldots,\frac{g_{m}}{\varphi}\). We apply Lemma 21 for \(\widetilde{\mathcal{J}}\). If \(\widetilde{\mathcal{J}}=\mathbb{Z}[X]\), then \(\widetilde{\mathcal{I}}\) is simply the ideal generated by \(\varphi\). Then \(X^{a}-X^{b}f\in\widetilde{\mathcal{I}}\) if and only if \(\varphi\mid X^{a}-X^{b}f\). Since \(\varphi\mid X^{p}-1\), there exist \(a\neq b\) such that \(\varphi\mid X^{a}-X^{b}f\), if and only if there exist \(a^{\prime},b^{\prime}\in[0,p-1]\) (not necessarily distinct), such that \(\varphi\mid X^{a^{\prime}}-X^{b^{\prime}}f\). If \(\widetilde{\mathcal{J}}\neq\mathbb{Z}[X]\), then by Lemma 21, \(\widetilde{\mathcal{J}}\) contains an integer \(c\geq 2\), as well as a monic polynomial \(g\) of degree at least one. Similar to Case (i), consider the equivalence classes of the elements \(\frac{X^{p}-1}{\varphi},\frac{X^{2p}-1}{\varphi},\ldots,\) in the quotient \(\mathbb{Z}[X]/(\mathbb{Z}[X]\cdot c+\mathbb{Z}[X]\cdot g)\). Since \(\mathbb{Z}[X]/(\mathbb{Z}[X]\cdot c+\mathbb{Z}[X]\cdot g)\) is finite by Lemma 22, there exist \(0\leq q^{\prime}<q\) such that \[\frac{X^{qp}-1}{\varphi}-\frac{X^{q^{\prime}p}-1}{\varphi}\in\mathbb{Z}[X]\cdot c+\mathbb{Z}[X]\cdot g\subseteq\widetilde{\mathcal{J}}, \tag{15}\] and we have the following result, which is analogous to Lemma 23. Suppose \(\widetilde{\mathcal{J}}\) contains an integer \(c\geq 2\), as well as a monic polynomial \(g\) of degree at least one. Then \(\widetilde{\mathcal{I}}=\varphi\cdot\widetilde{\mathcal{J}}\) contains an element of the form \(X^{a}-X^{b}f\), \(a,b\in\mathbb{N},a\neq b\), if and only if \(\widetilde{\mathcal{I}}\) contains an element of the form \(X^{a^{\prime}}-X^{b^{\prime}}f\), \(a^{\prime},b^{\prime}\in[0,pq-1]\). Since there are only finitely many integers in \([0,pq-1]\), one can decide whether \(\widetilde{\mathcal{I}}\) contains an element of the form \(X^{a^{\prime}}-X^{b^{\prime}}f\), \(a^{\prime},b^{\prime}\in[0,pq-1]\) by enumerating all such \(a^{\prime},b^{\prime}\).
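To summarize Section 5, here is a sketch (ours, using SymPy) of the top-level case dispatch on \(\varphi=\gcd(\widetilde{\mathcal{I}})\). It assumes the generators have already been normalized into \(\mathbb{Z}[X]\) and made not divisible by \(X\) as in Section 5.1, and it only classifies \(\varphi\) into cases (i)-(v); the per-case decision procedures (finite-quotient search, height bound, derivative bound, period \(p\)) are not implemented here.

```python
# Sketch (ours): classify phi = gcd(g_1, ..., g_m) into cases (i)-(v).
# Assumes each g_i is a polynomial in Z[X] not divisible by X (Section 5.1).
from sympy import symbols, Poly, gcd, factor_list, diff, degree

X = symbols('X')

def classify_gcd(gens):
    phi = gens[0]
    for g in gens[1:]:
        phi = gcd(phi, g)
    phi_poly = Poly(phi, X, domain='ZZ')
    content, _ = phi_poly.primitive()
    if phi_poly.total_degree() == 0 and content == 1:
        return "(i) phi = 1"
    if content > 1:
        return "(ii) phi is not primitive"
    # phi primitive: all roots are roots of unity iff every irreducible
    # factor of positive degree is a cyclotomic polynomial.
    _, factors = factor_list(phi, X)
    roots_of_unity = all(Poly(fac, X).is_cyclotomic
                         for fac, _ in factors if degree(fac, X) >= 1)
    if not roots_of_unity:
        return "(iii) phi has a root that is not a root of unity"
    square_free = degree(gcd(phi, diff(phi, X)), X) == 0
    return ("(v) square-free, only roots of unity" if square_free
            else "(iv) phi has a square divisor")

print(classify_gcd([X**2 - 1, X**3 - 1]))           # gcd X - 1     -> case (v)
print(classify_gcd([2*X - 2, 4*X**2 - 4]))          # gcd 2X - 2    -> case (ii)
print(classify_gcd([X**2 - 2, X**3 - 2*X]))         # gcd X^2 - 2   -> case (iii)
print(classify_gcd([(X - 1)**2 * (X + 1)]))         # phi itself    -> case (iv)
print(classify_gcd([X**2 - X - 1, X**2 + X + 1]))   # gcd 1         -> case (i)
```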
2309.10092
Conformal Temporal Logic Planning using Large Language Models
This paper addresses planning problems for mobile robots. We consider missions that require accomplishing multiple high-level sub-tasks, expressed in natural language (NL), in a temporal and logical order. To formally define the mission, we treat these sub-tasks as atomic predicates in a Linear Temporal Logic (LTL) formula. We refer to this task specification framework as LTL-NL. Our goal is to design plans, defined as sequences of robot actions, accomplishing LTL-NL tasks. This action planning problem cannot be solved directly by existing LTL planners because of the NL nature of atomic predicates. To address it, we propose HERACLEs, a hierarchical neuro-symbolic planner that relies on a novel integration of (i) existing symbolic planners generating high-level task plans determining the order at which the NL sub-tasks should be accomplished; (ii) pre-trained Large Language Models (LLMs) to design sequences of robot actions based on these task plans; and (iii) conformal prediction acting as a formal interface between (i) and (ii) and managing uncertainties due to LLM imperfections. We show, both theoretically and empirically, that HERACLEs can achieve user-defined mission success rates. Finally, we provide comparative experiments demonstrating that HERACLEs outperforms LLM-based planners that require the mission to be defined solely using NL. Additionally, we present examples demonstrating that our approach enhances user-friendliness compared to conventional symbolic approaches.
Jun Wang, Jiaming Tong, Kaiyuan Tan, Yevgeniy Vorobeychik, Yiannis Kantaros
2023-09-18T19:05:25Z
http://arxiv.org/abs/2309.10092v4
# Conformal Temporal Logic Planning using Large Language Models: ###### Abstract This paper addresses a new motion planning problem for mobile robots tasked with accomplishing multiple high-level sub-tasks, expressed using natural language (NL), in a temporal and logical order. To formally define such missions, we leverage LTL defined over NL-based atomic predicates modeling the considered NL-based sub-tasks. This is contrast to related planning approaches that define LTL tasks over atomic predicates capturing desired low-level system configurations. Our goal is to design robot plans that satisfy LTL tasks defined over NL-based atomic propositions. A novel technical challenge arising in this setup lies in reasoning about correctness of a robot plan with respect to such LTL-encoded tasks. To address this problem, we propose HERACLEs, a hierarchical command natural language planner, that relies on a novel integration of existing tools that include (i) automata theory to determine the NL-specified sub-task the robot should accomplish next to make mission progress; (ii) Large Language Models to design robot plans satisfying these sub-tasks; and (iii) conformal prediction to reason probabilistically about correctness of the designed plans and mission satisfaction and to determine if external assistance is required. We provide extensive comparative experiments on mobile manipulation tasks. The project website is ltl-llm.github.io. ## I Introduction Several motion planning algorithms have been proposed recently that can generate paths satisfying complex high-level tasks expressed as Linear Temporal Logic (LTL) formulas [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]. To define an LTL task, users are required to specify multiple atomic predicates (i.e., Boolean variables) to model desired low-level robot configurations and then couple them using temporal and Boolean operators. However, this demands a significant amount of manual effort, increasing the risk of incorrect formulas, especially for complex requirements. Additionally, complex tasks often result in lengthy LTL formulas, which, in turn, increases the computational cost of designing feasible paths [16]. Natural Language (NL) has also been explored as a more user-friendly means to specify robot missions; however, NL-specified tasks may be characterized by ambiguity. Planning algorithms for NL-based tasks have been proposed recently in [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]; however, unlike LTL planners, they do not offer correctness guarantees, and their performance often deteriorates with increasing task complexity. To mitigate the limitations mentioned above, we propose an alternative approach to specifying complex high-level robot tasks that combines both LTL and NL. Our focus is on mobile robots tasked with performing multiple high-level semantic sub-tasks (e.g., 'deliver a bottle of water to the table') within their environments in a temporal and logical order. To formally describe these tasks, we harness the power of LTL. However, a key departure from related LTL planning approaches lies in how we define atomic predicates. In our method, atomic predicates are true when the aforementioned NL-based sub-tasks are accomplished and false otherwise. This stands in contrast to related LTL planning works, discussed earlier, that rely on defining multiple low-level atomic predicates to describe robot movements and actions for the completion of such sub-tasks. 
As a result, our approach yields shorter LTL formulas and, in turn, more computationally efficient path design and lower manual effort to define complex task requirements. Next, we address the problem of designing robot plans that satisfy LTL tasks defined over NL-based atomic predicates. A novel technical challenge that arises in this setup lies in designing mechanisms (e.g., labeling functions) that can automatically reason about the correctness of plans with respect to such temporal logic specifications. To address this planning problem, we propose a new planner called HERACLEs, for HiERArchical Conformal natural languagE planner, which relies on a novel integration of existing tools that include task decomposition [33, 34, 35], Large Language Models (LLMs) [36, 37, 38, 39, 40, 41, 42, 43], and conformal prediction (CP) [44, 45, 46, 18, 47, 48].

**Contribution:** The contribution of the paper can be summarized as follows. First, we propose a novel approach to specify complex high-level robot tasks that aims to bridge the gap between LTL- and NL-based specification methods.
Second, we introduce a novel robot planning problem for LTL tasks over NL-based atomic predicates. Third, to address this problem, we provide a planning algorithm that relies on a novel integration of existing tools to design robot plans with probabilistic correctness guarantees. Fourth, we provide extensive comparative experiments demonstrating that the proposed planner outperforms related works. ## II Problem Formulation ### _Robot System and Skills_ Consider a mobile robot governed by the following dynamics: \(\mathbf{p}(t+1)=\mathbf{f}(\mathbf{p}(t),\mathbf{u}(t))\), where \(\mathbf{p}(t)\in\mathcal{P}\subset\mathbb{R}^{n}\) stands for the state (e.g., position and orientation) of the robot, and \(\mathbf{u}(t)\in\mathbb{R}^{b}\) stands for control input at discrete time \(t\). We assume that the robot state \(\mathbf{p}(t)\) is known for all time instants \(t\geq 0\). The robot has \(A>0\) number of abilities/skills collected in a set \(\mathcal{A}\in\{1,\ldots,A\}\). Each skill \(a\in\mathcal{A}\) is represented as text such as 'take a picture', 'grab', or'move to'. Application of a skill \(a\) at an object/region with location \(\mathbf{x}\) at time \(t\geq 0\) is denoted by \(s(a,\mathbf{x},t)\) or, with slight abuse of notation, when it is clear from the context, by \(s(t)\) for brevity. Also, we assume that the robot has access to low level controllers \(\mathbf{u}(t)\) to apply the skills in \(\mathcal{A}\). Hereafter, we assume perfect/error-free execution of these capabilities. ### _Partially Known Semantic Environment_ The robot resides in a semantic environment \(\Omega\subseteq\mathbb{R}^{d}\), \(d\in\{2,3\}\) with fixed, static, and potentially unknown obstacle-free space denoted by \(\Omega_{\text{free}}\subseteq\Omega\). We assume that \(\Omega_{\text{free}}\) is populated with \(M>0\) static semantic objects. Each object \(e\) is characterized by its expected location \(\mathbf{x}_{e}\) and semantic label \(c_{e}\in\mathcal{C}\), where \(\mathcal{C}\) is a set collecting all semantic labels that the robot can recognize (e.g., 'bottle' or 'chair'). Notice that the occupied space \(\Omega\setminus\Omega_{\text{free}}\) (e.g., walls) may prevent access to semantic objects of interest. We assume that the robot is equipped with (perfect) sensors allowing it to detect obstacles and detect/classify semantic objects. ### _Task Specification_ The robot is responsible for accomplishing a high-level task expressed as an LTL formula [49, 50, 51]. LTL is a formal language that comprises a set of atomic propositions (i.e., Boolean variables), denoted by \(\mathcal{AP}\), Boolean operators, (i.e., conjunction \(\land\), and negation \(\neg\)), and two temporal operators, next \(\bigcirc\) and until \(\mathcal{U}\). LTL formulas over a set \(\mathcal{AP}\) can be constructed based on the following grammar: \(\phi::=\text{true}\mid\pi\mid\phi_{1}\land\phi_{2}\mid\neg\phi\mid\bigcirc \phi\mid\phi_{1}\mathcal{U}\;\phi_{2}\), where \(\pi\in\mathcal{AP}\). For brevity we abstain from presenting the derivations of other Boolean and temporal operators, e.g., _always_\(\square\), _eventually_\(\Diamond\), _implication_\(\Rightarrow\), which can be found in [16]. For simplicity, hereafter, we restrict our attention to co-safe LTL formulas that is a fragment of LTL that exclude the 'always' operator. 
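As a lightweight illustration of this grammar, one possible syntax-tree encoding is sketched below in Python; the class names and the treatment of the constant 'true' as a reserved atomic proposition are our own modeling shortcuts rather than anything prescribed above.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class AP:
    """Atomic proposition pi (in this setting, an NL-based sub-task)."""
    name: str

@dataclass(frozen=True)
class Not:
    arg: 'Formula'

@dataclass(frozen=True)
class And:
    left: 'Formula'
    right: 'Formula'

@dataclass(frozen=True)
class Next:
    arg: 'Formula'

@dataclass(frozen=True)
class Until:
    left: 'Formula'
    right: 'Formula'

Formula = Union[AP, Not, And, Next, Until]

TRUE = AP('true')  # the constant 'true' of the grammar, modeled as a reserved AP

def eventually(phi: 'Formula') -> 'Formula':
    """Derived operator: <>phi = true U phi (stays inside the co-safe fragment)."""
    return Until(TRUE, phi)

# Example: phi = <>pi_2 /\ (not pi_1 U pi_2), the formula used in Example II.1 below.
pi1, pi2 = AP('pi_1'), AP('pi_2')
phi = And(eventually(pi2), Until(Not(pi1), pi2))
```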
We define atomic propositions (APs) so that they are true when a sub-task expressed in natural language (NL) is satisfied, and false otherwise; for example, such a sub-task is 'deliver a bottle of water to location X'. Each NL-based AP \(\pi\), is satisfied by a finite robot trajectory \(\tau\), defined as a finite sequence of decisions \(s(t)\), i.e., \(\tau=s(0),s(1),\ldots,s(T)\), for some horizon \(T\geq 1\). A robot trajectory \(\tau\) satisfying \(\pi\) can be generated using existing Large Language Models (LLMs) [30]. We emphasize that the NL-based definition of APs is fundamentally different from related works on LTL planning (see Section I). In these works, APs are defined so that they are true when the system state \(\mathbf{p}(t)\) belongs to a desired set of states. Then, labeling functions \(\mathcal{L}:\mathcal{P}\rightarrow\mathcal{2}^{\mathcal{AP}}\) can be defined straightforwardly so that they return the APs satisfied at any state \(\mathbf{p}(t)\). A major challenge arising in the considered setup is the construction of a corresponding labeling function \(L\) that can reason about the correctness of plans \(\tau\) with respect to the NL-based APs. We address this challenge in Section III-C using conformal prediction. Co-safe LTL tasks are satisfied by finite robot trajectories \(\tau_{\phi}\) defined as \(\tau_{\phi}=\tau(0),\tau(1),\ldots,\tau(n),\ldots,\tau(N)\), where \(\tau(n)\) is a finite robot trajectory of horizon \(T_{n}\), as defined before. Thus, the total horizon \(H\) of the plan \(\tau_{\phi}\) is \(H=\sum_{n=1}^{N}T_{n}\). We highlight that in \(\tau_{\phi}\), the index \(n\) is different from the time instants \(t\in\{1,\ldots,H\}\). In fact, \(n\in\{1,\ldots,N\}\) is an index, initialized as \(n=0\) and increased by \(1\) every \(T_{n}\) time instants, pointing to the next finite trajectory in \(\tau_{\phi}\). ### _Problem Statement_ This paper addresses the following problem (see Ex. 2.1): **Problem 1**: _Given a robot with known dynamics and capabilities (Section II-A), a partially unknown semantic environment \(\Omega\) (Section II-B), and an LTL-encoded task \(\phi\) constructed using NL-based APs (Section II-C), design (online) a robot path \(\tau_{\phi}\) satisfying \(\phi\)._ **Example II.1**: _Consider a robot with skills \(\mathcal{A}=\{\text{go to, pick up}\}\) residing in an environment shown in Fig. 1. The semantic objects that the robot can recognize are \(\mathcal{C}=\{\text{Coke},\text{Pen},\text{Water Bottle}\}\). The environment along with the locations of all semantic objects is shown in Fig. 1. The task of the robot \(\phi=\Diamond\pi_{2}\land(\neg\pi_{1}\mathcal{U}\pi_{2})\), where \(\pi_{1}\) and \(\pi_{2}\) model the sub-tasks 'Deliver the water bottle to location \(\mathbf{x}_{3}\)' and \(\pi_{2}\) is 'Deliver a coke to the table at location \(\mathbf{x}_{3}\)'. A plan \(\tau_{\phi}\) to satisfy \(\phi\) is defined as \(\tau_{\phi}=s(\text{go to,}\mathbf{x}_{1},1),s(\text{grab,}\mathbf{x}_{1},2),s( \text{go to,}\mathbf{x}_{3},3),s(\text{go to,}\mathbf{x}_{4},4),\)\(s(\text{grab,}\mathbf{x}_{4},5),s(\text{go to,}\mathbf{x}_{3},6)\). Notice that this plan may need Fig. 1: Graphical depiction of a semantic environment with unknown geometric structure. to be revised on-the-fly in case an object has been removed from its expected location or the geometric structure of the environment prohibits access to one the semantic objects. 
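To fix ideas, the sketch below shows one possible Python encoding of the decisions \(s(a,\mathbf{x},t)\) and of the finite plan \(\tau_{\phi}\) from Example II.1; the dataclass names and the placeholder coordinates are illustrative assumptions, not part of the problem formulation.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class Decision:
    skill: str                     # a skill a in A, given as text, e.g. 'go to' or 'grab'
    location: Tuple[float, float]  # x: location of the object/region the skill is applied at
    t: int                         # discrete time step

Plan = List[Decision]              # a finite robot trajectory tau = s(0), s(1), ..., s(T)

# The plan of Example II.1 (coordinates x1, x3, x4 are placeholders):
x1, x3, x4 = (1.0, 0.0), (3.0, 2.0), (4.0, 1.0)
tau_phi: Plan = [
    Decision('go to', x1, 1), Decision('grab', x1, 2), Decision('go to', x3, 3),
    Decision('go to', x4, 4), Decision('grab', x4, 5), Decision('go to', x3, 6),
]
```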
## III Hierarchical Temporal Logic Planning with Natural Language Instructions In this section, we propose HERACLEs, a new hierarchical planning algorithm to address Problem 1; see Fig. 3. In Section III-A, we provide a high-level LTL-based task planner that, given the current mission status, generates a sub-task and constraints that the robot should accomplish next to make progress towards accomplishing \(\phi\). In Section III-B, we discuss how LLMs can be used to design plans \(\tau\), defined in Section II-C, satisfying the considered sub-task and constraints. The generated plans \(\tau\) are executed using available low-level controllers \(\mathbf{u}(t)\). In Section III-C, using conformal prediction, we define a labeling function that can probabilistically reason about the correctness of plans \(\tau\) with respect to the considered sub-task. In Section III-D we discuss how CP dictates when the robot should seek assistance in order to adapt to unanticipated events that include infeasible sub-tasks (e.g., non-existent objects of interest), ambiguous user-specified NL-based APs, and possibly incorrect LLM-based plans \(\tau\). ### _LTL Task Planner: What to Do Next?_ In this section, we present a high-level task planner determining what sub-task the robot should accomplish next to make progress towards accomplishing the LTL task \(\phi\). As a first step, we translate \(\phi\) into a Deterministic Finite state Automaton (DFA) defined as follows [16]; see Fig. 2. **Definition III.1** (Dfa): _A Deterministic Finite state Automaton (DFA) \(D\) over \(\Sigma=2^{\mathcal{AP}}\) is defined as a tuple \(D=\big{(}\mathcal{Q}_{D},q_{D}^{0},\Sigma,\delta_{D},q_{D}^{F}\big{)}\), where \(\mathcal{Q}_{D}\) is the set of states, \(q_{D}^{0}\in\mathcal{Q}_{D}\) is the initial state, \(\Sigma\) is an alphabet, \(\delta_{D}:\mathcal{Q}_{D}\times\Sigma\rightarrow\mathcal{Q}_{D}\) is a deterministic transition relation, and \(q_{D}^{F}\in\mathcal{Q}_{D}\) is the accepting/final state._ Next, we introduce notations and definitions that will be used to construct the task planner. First, to interpret a temporal logic formula over a plan \(\tau_{\phi}=\tau(0),\tau(1),\ldots,\tau(N)\), we use a labeling function \(L\) that maps each sub-plan \(\tau(n)\) to symbols \(\sigma\in\Sigma\). A finite plan \(\tau_{\phi}\) satisfies \(\phi\) if the _word_\(w=L(\tau(0))L(\tau(1))\ldots L(\tau(N))\) yields an _accepting_ DFA run, i.e. if starting from the initial state \(q_{D}^{0}\), each element in \(w\) yields a DFA transition so that the final state \(q_{D}^{F}\) is reached [16]. As discussed in Section II-C, a critical challenge lies in defining the labeling function \(L\). We will temporarily assume that such a labeling function has been designed. Its formal construction is deferred to Section III-C. Also, given any two DFA states \(q_{D},q_{D}^{\prime}\in\mathcal{Q}\) (where \(q_{D}\neq q_{D}^{\prime}\) does not necessarily hold), we define the set \(\Sigma_{q_{D}\rightarrow q_{D}^{\prime}}\subseteq\Sigma\) that collects all symbols \(\sigma\in\Sigma\) that can enable the transition from \(q_{D}\) to \(q_{D}^{\prime}\), i.e., \(\Sigma_{q_{D}\rightarrow q_{D}^{\prime}}=\{\sigma\in\Sigma\mid q_{D}^{\prime} =\delta(q_{D},\sigma)\}\). Note that a DFA along with sets \(\Sigma_{q_{D}\rightarrow q_{D}^{\prime}}\) can be constructed using existing tools such as [52]. 
In what follows, we present a high-level task planner that leverages the DFA accepting condition to generate sub-tasks, online, that the robot should accomplish next to make mission progress. This planner builds upon our earlier works [33, 34]. First, we define a function over the DFA state-space that captures how far any given DFA state is from the DFA accepting state. To define this function, first, we prune the DFA by removing all infeasible transitions, i.e., transitions that cannot be physically enabled. Specifically, a DFA transition from \(q_{D}\) to \(q_{D}^{\prime}\) is infeasible if its activation requires the robot to satisfy more than one AP, i.e., accomplish more than one sub-task, simultaneously; a more formal definition can be found in [33, 34]. All symbols \(\sigma\in\Sigma\) that are generated only if a robot satisfies more than one AP simultaneously are called infeasible and they are collected in a set \(\Sigma^{\text{infleas}}\). Then, we prune the DFA by removing infeasible DFA transitions; see also Fig. 2. Next, we define the function \(d:\mathcal{Q}_{D}\times\mathcal{Q}_{D}\rightarrow\mathbb{N}\) that returns the minimum number of _feasible_ DFA transitions required to reach a state \(q_{D}^{\prime}\in\mathcal{Q}_{D}\) starting from a state \(q_{D}\in\mathcal{Q}_{D}\). This function is defined as: \[d(q_{D},q_{D}^{\prime})=\left\{\begin{array}{ll}|SP_{q_{D},q_{D}^{\prime}}|,\text{if }SP_{q_{D},q_{D}^{\prime}}\text{ exists},\\ \infty,\quad\text{ otherwise},\end{array}\right. \tag{1}\] where \(SP_{q_{D},q_{D}^{\prime}}\) denotes the shortest path (in terms of hops) in the pruned DFA from \(q_{D}\) to \(q_{D}^{\prime}\) and \(|SP_{q_{D},q_{D}^{\prime}}|\) stands for its cost (number of hops). Note that if \(d(q_{D}^{\prime},q_{D}^{\prime})=\infty\), then \(\phi\) can not be satisfied since in the pruning process, only the DFA transitions that are impossible to enable are removed. The function \(d\) is used to generate sub-tasks online given the current mission status. Specifically, let \(q_{D}(t)\) be the DFA state reached by the robot at time \(t\) after applying the first \(t\) actions as per \(\tau\). This state is initialized as \(q_{D}(0)=q_{D}^{0}\). Given \(q_{D}(t)\), we compute all DFA states \(q_{D}^{\prime}\neq q_{D}(t)\) that are one hop closer to \(q_{D}^{F}\) than \(q_{D}(t)\) is. We collect them in the set \[\mathcal{R}(q_{D}(t))= \tag{2}\] \[\{q_{D}^{\prime}\in\mathcal{Q}_{D}\setminus\{q_{D}(t)\}|d(q_{D}^{ \prime},q_{D}^{F})=d(q_{D}(t),q_{D}^{F})-1\}.\] Among all states in \(\mathcal{R}(q_{D}(t))\), we select a DFA state denoted by \(q_{D}^{\text{next}}\). This state is selected randomly although any user-specified criterion can be used. Then, given \(q_{D}(t)\) and \(q_{D}^{\text{next}}\), we construct the set \[\Sigma^{\text{feas}}_{q_{D}(t)\to q_{D}^{\text{next}}}=\Sigma_{q_{D}(t) \to q_{D}^{\text{next}}}\setminus\Sigma^{\text{infleas}}, \tag{3}\] Fig. 2: DFA corresponding to \(\phi=\zeta\pi_{2}\wedge(\neg\pi_{1}\mathcal{U}\pi_{2})\). \(\pi_{1}\) is “Deliver the water bottle to \(\mathbf{x}_{3}\)” and \(\pi_{2}\) is “Deliver a Coke to \(\mathbf{x}_{3}\)”. The red dashed edges correspond to an infeasible DFA transition. that collects all feasible symbols that can generate the transition from \(q_{D}(t)\) to \(q_{D}^{\text{test}}\); see Example III.2. Similarly, we construct the set \(\Sigma_{q_{D}(t)\to q_{D}(t)}^{\text{feas}}\). Notice that this set is empty if there is no self-loop at \(q_{D}(t)\). 
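The sketch below illustrates, under simplifying assumptions, how the pruning step, the distance \(d\) of Eq. (1), the set \(\mathcal{R}(q_{D}(t))\) of Eq. (2), and the feasible-symbol sets of Eq. (3) could be computed on a dictionary-encoded DFA; the encoding, the function names, and the toy DFA (modeled loosely on Fig. 2, with the trap state omitted) are our own choices.

```python
from collections import deque
from typing import Dict, FrozenSet, Set, Tuple

Symbol = FrozenSet[str]                # a symbol sigma in 2^AP, e.g. frozenset({'pi_2'})
Dfa = Dict[Tuple[str, Symbol], str]    # deterministic transitions: (state, symbol) -> state

def prune(delta: Dfa) -> Dfa:
    """Keep only feasible transitions, i.e., symbols requiring at most one AP at a time."""
    return {(q, s): q2 for (q, s), q2 in delta.items() if len(s) <= 1}

def distance(delta: Dfa, src: str, dst: str) -> float:
    """d(src, dst) of Eq. (1): fewest feasible DFA hops from src to dst (inf if unreachable)."""
    pruned, frontier, dist = prune(delta), deque([src]), {src: 0}
    while frontier:
        q = frontier.popleft()
        if q == dst:
            return dist[q]
        for (q1, _), q2 in pruned.items():
            if q1 == q and q2 not in dist:
                dist[q2] = dist[q] + 1
                frontier.append(q2)
    return float('inf')

def next_states(delta: Dfa, states: Set[str], q_now: str, q_final: str) -> Set[str]:
    """R(q_now) of Eq. (2): states one feasible hop closer to the accepting state."""
    target = distance(delta, q_now, q_final) - 1
    return {q for q in states if q != q_now and distance(delta, q, q_final) == target}

def feasible_symbols(delta: Dfa, q_from: str, q_to: str) -> Set[Symbol]:
    """Sigma^feas of Eq. (3): feasible symbols that enable the transition q_from -> q_to."""
    return {s for (q, s), q2 in prune(delta).items() if q == q_from and q2 == q_to}

# Toy DFA loosely modeled on Fig. 2 (trap state omitted); the edge q0 -> qF needs two APs
# at once and is therefore infeasible.
E: Symbol = frozenset()
delta: Dfa = {('q0', E): 'q0', ('q0', frozenset({'pi_2'})): 'q1',
              ('q0', frozenset({'pi_1', 'pi_2'})): 'qF',
              ('q1', E): 'q1', ('q1', frozenset({'pi_2'})): 'q1',
              ('q1', frozenset({'pi_1'})): 'qF'}
print(next_states(delta, {'q0', 'q1', 'qF'}, 'q0', 'qF'))   # -> {'q1'}
print(feasible_symbols(delta, 'q1', 'qF'))                  # -> {frozenset({'pi_1'})}
```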
Then, a plan \(\tau\) will enable the transition from \(q_{D}(t)\) to \(q_{D}^{\text{test}}\) at some discrete time step \(t^{\prime}\geq t\), if (i) the plan \(\tau(t:\bar{t})=s(t),\ldots,s(\bar{t})\) satisfies \(L(\tau(t:\bar{t}))\in\Sigma_{q_{D}(t)\to q_{D}(t)}^{\text{feas}}\) and \(L(\tau(t:\bar{t}))\notin\Sigma_{q_{D}(t)\to q_{D}^{\text{test}}}^{\text{feas}}\) for all \(\bar{t}\in[t,t^{\prime}-1]\), where \(\tau(t:\bar{t})\) denotes the part of the plan \(\tau\) from the time instant \(t\) until \(\bar{t}\), and (ii) the plan \(\tau\) satisfies \(L(\tau)\in\Sigma_{q_{D}(t)\to q_{D}^{\text{test}}}^{\text{feas}}\). If there is no self-loop at \(q_{D}(t)\) then these conditions should hold for \(t^{\prime}=t\). Examples illustrating these conditions can be found in Example III.2. Among all APs in \(\Sigma_{q_{D}(t)\to q_{D}^{\text{test}}}^{\text{feas}}\) that can be used to satisfy (ii), we select one randomly denoted by \(\pi_{\text{next}}\). The NL-based sub-task corresponding to \(\pi_{\text{next}}\) along with all sub-tasks in \(\Sigma_{q_{D}(t)\to q_{D}(t)}^{\text{feas}}\) are inputs provided by the high-level LTL planner to the LLM-based planner discussed next. **Example III.2**: _Consider the DFA in Fig. 2. We have that \(\Sigma=\{\pi_{1},\pi_{2},\pi_{1}\pi_{2},\varnothing\}\), where \(\varnothing\) is the empty symbol (when no AP is satisfied), and \(\Sigma^{\text{infeas}}=\{\pi_{1}\pi_{2}\}\). Also, we have that \(\mathcal{R}(q_{D}^{\text{}})=\{q_{D}^{\text{}},q_{D}^{\text{}}\}\). Notice that \(q_{D}^{\text{}}\) is not included in \(\mathcal{R}_{q_{D}^{\text{}}}\) as the transition from \(q_{D}^{0}\) to \(q_{D}^{F}\) is infeasible. Also, we have that \(\Sigma_{q_{D}^{\text{feas}}\to q_{D}^{0}}^{\text{feas}}=\{\varnothing\}\) and \(\Sigma_{q_{D}^{0}\to q_{D}^{1}}^{\text{feas}}=\{\pi_{2}\}\). Similarly, we have that \(\mathcal{R}(q_{D}^{\text{}})=\{q_{D}^{1},q_{D}^{F}\}\), \(\Sigma_{q_{D}^{0}\to q_{D}^{1}}^{\text{feas}}=\{\pi_{2},\varnothing\}\), and \(\Sigma_{q_{D}^{0}\to q_{D}^{F}}^{\text{feas}}=\{\pi_{1}\}\). Assume \(q_{D}(t)=q_{D}^{1}\) and \(q_{D}^{\text{test}}=q_{D}^{F}\). Then, the only option for \(\pi_{\text{next}}\) is \(\pi_{\text{next}}=\pi_{1}\). In words, this DFA transition will be enabled at \(t^{\prime}\geq t\) by a plan \(\tau\) of length \(t^{\prime}-t\), if the sequence of actions in \(\tau\) until the time instant \(\bar{t}\), for any \(\bar{t}\in[t,t^{\prime}-1]\), does not satisfy \(\pi_{\text{next}}\) (condition (ii)) but the entire plan \(\tau\) satisfies \(\pi_{\text{next}}\) (condition (i))._ ### _LLM Planner: How to Accomplish the Assigned Task?_ Next, our goal is to design a plan \(\tau\) satisfying conditions (i)-(ii) so that the transition from \(q_{D}(t)\) to \(q_{D}^{\text{test}}\) is eventually activated. To synthesize \(\tau\), we utilize existing LLMs. A key challenge here is that LLMs cannot necessarily break down the conditions (i)-(ii) into low-level instructions that are suitable for robotic execution. For instance, an LLM response to the task 'bring me a bottle of water' may be "I need to go to the store and purchase a bottle of water'. Although this response is reasonable, it is not executable by the robot. Therefore, we will inform the LLM that we specifically want the conditions (i)-(ii) to be broken down into sequences of executable robot skills collected in \(\mathcal{A}\). **Prompt Engineering:** To this end, we employ prompt engineering [53]. 
Prompt engineering provides examples in the context text ('prompt') for the LLM specifying the task and the response structure that the model will emulate. The prompt used in this work consists of the following parts. (1) _System description_ that defines the action space determining all possible actions \(s(a,\mathbf{x},t)\) that the robot can apply and rules that the LLM should always respect in its decision-making process. For instance, in our considered simulations, such rules explicitly determine that the robot cannot grasp an object located in a fridge before opening the fridge door. We observed that explicitly specifying such constraints improved the quality of LLM plans. We additionally require the length of the plan \(\tau\) to be less than \(T\), where \(T\geq 0\) is a user-specified hyperparameter. (2) _Environment description_ that describes the expected locations \(\mathbf{x}_{e}\) of each semantic object \(e\) of interest; (3) _Task description_ that includes the NL definition of the task \(\pi_{\text{next}}\) (condition (i)) as well as Fig. 3: Graphical illustration of HERACLEs. Interaction between the ask-for-help module and the LTL planner is not shown for simplicity (see Section III-D for more details). constraints, if any, imposed by condition (ii), that the robot should respect until \(\pi_{\text{next}}\) is satisfied; (4) _History of actions & current environment status_ that includes the sequence of actions, generated by the LLM, that the robot has executed so far towards accomplishing the assigned task. It also includes the current locations of semantic objects that the robot may have sensed or manipulated/moved so far; (5) _Response structure_ describing the desired structure of the LLM output. **Plan Design:** In what follows, we describe how we use the above prompt to generate a plan \(\tau\) incrementally so that conditions (i)-(ii) are satisfied. The process of designing \(\tau\) is converted into a sequence of \(T>0\) multiple-choice question-answering problems for the LLM; \(T\) essentially determines the horizon \(T_{n}\) in Section II-C. Specifically, assume that at time \(t\) the robot reaches a DFA state \(q_{D}(t)\) and the LTL planner generates the AP \(\pi_{\text{next}}\) that needs to be completed next (condition (i)) along with the APs that should not be satisfied in the meantime (condition (ii)). This information is used to generate part (3) in the prompt. Part (4) does not include any textual information at time \(t\) as a new sub-task has just been announced. We denote by \(\ell(t)\) parts (1)-(5) of the prompt and by \(h(t)\) only part (4) of \(\ell(t)\) at time \(t\). Given \(\ell(t)\), the LLM is asked to make a decision \(s(a,\mathbf{x},t)\) among all available ones included in part (1). For simplicity, we adopt the notation \(s(t)\) instead of \(s(a,\mathbf{x},t)\). We also denote by \(\mathcal{S}\) all possible decisions \(s\) that the robot can make; this set can be constructed offline using the action space \(\mathcal{A}\) and the available semantic objects. The LLM makes a decision \(s(t)\) as follows. Given any \(s\in\mathcal{S}\), LLMs can provide a score \(r(s|\ell(t))\); the higher the score, the more likely the decision \(s\) is a valid next step to address the user-specified task \(\ell(t)\). Thus, as in [30], we query the model over all potential decisions \(s\) and we convert them into a vector of 'probabilities' using the softmax function. We denote the softmax score of decision \(s\) by \(g(s|\ell(t))\). 
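The following sketch shows, purely for illustration, how the prompt \(\ell(t)\) could be assembled from parts (1)-(5) and how the softmax scores \(g(s\mid\ell(t))\) over candidate decisions could be computed; the helper names, the placeholder strings, and the dummy scoring function stand in for an actual LLM query and are not the exact prompt used in the experiments.

```python
import math
from typing import Callable, Dict, List

def build_prompt(system: str, environment: str, task: str, history: str, response_format: str) -> str:
    """Concatenate parts (1)-(5): system, environment, task, history/status, response structure."""
    return "\n\n".join([system, environment, task, history, response_format])

def decision_scores(prompt: str, decisions: List[str],
                    score_fn: Callable[[str, str], float]) -> Dict[str, float]:
    """Softmax over raw LLM scores r(s | ell(t)) to obtain g(s | ell(t))."""
    raw = {s: score_fn(prompt, s) for s in decisions}
    m = max(raw.values())
    expd = {s: math.exp(v - m) for s, v in raw.items()}
    z = sum(expd.values())
    return {s: v / z for s, v in expd.items()}

# Toy usage with a dummy scorer (a real system would query the LLM instead).
ell_t = build_prompt("You control a mobile manipulator...", "A Coke is at x4...",
                     "Sub-task: deliver a Coke to x3. Do not satisfy pi_1 before then.",
                     "", "Answer with exactly one action from the list.")
dummy_scores = {"go to x4": 2.0, "grab": 0.5, "do nothing": -1.0}
g = decision_scores(ell_t, list(dummy_scores), lambda _p, s: dummy_scores[s])
print(max(g, key=g.get))  # -> 'go to x4'
```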
Then, the decision selected at time \(t\) is \[s(t)=\arg\max_{s\in\mathcal{S}}g(s|\ell(t)). \tag{4}\] Once \(s(t)\) is selected, the robot executes it, the current time step is updated to \(t+1\), and the robot internal state \(\mathbf{p}(t+1)\) is computed. Then, \(h(t+1)\) is updated by incorporating perceptual feedback as follows. First, we automatically convert this perceptual feedback into text denoted by \(p(t+1)\); recall from Section II that we assume that the robot is equipped with perfect sensors. For instance, when the robot is \(\mathbf{p}(t+1)\), it may sense an unexpected semantic object of interest at location \(\mathbf{x}\) or it may not see object \(e\) at its expected location \(\mathbf{x}_{e}\) that was included in environment description in part (2). This sensor feedback will be converted into text of the form 'object of class \(c\) exists in location \(\mathbf{x}\)' or 'no object in location \(\mathbf{x}\)'). Then, \(h(t+1)\) is updated by including in \(h(t)\) the sensor feedback \(p(t+1)\) as well as the decision \(s(t)\) that was previously taken. With slight abuse of notation, we denote this prompt update by \(h(t+1)=h(t)+s(t)+p(t+1)\) where the summation means concatenation of text. This process is repeated for all subsequent time instants \(t+k\), \(k\in\{2,\ldots,T-1\}\). Part (4) is the only part of the prompt that is updated during the time interval \([t+1,t+T-1]\) using the recursive update function \(h(t+k)=h(t+k-1)+s(t+k-1)+p(t+k)\) for all \(k\in\{1,\ldots,T-1\}\). This process generates a plan \[\tau=s(t),s(t+1),\ldots,s(t+T-1), \tag{5}\] of length \(T\). At time \(t+T\), two steps follow. First, part (2) is updated based on the sensor feedback \(p(t+k)\) collected for all \(k\in\{1,\ldots,T\}\). Part (4) is updated so that it does not include any information. These steps give rise to the prompt \(\ell(t+T)\) which is used to compute a plan for the next sub-task that the LTL planner will generate. Concatenation of all plans \(\tau\) for the sub-tasks generated by the LTL planner gives rise to the plan \(\tau_{\phi}\) (see Section II). ### _Probabilistic Satisfaction Guarantees for the LTL Mission_ As discussed earlier, a key challenge lies in reasoning about correctness of plan \(\tau\) with respect to conditions (i)-(ii) (see Section III-A). This is important as it determines if the transition from \(q_{D}(t)\) to the desired DFA state \(q_{D}^{\text{next}}\) can be enabled after executing the plan \(\tau\). To address this challenge, inspired by [18, 46], we utilize an existing conformal prediction (CP) algorithm [48]. CP also allows us to provide probabilistic satisfaction guarantees for \(\phi\). Specifically, we utilize CP to compute a prediction set that contains plans that satisfy a given LTL task \(\phi\) with high probability. Later, we will show how to use this prediction set to define a labeling function and reason about mission satisfaction. **Single-step Plans:** For simplicity, here we focus on LTL formulas \(\phi\) that can be satisfied by plans \(\tau_{\phi}\) of horizon \(H=1\) (see Section II); later we generalize the results for \(H\geq 1\). This means that synthesizing \(\tau_{\phi}\) requires the LLM to make a single decision \(s\). First, we collect a calibration dataset \(\mathcal{M}=\{(\ell_{\text{calib}}^{i},s_{\text{calib}}^{i})\}_{i=1}^{M}\). We assume that there exists a unique correct decision \(s_{\text{calib}}\) for each \(\ell_{\text{calib}}^{i}\). 
This assumption can be relaxed as in [18]; due to space limitations we abstain from this presentation. Consider a new test data point \(\ell_{\text{test}}\) with unknown correct decision \(s_{\text{test}}\). CP can generate a prediction set \(\mathcal{C}(\ell_{\text{test}})\) of decisions \(s\) containing the correct one \(s_{\text{test}}\) with probability greater than \(1-\alpha\), i.e., \[P(s_{\text{test}}\in\mathcal{C}(\ell_{\text{test}}))\geq 1-\alpha, \tag{6}\] where \(\alpha\in[0,1]\) is user-specified. To generate \(\mathcal{C}(\ell_{\text{test}})\), CP first uses the LLM's confidence \(g\) (see Section III-B) to compute the set of nonconformity scores \(\{r_{i}=1-g(s_{\text{calib}}^{i}\ |\ \ell_{\text{calib}}^{i})\}_{i=1}^{M}\) over the calibration set. The higher the score is, the less each data in the calibration set conforms to the data used for training the LLM. Then CP performs calibration by computing the \(\frac{(M+1)(1-\alpha)}{M}\) empirical quantile of \(r_{1},\ldots,r_{M}\) denoted by \(q\). Then, it generates prediction set \[\mathcal{C}(\ell_{\text{test}})=\{s\in\mathcal{S}\ |\ g(s|\ell_{\text{test}})>1-q\} \tag{7}\] includes all labels (i.e., decisions) that the predictor is at least \(1-q\) confident in. The generated prediction set ensures that the \(1-\alpha\) coverage guarantee, mentioned above, holds. By construction of the prediction sets, given an LTL formula \(\phi\), the LLM output plan \(\tau_{\phi}=\tau=s(t)\) belongs to \(\mathcal{C}(\ell_{\text{test}})\) meaning that the satisfaction probability of \(\phi\) is greater than \(1-\alpha\). A key assumption in CP is that all calibration and test data points are i.i.d. which holds in this setup. **Multi-step Plans:** Next, we generalize the above result to the case where satisfaction of \(\phi\) requires plans \(\tau_{\phi}\) with \(H\geq 1\) decisions selected from \(\mathcal{S}\). The challenge in this case is that the test points \(\{(\ell_{\text{test}}(t+k),s_{\text{test}}(t+k))\}_{k=1}^{T}\) are not independent with each other which violates the exchange-ability assumption required to apply CP. The reason is that the test prompts are not independent since the distribution of prompts \(\ell_{\text{test}}(t)\) depend on past robot decisions as well as on \(\phi\). To address this challenge, as in [18], the key idea is to (i) lift the data to sequences, and (ii) perform calibration at the sequence level using a carefully designed nonconformity score function. First, we construct a calibration dataset as follows. We generate \(M\geq 1\) LTL formulas \(\phi_{i}\). Each LTL formula is broken into a sequence of \(H_{i}\geq 1\) prompts, denoted by: \[\bar{\ell}_{\text{calib}}^{i}=[\ell_{\text{calib}}^{i}(0),\ldots,\ell_{\text{ calib}}^{i}(H_{i}-1)], \tag{8}\] as discussed in Section III-B. Then for each prompt, we manually construct the corresponding ground decisions denoted by \[\tau_{\phi,\text{calib}}^{i}=s_{\text{calib}}^{i}(0),\ldots,s_{\text{calib}}^ {i}(H_{i}-1), \tag{9}\] This gives rise to the calibration set \(\mathcal{M}=\{(\bar{\ell}_{\text{calib}}^{i},\tau_{\phi,\text{calib}}^{i})\}_ {i=1}^{M}\). As before, we assume that each context \(\bar{\ell}_{\text{calib}}\) has a unique correct plan \(\tau_{\phi,\text{calib}}^{i}\). 
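Before moving on to the sequence-level construction used for multi-step plans, the single-step calibration and prediction set of Eqs. (6)-(7) can be sketched as follows, assuming numpy; the calibration scores and candidate decisions below are toy values rather than real calibration data.

```python
import numpy as np

def conformal_quantile(cal_g: np.ndarray, alpha: float) -> float:
    """The ceil((M+1)(1-alpha))/M empirical quantile of the nonconformity scores r_i = 1 - g_i."""
    M = len(cal_g)
    r = 1.0 - cal_g
    level = min(np.ceil((M + 1) * (1 - alpha)) / M, 1.0)
    return float(np.quantile(r, level, method='higher'))

def prediction_set(g_test: dict, q: float) -> set:
    """C(ell_test) = { s : g(s | ell_test) > 1 - q }, as in Eq. (7)."""
    return {s for s, g in g_test.items() if g > 1.0 - q}

# Toy usage: softmax scores of the correct decision on 20 calibration prompts.
cal_g = np.array([0.90, 0.85, 0.97, 0.55, 0.92, 0.88, 0.95, 0.83, 0.90, 0.87,
                  0.93, 0.81, 0.90, 0.96, 0.84, 0.90, 0.89, 0.91, 0.86, 0.94])
q_hat = conformal_quantile(cal_g, alpha=0.05)
print(prediction_set({'go to x1': 0.72, 'grab': 0.20, 'do nothing': 0.08}, q_hat))
# -> {'go to x1'}: a singleton prediction set, so the plan step is treated as correct.
```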
Next, we use the lowest score over the time-steps \(1,\ldots,H_{i}\) as the score for each sequence \(i\) in calibration set, i.e., \[\bar{g}(\tau_{\phi,\text{calib}}^{i}\mid\bar{\ell}_{\text{calib}}^{i})=\min_ {t\in\{1,\ldots,H_{i}\}}g(s_{\text{calib}}^{i}(t)\mid\bar{\ell}_{\text{calib} }^{i}) \tag{10}\] Thus, the non-conformity score of each sequence \(i\) is \[\bar{r}_{i}=1-\bar{g}(\tau_{\phi,\text{calib}}^{i}|\bar{\ell}_{\text{calib}} ^{i}). \tag{11}\] Consider a new LTL formula \(\phi_{\text{test}}\) corresponding to a test data point \(\bar{\ell}_{\text{test}}\) with unknown correct label/plan \(\tau_{\phi,\text{test}}\) of length \(H_{\text{test}}\geq 1\). CP can generate a prediction set \(\bar{\mathcal{C}}(\bar{\ell}_{\text{test}})\) of plans \(\tau_{\phi}\) containing the correct one \(\tau_{\phi,\text{test}}\) with high probability i.e., \[P(\tau_{\phi,\text{test}}\in\bar{\mathcal{C}}(\bar{\ell}_{\text{test}})) \geq 1-\alpha, \tag{12}\] where the prediction set \(\bar{\mathcal{C}}(\bar{\ell}_{\text{test}})\) is defined as \[\bar{\mathcal{C}}(\bar{\ell}_{\text{test}})=\{\tau\mid\bar{g}(\tau|\ell_{ \text{test}})>1-\bar{q}\}, \tag{13}\] where \(\bar{q}\) is the \(\frac{(M+1)(1-\alpha)}{M}\) empirical quantile of \(\bar{r}_{1},\ldots,\bar{r}_{M}\). The generated prediction set ensures that the coverage guarantee in (12) holds. By construction of the prediction sets, the plan \(\tau_{\phi}\) generated by the LLM belongs to \(\bar{\mathcal{C}}(\bar{\ell}_{\text{test}})\). **Causal Construction of the Prediction Set:** Observe that \(\bar{\mathcal{C}}(\bar{\ell}_{\text{test}})\) is constructed after the entire sequence \(\bar{\ell}_{\text{test}}=\ell_{\text{test}}(0),\ldots,\ell_{\text{test}}(H_{ \text{test}}-1)\) is observed. However, at every (test) time \(t\in\{1,\ldots,H_{\text{test}}\}\), the robot observes only \(\ell_{\text{test}}(t)\) and not the whole sequence. Thus, the prediction set needs to be constructed in a causal manner using only the current and past information. Thus, at every (test) time step \(t\), we construct the causal prediction set \(\mathcal{C}(\ell_{\text{test}}(t)))=\{s\mid g(s|\ell(t)))>1-\bar{q}\}\). Alternatively, \(\mathcal{C}(\ell(t)))\) can be constructed using the RAPs approach that can result in smaller prediction sets [48]; see Section IV-C. Then, the causal prediction set associated with \(\bar{\ell}_{\text{test}}\) is defined as \[\mathcal{C}(\bar{\ell}_{\text{test}})=\mathcal{C}(\ell(0)))\times\mathcal{C}( \ell(1)))\ldots\mathcal{C}(\ell(H_{\text{test}}))). \tag{14}\] As shown in [18], \(\mathcal{C}(\bar{\ell}_{\text{test}})\) and \(\bar{\mathcal{C}}(\bar{\ell}_{\text{test}})\), it hold that \(\bar{\mathcal{C}}(\bar{\ell}_{\text{test}})=\mathcal{C}(\bar{\ell}_{\text{ test}})\). **Probabilistic LTL Satisfaction Guarantees:** Using the above CP framework, we can state the following result. **Theorem III.3**: _Consider any co-safe LTL formula \(\phi_{\text{test}}\) and a plan \(\tau_{\phi,\text{test}}\) of horizon \(H_{\text{test}}\geq 1\) designed by HERACLEs. If \(|\mathcal{C}(\ell_{\text{test}}(t)))|=1\), for all \(t\in\{1,\ldots,H_{\text{test}}\}\), then the probability that \(\tau_{\phi,\text{test}}\), generated by HERACLEs, satisfies \(\phi_{\text{test}}\) is at least equal to \(1-\alpha\), i.e., \(\mathbb{P}(\tau_{\phi,\text{test}}\models\phi_{\text{test}})\geq 1-\alpha\)._ This result directly follows from (12). 
If \(|\mathcal{C}(\ell_{\text{test}}(t)))|=1\), for all \(t\in\{1,\ldots,H_{\text{test}}\}\), then, since \(\mathcal{C}(\bar{\ell}_{\text{test}})=\bar{\mathcal{C}}(\bar{\ell}_{\text{test}})\), we have that (12) is equivalent to \(\mathbb{P}(\tau_{\phi,\text{test}}\models\phi_{\text{test}})\geq 1-\alpha\). **Labeling Function:** Theorem III.3 motivates the following definition for the labeling function \(L\): \[L(\tau|\bar{\ell}_{\text{test}})=\begin{cases}\text{T}&\text{if }|\mathcal{C}(\ell_{\text{test}}(t+k))|=1,\forall k\in\{1,\ldots,T-1\}\\ \text{F}&\text{otherwise},\end{cases}\] where T and F stand for the logical true and false, respectively. In words, we say that \(\tau\) enables the transition from \(q_{D}(t)\) to \(q_{D}^{\text{test}}\) only if the cardinality of \(\mathcal{C}(\ell_{\text{test}}(t+k)))\) is \(1\), for all \(k\in\{1,\ldots,T-1\}\). This definition ensures that \(\tau_{\phi,\text{test}}\) will be constructed using plans \(\tau\) with singleton causal prediction sets so that Theorem III.3 holds. Otherwise, if \(|\mathcal{C}(\ell_{\text{test}}(t+k)))|>1\) for at least one \(k\in\{1,\ldots,T\}\), then \(L(\tau|\bar{\ell}_{\text{test}})=F\) and \(\tau\) does not enable the required DFA transition. ### _When to Seek for Assistance?_ Assume that there exists at least one \(k\in\{1,\ldots,T\}\) so that \(|\mathcal{C}(\ell_{\text{test}}(t+k)))|>1\). In this case, the robot asks for help in order to proceed; see Fig. 3. This assistance request and response occurs as follows. First, the robot requests a new sub-task from the LTL planner to make mission progress. To this end, first \(\pi_{\text{next}}\) is removed from \(\Sigma_{q_{D}(t)\to q_{D}^{\text{test}}}^{\text{feas}}\), i.e., \[\Sigma_{q_{D}(t)\to q_{D}^{\text{test}}}^{\text{feas}}=\Sigma_{q_{D}(t)\to q _{D}^{\text{test}}}^{\text{feas}}\setminus\{\pi_{\text{next}}\}. \tag{15}\] Then, the LTL planner selects a new AP from \(\Sigma_{q_{D}(t)\to q_{D}^{\text{test}}}^{\text{feas}}\) and the process of generating a feasible plan \(\tau\) follows (Section III-B). If there are no other available APs in \(\Sigma_{q_{D}(t)\to q_{D}^{\text{test}}}^{\text{feas}}\), i.e., \[\Sigma_{q_{D}(t)\to q_{D}^{\text{test}}}^{\text{feas}}=\emptyset, \tag{16}\] then \(q_{D}^{\text{next}}\) is removed from \(\mathcal{R}(q_{D}(t))\), i.e., \[\mathcal{R}(q_{D}(t))=\mathcal{R}(q_{D}(t))\setminus\{q_{D}^{\text{next}}\}, \tag{17}\] and a new DFA state \(q_{D}^{\text{next}}\) is selected from the resulting set \(\mathcal{R}(q_{D}(t))\). Then then the LTL planner fails to provide assistance. In this case, the robot asks for help from a human operator. Specifically, first a DFA state \(q_{D}^{\text{next}}\) is selected from the original set \(\mathcal{R}(q_{D}(t))\). Then, the robot returns to the user the prompts \(\ell_{\text{test}}(t+k)\) for which it holds \(|\mathcal{C}(\ell_{\text{test}}(t+k))|>1\), waiting for the human operator to select the correct decision \(s(t+k)\). We emphasize that help from the LTL planner can also be requested if the robot cannot physically execute \(s(t)\) (e.g., grabbing a non-reachable or non-existent object) regardless of the prediction set size, as e.g., in [33, 34]; see Sec. IV-B. ## IV Numerical Experiments In this section, we provide extensive comparative experiments to demonstrate HERACLEs. In Section IV-A, we compare the proposed planner against existing LLM planners that require the task description exclusively in NL. 
These experiments show that the performance gap between the baselines and HERACLEs increases significantly as task complexity increases. In Section IV-B, we illustrate the proposed planner on mobile manipulation tasks; see the supplemental material. To demonstrate the seek-for-assistance module, we consider unknown environments and APs with ambiguous instructions. In Section IV-C, we provide additional comparisons showing how various choices of CP algorithm [18, 48] can affect the number of times assistance is requested. Finally, in Section IV-D, we compare the effect of defining NL-based predicates against system-based predicates, which are widely used in related LTL planning works, on the DFA size. In all case studies, we pick GPT-3.5 [39] as the LLM.

### _Comparisons against LLM planners with NL Tasks_

**Setup:** We consider the following semantic objects \(\mathcal{C}=\{\text{Coke, Pen, Water Bottle, Apple, Tin Can}\}\) that can be recognized by the robot. The environment is populated with two cans of Coke, one water bottle, one pen, one tin can, and one apple. The water bottle is inside the fridge and the tin can is inside a drawer. Thus, grabbing, e.g., the pen requires the robot to first open the drawer if it is closed. The status of these containers (open or closed) is not known a priori and, therefore, not included in the environment description in \(\ell(t)\). Instead, it can be provided online through sensor feedback as described in Section III-B. For simplicity, here we assume that the expected locations of all objects are accurate and the obstacle-free space is known. The latter ensures that the robot knows beforehand whether any objects are blocked by surrounding obstacles. The action space \(\mathcal{A}\), defined in Table I, includes \(6\) actions. The action 'do nothing' in \(\mathcal{A}\) is useful when a sub-task can be accomplished in fewer than \(T\) time steps, while the action 'report missing item' is desired when the robot realizes, through sensor feedback \(p(t)\), that objects are not at their expected locations; this action can be used, e.g., to notify a user about missing objects. Given a prompt \(\ell(t)\), the number of choices/decisions \(s\) that the LLM can pick from is \(|\mathcal{S}|=18\). Recall that this set is constructed using \(\mathcal{A}\) and all objects/locations in the environment where the actions in \(\mathcal{A}\) can be applied. We select \(T=7\) for all sub-tasks generated by the LTL planner. **Baseline:** As a baseline for our experiments, we employ saycan, a recently proposed LLM-based planner [30] that requires the task to be fully described using NL. Thus, we manually convert LTL tasks into NL ones, which are then used as inputs for [30]. We compare the accuracy of HERACLEs and saycan over \(111\) case studies.
\begin{table} \begin{tabular}{|c|c|} \hline Symbol & Explanation \\ \hline (1, X) & Go to location X \\ \hline (2, X) & Pick up object X \\ \hline (3) & Put down object \\ \hline (4, X) & Open the door of the container X \\ \hline (5) & Do nothing \\ \hline (6) & Report item missing/Failure \\ \hline \end{tabular} \end{table} TABLE I: Action space used in Section IV. The left column shows how each action is encoded in the prompt and the right one provides its explanation. \(X\) is a variable referring to an object/container selected by the LLM. The plans \(\tau_{\phi}\) are finite sequences of these actions.

For accuracy, we report the percentage of cases where a planner generates a plan that accomplishes the original task. To make comparisons fair, we have applied the algorithms under the following settings: (i) Both saycan and HERACLEs select a decision from the same set \(\mathcal{S}\). (ii) We remove altogether the CP component from our planner (since it does not exist in [30]). This implies that the labeling function \(L\) in our planner is defined naively, in the sense that any plan generated by the LLM is assumed to enable the desired DFA transition. This also implies that the CP module in Fig. 3 will never trigger an assistance request. (iii) We require the baseline to complete the plan within \(T\times K\) steps, where \(K\) is the number of predicates/sub-tasks in \(\phi\). We classify the considered case studies into the following three categories: **Case Study I (Easy):** We consider \(25\) LTL formulas of the form \(\phi=\Diamond\pi_{1}\) where \(\pi_{1}\) is defined as 'Move object \(c\) to location \(\mathbf{x}\)' for various objects \(c\in\mathcal{C}\) and locations \(\mathbf{x}\). We manually translate such formulas into NL as 'Eventually move object \(c\) to location \(\mathbf{x}\)'. The accuracy of the proposed planner was \(100\%\), while the baseline managed to design correct plans in \(24\) cases, i.e., its accuracy was \(96\%\). To compute this accuracy, we manually check the correctness of the designed plans. Notice that the performance of both planners is comparable due to the task simplicity. **Case Study II (Medium):** For medium tasks, we consider \(15\) LTL formulas defined as either \(\phi_{1}=\Diamond\pi_{1}\wedge\Diamond\pi_{2}\) or \(\phi_{2}=\Diamond\pi_{1}\wedge\Diamond\pi_{2}\wedge(\neg\pi_{1}\mathcal{U}\pi_{2})\). The task \(\phi_{1}\) requires the sub-tasks \(\pi_{1}\) and \(\pi_{2}\) to eventually be completed in any order, while \(\phi_{2}\) requires \(\pi_{2}\) to be completed strictly before \(\pi_{1}\). The APs \(\pi_{1}\) and \(\pi_{2}\) are defined as before. The accuracy of our planner and the baseline is \(93.3\%\) and \(40\%\), respectively. Observe that the performance of the baseline drops as temporal and/or logical requirements are incorporated into the task description. **Case Study III (Hard):** As for hard tasks, we consider \(71\) LTL formulas defined over at least \(4\) atomic predicates. Two examples of such LTL formulas are: \(\phi_{1}=\Diamond\pi_{1}\wedge\Diamond\pi_{2}\wedge\Diamond\pi_{3}\wedge\Diamond\pi_{4}\wedge(\neg\pi_{4}\mathcal{U}\pi_{1})\) and \(\phi_{2}=\Diamond\pi_{1}\wedge\Diamond\pi_{2}\wedge\Diamond\pi_{3}\wedge(\neg\pi_{3}\mathcal{U}\pi_{2})\wedge\Diamond\pi_{5}\wedge(\neg\pi_{2}\mathcal{U}\pi_{5})\wedge(\neg\pi_{5}\mathcal{U}\pi_{1})\wedge\Diamond\pi_{4}\).
For instance, \(\phi\) requires the robot to accomplish \(\pi_{1}\), \(\pi_{2}\), \(\pi_{3}\) and \(\pi_{4}\) in any order as long as \(\pi_{1}\) is executed before \(\pi_{4}\). The predicates are defined as before. The accuracy of our planner and the baseline is \(93\%\) and \(14.08\%\). Mistakes made by our planner were mostly because the LLM asked the robot to move to the wrong location to pick up a desired object or the LLM requested the robot to pick up an object inside a closed container without first opening it. Observe that the performance gap increases significantly as the task complexity increases. Also, notice that the performance of the proposed planner does not change significantly across the considered case studies. The reason is that it decomposes the overall planning problem into smaller ones that can be handled efficiently by the LLM. This is in contrast to the baseline where the LLM is responsible for generating plans directly for the original long-horizon task. In the above case studies, the average runtime required for the LTL and the LLM planner to generate a subtask and a decision was \(2.7\times 10^{-5}\) and \(1.2\) secs, respectively. ### _Robotic Platform Demonstrations_ In this section, we demonstrate HERACLEs, using ROS/Gazebo [54], on mobile manipulation tasks using a ground robot (Turtlebot3 Waffle Pi robot [55, 56]) equipped with a manipulator arm with 4 DOFs (OpenManipulator-X [57]). Unlike Section IV-A, the robot is allowed to ask for help, whenever needed, as determined by CP with \(\alpha=0.05\). The robot can recognize the following objects \(\mathcal{C}=\){Coke, Pen, Water Bottle}. Particularly, there are two cans of Coke, one water bottle, and one pen (see Fig. 1). The robot knows the exact position of each object but the obstacle-free space of the environment is unknown. As a result, the robot is not aware a priori if there is any object that cannot be reached due to blocking obstacles. We use existing navigation and sensing stacks [58] for Turtlebots as well as the MoveIt! [59] toolbox for manipulation control. **Case Study I:** Consider the task \(\phi=\Diamond(\pi_{1}\vee\pi_{2})\) where \(\pi_{1}\) means 'Deliver Coke #1 to \(\mathbf{x}_{3}\)' and \(\pi_{2}\) means 'Deliver Coke #2 to \(\mathbf{x}_{3}\). This task requires either Coke #1 or #2 to be delivered to \(\mathbf{x}_{3}\). Initially, the LTL planner selects \(\pi_{2}\) as \(\pi_{\text{next}}\). As the robot navigates the environment to reach Coke #2, it builds an occupancy grid map of the environment that is used, as in [33], to reason about whether the object is blocked by surrounded obstacles or not. Once the robot realizes that Coke #2 is not reachable (see Fig. 4), it requests help from the LTL planner. In response to that request, the LTL planner generates an alternative sub-task, modeled by \(\pi_{1}\), that is eventually successfully accomplished by the robot (see Fig. 5). Assistance from a human operator was never requested in this case study. **Case Study II:** Consider the task \(\phi=\Diamond\pi_{1}\wedge\Diamond\pi_{2}\) where both \(\pi_{1}\) and \(\pi_{2}\) mean 'Bring a drink to location LC'. Observe that these APs are ambiguous as both water and Coke qualify as drinks. Once the LTL planner generates the sub-task \(\pi_{1}\), the LLM selects the action 'go to the Coke location'. However, the prediction set includes two actions 'go to the water bottle location' and 'go to the Coke location'. 
We note here that, interestingly, we did not specify in the prompt that both water and coke qualify as drinks. In this case, the robot asks for help from the LTL planner. The LTL planner cannot provide assistance as there are no alternative sub-tasks to make mission progress. Thus, the robot next seeks help from a user. Once the user selects the desired action and \(\pi_{1}\) is satisfied, the LTL planner generates the next sub-task \(\pi_{2}\) and the above process repeats. ### _Conformal Prediction Comparisons_ The prediction sets can be constructed using existing CP algorithms; see Section III-C. In our implementation, we have employed RAPS [48] as opposed to the 'vanilla' CP algorithm [60]. We note that [18, 46] employ the 'vanilla' CP algorithm. In what follows, we demonstrate how the choice of the CP affects HERACLEs. To apply CP, we Fig. 4: As the robot navigates towards Coke #2 (Fig. (a)a), it builds the occupancy grid map of the environment (Fig. (b)b) allowing it to realize that Coke #2 is not accessible. Once this happens, the robot asks for help from the LTL planner. construct a calibration dataset with \(70\) datapoints and we select \(\alpha=0.05\). We also construct a test dataset that includes the \(111\) case studies considered in Section IV-A. Among the \(111\) case studies, \(75\) and \(70\) of them correspond to plans \(\tau_{\phi}\) associated with singleton prediction sets using RAPS and vanilla CP, respectively. This is expected as RAPS can generate smaller prediction sets [48]. Thus, employing RAPS can minimize the number of times assistance will be requested. Among the singleton-prediction-set plans, \(100\%\) and \(97.2\%\) of them, respectively, satisfy their corresponding LTL tasks. Observe that these percentages are greater than \(95\%\) as expected since \(1-\alpha=0.95\). The average runtime to construct a prediction set using RAPS and vanilla CP was \(3.35\times 10^{-5}\) and \(1.26\times 10^{-6}\) secs, respectively. ### _Effect of Task Specification on the Automaton Size_ In this section, we demonstrate that the proposed task specification approach using NL-based predicates results in shorter LTL formulas compared to related works that define atomic predicates directly over the system state \(\mathbf{p}(t)\); Section I. Shorter LTL formulas result in smaller automata size which, in turn, results in more computationally efficient plan synthesis. For instance, consider the LTL formula \(\phi=\lozenge(\pi_{1})\) where the NL-based predicate \(\pi_{1}\) is true if the robot delivers a bottle of water to location \(A\). This formula corresponds to a DFA with \(2\) states and 3 edges. Using system-based predicates, the same task can be written as \(\phi=\lozenge(\pi^{\prime}_{1}\land(\lozenge\pi^{\prime}_{2}\land(\lozenge \pi^{\prime}_{3}\land(\lozenge\pi^{\prime}_{4}))))\) where \(\pi^{\prime}_{1}\) is true if the robot position is close enough to the bottle of water, \(\pi^{\prime}_{2}\) is true if the robot grabs the bottle successfully, \(\pi^{\prime}_{3}\) is true if the robot position is close enough to location \(A\), and \(\pi^{\prime}_{4}\) is true if the robot puts down the water bottle successfully. This formula corresponds to DFA with \(5\) states and \(15\) edges. The difference in the automaton size becomes more pronounced as task complexity increases. For instance, consider the NL-based LTL formula \(\phi=\lozenge(\pi_{1})\land\lozenge(\pi_{2})\) where both \(\pi_{1}\) and \(\pi_{2}\) model delivery tasks as before. 
This formula corresponds to a DFA with \(4\) states and \(9\) edges. Expressing the same task using system-based predicates as before would result in a DFA with \(25\) states and \(225\) edges. ## V Conclusion In this paper, we propose HERACLEs, a new robot planner for LTL tasks defined over NL-based APs. Our future work will focus on extending the planner to multi-robot systems as well as accounting for imperfect execution of robot skills. Fig. 5: Once the robot arrives at the location of Coke #1, the LLM planner generates a new decision asking the robot to pick up the Coke (Fig. 4(a)). Once it is picked, the LLM requests the robot to move to location \(\mathbf{x}_{3}\) and then put down the coke (Fig. 4(b)). After this step, the mission is completed.
2309.14592
Efficient Post-training Quantization with FP8 Formats
Recent advances in deep learning methods such as LLMs and Diffusion models have created a need for improved quantization methods that can meet the computational demands of these modern architectures while maintaining accuracy. Towards this goal, we study the advantages of FP8 data formats for post-training quantization across 75 unique network architectures covering a wide range of tasks, including machine translation, language modeling, text generation, image classification, generation, and segmentation. We examine three different FP8 representations (E5M2, E4M3, and E3M4) to study the effects of varying degrees of trade-off between dynamic range and precision on model accuracy. Based on our extensive study, we developed a quantization workflow that generalizes across different network architectures. Our empirical results show that FP8 formats outperform INT8 in multiple aspects, including workload coverage (92.64% vs. 65.87%), model accuracy and suitability for a broader range of operations. Furthermore, our findings suggest that E4M3 is better suited for NLP models, whereas E3M4 performs marginally better than E4M3 on computer vision tasks. The code is publicly available on Intel Neural Compressor: https://github.com/intel/neural-compressor.
Haihao Shen, Naveen Mellempudi, Xin He, Qun Gao, Chang Wang, Mengni Wang
2023-09-26T00:58:36Z
http://arxiv.org/abs/2309.14592v2
# Efficient Post-training Quantization with FP8 Formats ###### Abstract Recent advances in deep learning methods such as LLMs and Diffusion models have created a need for improved quantization methods that can meet the computational demands of these modern architectures while maintaining accuracy. Towards this goal, we study the advantages of FP8 data formats for post-training quantization across 75 unique network architectures covering a wide range of tasks, including machine translation, language modeling, text generation, image classification, generation, and segmentation. We examine three different FP8 representations (E5M2, E4M3, and E3M4) to study the effects of varying degrees of trade-off between dynamic range and precision on model accuracy. Based on our extensive study, we developed a quantization workflow that generalizes across different network architectures. Our empirical results show that FP8 formats outperform INT8 in multiple aspects, including workload coverage (92.64% vs. 65.87%), model accuracy and suitability for a broader range of operations. Furthermore, our findings suggest that E4M3 is better suited for NLP models, whereas E3M4 performs marginally better than E4M3 on computer vision tasks. The code is publicly available on Intel Neural Compressor: [https://github.com/intel/neural-compressor](https://github.com/intel/neural-compressor). ## 1 Introduction Quantization is the process of reducing the numeric precision of weights and activations of a neural network to lower the computation costs of inference. INT8 quantization Vanhoucke et al. (2011); Han et al. (2015) is the most widely-accepted choice today due to its ability to deliver high inference performance on modern deep learning hardware while maintaining reasonable model accuracy. It has been particularly effective for computer vision tasks such as object detection and image classification, and has been widely deployed in production both at the data center scale and on resource-constrained edge devices. However, INT8 presents several challenges that arise due to its limited dynamic range. Several quantization techniques have been developed to address these challenges. For example, asymmetric quantization Jacob et al. (2018); Krishnamoorthi (2018); Bhalgat et al. (2020) allocates different numbers of bits for the positive and negative ranges with a non-zero offset, to better represent the distribution of the original values. Non-uniform quantization methods Miyashita et al. (2016); Zhou et al. (2017); Cai et al. (2017); Fang et al. (2020); Li et al. (2020) attempt to assign more precision to the parts of the data that are deemed more important to reduce quantization errors. Methods that use per-group Zhou et al. (2016); Mellempudi et al. (2017) or per-channel Jacob et al. (2018); Krishnamoorthi (2018) scaling extend the effective dynamic range by using independent scaling factor for each selected group of elements. The limited dynamic range of INT8 also results in poor representation of outliers that are typically found in activations. This is especially prevalent in Large Language Models (LLMs), where outliers are significantly larger when compared to the rest of the activations. Most common approach for handling outliers is to clip them using threshold values that are either obtained through calibration Sung et al. (2015); Zhao et al. (2019) or learned during training Bhalgat et al. (2020); Choi et al. (2018); Esser et al. (2020); Zhang et al. (2018). More recently Wei et al. (2022); Xiao et al. 
(2022) have proposed applying mathematical transformations to redistribute the magnitude of outliers between weights and activation tensors to minimize their impact. Despite these advancements, INT8 methods remain ineffective for a wide range of language modeling tasks, where the presence of LayerNorm was shown to amplify the occurrence of outliers Wei et al. (2022). Therefore, a significant percentage of these workloads falls back to using higher precision to preserve model accuracy. This paper argues that 8-bit floating-point (FP8) formats are an efficient and more productive alternative to INT8 for deep neural network quantization. We evaluated three different representations (E5M2, E4M3, and E3M4) that offer varying degrees of trade-off between dynamic range and precision. Table 1 shows the details of the binary format and special value encoding. The study focused on the benefits of FP8 formats for post-training quantization as the preferred approach used in production. We developed quantization workflows that generalize across different network architectures, and conducted experiments on 75 networks that cover a wide range of application domains. Our results show that FP8 formats overall provide higher accuracy and better workload coverage compared to INT8 (92.64% vs. 65.87%), and can handle more operations such as LayerNorm and BatchNorm. The data also suggests that E4M3 is better suited for a broad range of NLP models, with a coverage of 96.32% compared to E3M4 (92.11%), while E3M4 performs slightly better on computer vision models, with 78.95% coverage compared to E4M3 (73.68%). Our contributions are as follows: * Propose a unified and scalable FP8 quantization flow that works across application domains and different model sizes. To the best of our knowledge, our work is the first to study this problem across 200+ tasks and 75+ models, demonstrating the scalability of our approach. * Demonstrate the advantages of FP8 formats over INT8, in terms of workload coverage, model accuracy and suitability for a broader range of operations. Our work is also the first study to showcase accuracy-driven automatic model tuning for quantization. * Suggest that E4M3 is better suited for NLP models, whereas E3M4 performs marginally better than E4M3 on computer vision tasks. ### Related Work A growing body of research is studying the use of 8-bit floating-point formats to accelerate deep learning training and inference tasks. Initial studies by Wang et al. (2018) and Mellempudi et al. (2019) focused on the E5M2 format for training tasks due to its wider dynamic range, which is necessary for representing gradient values. Sun et al. (2019) subsequently proposed using a combination of two binary formats, E5M2 and E4M3, for training and extended their research to include inference tasks. They also suggested using an exponent bias to shift the numeric range of the E4M3 format for handling outliers in activations. Later studies by Noune et al. (2022) and Kuzmin et al. (2022) have extended this scope to include variable exponent bias and formats with fewer exponent bits, such as E3M4 and E2M5. More recently, Micikevicius et al. (2022) presented a generalized training method that employs per-tensor scaling using E5M2 and E4M3 formats. They also extended the inference studies to cover large language models such as GPT-3 (6.7B).
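Before describing the proposed workflow, the dynamic-range limitation of INT8 discussed above is easy to reproduce numerically. The following is a small, self-contained sketch (not taken from the paper's code): it fake-quantizes a near-Gaussian tensor with roughly 1% uniform outliers, mirroring the toy distribution used in Figure 1 below, first with plain per-tensor max scaling and then with an arbitrary clipping threshold standing in for a calibrated one.

```python
import numpy as np

def int8_quant_dequant(x, clip=None):
    """Symmetric per-tensor INT8 fake-quantization (quantize, then dequantize)."""
    t = np.abs(x).max() if clip is None else clip
    scale = t / 127.0                       # fixed step size, set by the max (or clip) value
    q = np.clip(np.round(x / scale), -127, 127)
    return q * scale

rng = np.random.default_rng(0)
x = rng.normal(0.0, np.sqrt(0.5), size=100_000)    # sigma^2 = 0.5
x[:1000] = rng.uniform(-6.0, 6.0, size=1000)       # ~1% outliers in [-6, 6]

for clip in (None, 2.0):                           # no clipping vs. an illustrative clip value
    mse = np.mean((x - int8_quant_dequant(x, clip)) ** 2)
    print(f"clip={clip}: MSE={mse:.2e}")
```

Without clipping, the outliers stretch the quantization grid and the bulk of the distribution is represented coarsely; clipping trades outlier fidelity for a finer step size, which is precisely the calibration trade-off described above.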
\begin{table} \begin{tabular}{l c c c} \hline \hline & E5M2 & E4M3 & E3M4 \\ \hline Exponent bias (_b_) & 15 & 7 & 3 \\ Max value & 57344.0 & 448.0 & 30.0 \\ Min value & \(1.5\times 10^{-5}\) & \(1.9\times 10^{-3}\) & \(1.5\times 10^{-2}\) \\ Subnormals & Yes & Yes & Yes \\ NaNs & all & single & single \\ Infinity & Yes & No & No \\ \hline \hline \end{tabular} \end{table} Table 1: FP8 binary formats: The EeM\(\underline{m}\) notation represents the bit allocation for _Exponent (e)_ and _Mantissa (m)_, respectively. The formats support a _sign-bit_ and an implicit leading bit in the mantissa. E5M2 follows IEEE-like encoding rules, while E4M3 and E3M4 use extended encoding to reclaim \(\pm\)Infinity for useful encodings; a unique bit-sequence of _all-ones_ represents a NaN. The rest of this paper is organized as follows. Section 2 discusses the advantages of 8-bit floating point representation in handling outliers. Section 3 introduces the quantization workflow and components of a standard, extended quantization scheme and a framework for tuning model performance. Section 4 outlines the experimental setup, presents accuracy results, and offers discussion on performance tuning. Section 5 presents the conclusions and future work. ## 2 Background **FP8 Value Distribution and Quantization Error:** Floating-point formats can express a large dynamic range of values using a combination of a mantissa and an exponent. A floating-point number \(x\) in a set \(\hat{X}\subset\mathbb{R}\) is expressed as follows: \[x=(-1)^{s}\times 2^{E-b}\times\left(1+f_{1}\times 2^{-1}+f_{2}\times 2^{-2}+...+f_{m}\times 2^{-m}\right)\] where \(s\in\{0,1\}\) is the sign bit, \(E\) is the value stored in the \(e\)-bit exponent field, \(b\) is the exponent bias, and \(f_{i}\in\{0,1\}\) are the bits of the \(m\)-bit mantissa or fraction. The dynamic range of a floating point format is determined by the width of its exponent. The exponent value is expressed in powers of \(2\) and serves as a scaling factor for the mantissa. This means that floating-point numbers are not uniformly spaced, but have a smaller step-size around zero that increases with the magnitude of the represented value. This allows floating-point formats to represent smaller values with better accuracy. The width of the mantissa determines the number of grid points represented for each incremental step of the exponent, which in turn affects the precision of the format. These properties allow floating-point formats to support higher dynamic range without compromising the accuracy of smaller values, making them well-suited for representing many frequently occurring data patterns in deep learning workloads that exhibit long-tailed normal distributions. Figure 1 illustrates the differences in distribution of quantized values and impact of outliers on both FP8 and INT8 formats. In the center plot, FP8 formats show a greater concentration of grid points in the middle of the distribution, indicating a region of higher precision closer to zero. The high-precision band is wider for formats with more mantissa bits, allowing them to represent a greater percentage of the \(3\sigma\) region of the original data with higher accuracy. In contrast, INT8 quantization operates with a _fixed step-size_ that is determined by the largest value present in the input data. This means that the outliers can significantly influence the step-size by stretching the quantization grid, resulting in fewer grid points under the \(3\sigma\) region.
This is reflected in the overall quantization error (MSE) shown on the right, where E4M3 and E3M4 formats have significantly outperformed INT8, while E5M2 performed worse because it has fewer mantissa bits. ## 3 Quantization Workflow There are several challenges in creating a generalized quantization scheme that can be applied to networks across multiple application domains and involves multiple data formats. The networks may have different requirements for dynamic range, precision and may contain operations that are sensitive to quantization. To facilitate generalization, the quantization scheme must be capable of Figure 1: **(left) Histogram of the tensor \(X\sim\mathcal{N}(\mu=0.0,\,\sigma^{2}=0.5)\), that contains a small number ( 1%) of outliers uniformly distributed between -6.0 to 6.0. (center) Distribution of quantized values represented by E5M2, E4M3, E3M4 and INT8 data formats. (right) Overall quantization error as measured by mean-square-error (MSE).** supporting a broad set of common operations, while also having the ability to adapt to meet the unique requirements of various applications. Our framework accomplishes this by incorporating both a _standard quantization scheme_ that can be broadly applied, as well as an _extended quantization scheme_ that optimizes specific operations through an iterative tuning process. Figure 2 depicts the high-level workflow for post-training FP8 quantization. The standard quantization scheme is the default configuration applied to common set of operators across different architectures, while the extended scheme is specific to an architecture and is applied incrementally in a feedback loop. The flow diagram in Figure 2 also includes an additional _BatchNormCom Calibration_ step applied only to computer vision models. Sun et al. (2019) have shown that retuning BatchNorm parameters (_mean_ and _variance_) to compensate for the variance shift caused by quantization, has significantly improved the inference accuracy. Additionally, please note that E5M2 uses _direct quantization_ and does not require _Range Calibration_ because it has sufficient dynamic range to handle outliers. For E4M3 and E3M4 formats, we found simple _max_ scaling to be sufficient for handling outliers. We also examined more sophisticated range-calibration methods such as KL divergence Darvish Rouhani et al. (2020), Migacz (2017), MSE error Choukroun et al. (2019), Zhao et al. (2019) and percentile Gholami et al. (2021) which did not provide any additional benefits. ### Standard Quantization Scheme This section outlines the components of the standard quantization scheme, which is derived from our extensive studies conducted on several deep learning tasks across multiple application domains. This scheme is applied to the common subset of operators including Convolution, Linear and Embedding. This scheme is also identical to INT8 quantization scheme, allowing a fair accuracy comparison. **Weight and Activation Scaling:** We recommend using per-channel scaling for weights across all networks. Although FP8 formats have sufficient dynamic range to handle common weight distributions, empirical evidence suggests that applying per-channel scaling can reduce rounding errors by effectively utilizing the full encoding space for each channel. Similarly, we found per-tensor scaling to be adequate for handling outliers using FP8 formats. 
The scale factors are computed as below: \[s=(float\_max/max\_T)\] where _float_max_ is the max representable value of the selected FP8 format, and _max_T_ is the calibrated _absmax_ value of the tensor. Some recent studies Xiao et al. (2022), Wei et al. (2022), Dettmers et al. (2022) have indicated that per-channel activation scaling can benefit INT8 quantization. However, such methods may require special kernel implementations that are likely to incur higher compute overheads, hence they are not included in our study. **First and Last Operator:** Previous studies Han et al. (2015), Choi et al. (2018), Micikevicius et al. (2022) on convolution networks have shown that the first convolution and the last fully-connected layers are more sensitive to quantization. These two operators typically constitute < 1% of the total computation. Therefore, we continue to maintain these layers in higher precision to preserve model accuracy. Please note that this exception is only applicable to convolutional neural networks. Figure 2: _Standard Quantization Scheme_: default configuration for broad set of operations across different workloads, _Extended Quantization Scheme_: configuration for additional operator coverage (Ex: LayerNorm, BatchNorm & element-wise), mixed FP8 formats, dynamic quantization, _BatchNormCom Calibration_: recalibrate mean and variance parameters to recover accuracy lost due to quantization, _Range calibration_: max scaling, outlier clipping (more discussions in Appendix A.1). ### Extended Quantization Scheme This section outlines the quantization scheme that is selectively applied to address the specific needs of an application. These methods are applied incrementally to maximize model efficiency while preserving accuracy. **Expanded Operator Coverage:** Neural networks spend significant fraction of their execution time in memory-bound operations such as LayerNorm, BatchNorm2 and element-wise operators such as Add and Mul. Previous attempts Bhandare et al. (2019); Kim et al. (2021) to quantize these operators using integer approximation were unsuccessful in maintaining the model accuracy. Our experiments show that FP8 formats are capable of handling these operators without sacrificing model accuracy. Footnote 2: Ones that cannot be folded into Convolution layers, Ex: Densenet **Mixed FP8 Formats:** The data distributions of weights and activations can vary depending on the architecture of the model and the dataset it is trained on. Figure 3 shows typical distributions of weight and activation tensors in NLP and computer vision workloads. The weight distributions in both classes of models tend to follow normal distributions with lots values near zero. These tensors require more mantissa bits in the data format to represent the distribution accurately. In contrast, activations of NLP models show a lot of outliers which demand a larger dynamic range in the data format to ensure the outliers are accurately represented. We balance this trade-off by assigning E5M2 or E4M3 format for _range-bound_ tensors and E3M4 for _precision-bound_ tensors. **Static vs. Dynamic Quantization:** We use static quantization as the default method throughout our study because it is computationally more efficient. However, we studied the accuracy impact of dynamic quantization on all FP8 formats and found that it offers no additional benefits to E5M2 but observed a noticeable improvement in accuracy for E4M3 and E3M4 formats on selected models. 
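The standard scheme above can be emulated compactly. The sketch below is a simplified stand-in for the actual flow (which uses the FP8 Emulation Toolkit and Neural Compressor): it rounds to an idealized FP8 grid, ignores subnormals and NaN/Inf encodings, and applies per-channel scales to weights and a per-tensor scale to activations with \(s=(float\_max/max\_T)\). The function names are invented here; the E4M3 parameters (3 mantissa bits, max value 448) come from Table 1, and everything else is illustrative.

```python
import numpy as np

def fp8_fake_quant(x, m_bits, float_max):
    """Round to an idealized FP8 grid: clamp to float_max, then keep m_bits of mantissa.
    Subnormal and NaN/Inf encodings are ignored for brevity."""
    x = np.clip(x, -float_max, float_max)
    out = np.zeros_like(x)
    nz = x != 0
    exp = np.floor(np.log2(np.abs(x[nz])))        # per-element exponent
    step = 2.0 ** (exp - m_bits)                  # mantissa step size at that exponent
    out[nz] = np.round(x[nz] / step) * step
    return out

def quantized_linear(weight, activation, m_bits=3, float_max=448.0):   # E4M3 by default
    """Standard-scheme sketch: per-channel weight scales, per-tensor activation scale."""
    w_scale = float_max / np.abs(weight).max(axis=1, keepdims=True)    # one scale per output channel
    a_scale = float_max / np.abs(activation).max()                     # one scale per tensor
    w_q = fp8_fake_quant(weight * w_scale, m_bits, float_max) / w_scale
    a_q = fp8_fake_quant(activation * a_scale, m_bits, float_max) / a_scale
    return a_q @ w_q.T

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.02, size=(64, 128))
A = rng.normal(0.0, 1.0, size=(4, 128))
mse = np.mean((A @ W.T - quantized_linear(W, A)) ** 2)
print(f"output MSE vs. FP32: {mse:.3e}")
```

The matrix product itself is still carried out in FP32 after dequantization; the sketch only models the rounding error introduced by the FP8 representation of the operands.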
## 4 Results ### Experimental Setup We demonstrate the FP8 quantization results using a software emulation framework which contains two major components, _data type emulation_ and _model quantization_. For data type emulation, we utilized the FP8 Emulation Toolkit, which provides a reference implementation that runs FP32 hardware. We leverage Neural Compressor to perform model quantization by incorporating both standard and extended quantization schemes, along with FP8 specific quantization methods such as BatchNorm calibration and support for mixed FP8 formats. Our framework supports a wide range of quantized operators, including compute operators such as Convolution, Linear, MatMul, BatchMatMul and memory operators such as Embedding, BatchNorm, LayerNorm, Add and Mul. We evaluated our quantization methods on more than 200 different tasks, using 75 unique model architectures and over 20 different datasets. The models were selected randomly from mainstream hubs such as Hugging Face Models and Torch Vision, as well as individual models from Github based on their popularity. The following is a partial list of workloads that are broadly categorized under Natural Language Processing (NLP) and Computer Vision (CV). Figure 3: Tensor Distributions: (**left**) activations in NLP workloads contain outliers, hence they are _range-bounded_, (**center**) Activation in CV workloads tend to be _precision-bounded_, (**right**) Weight tensors from both CV & NLP networks tend to be _precision-bounded_. **Text and Natural Language Processing**: We have evaluated 38 different networks in this category on a wide range of NLP tasks, which can be further subdivided as follows: * _Generative language modeling_. We evaluated _Bloom_Scao et al. (2022) and _LLaMA_Touvron et al. (2023), two representative open-source LLMs, and evaluate the accuracy using _lambda-openai_. * _Text classification_. We evaluated over 30 different networks (e.g, _Bert-Large_Devlin et al. (2018), _DistillBert_Sanh et al. (2019), _Longformer_Beltagy et al. (2020)) on a wide variety of tasks (e.g., _mrpc_, _cola_, _sts-b_, _sts2_). * _Summarization_. We measured the accuracy of _pegasus_Zhang et al. (2020) on _samsum_dataset. * _Other NLP tasks_. Few other selected models such as MarianMT Junczys-Dowmunt et al. (2018) for neural machine translation and DialogGPT Zhang et al. (2019) for language modeling on WMT_EN_RO and wikitext datasets. **Image and Computer Vision**: We evaluated 34 different networks on various computer vision tasks from the following categories. * _Image generation_. We evaluated Stable Diffusion, an open-source state-of-the-art latent text-to-image diffusion model and evaluate using FID Heusel et al. (2017). * _Image classification_. We evaluate a wide range of convolutional neural networks (CNNs) such as VGG Simonyan and Zisserman (2014), GoogleNet Szegedy et al. (2015), ResNet He et al. (2016), ShuffleNet Zhang et al. (2018), EfficientNet Tan and Le (2019), and Transformer-based vision models such as ViT Dosovitskiy et al. (2020) on ImageNet ILSVRC 2012 and CIFAR-10. * _Image segmentation & object detection_. We select typical models such as U-Net Ronneberger et al. (2015) for image segmentation using the dataset from Kaggle Carvana Image Masking Challenge Brian Shaler (2017) and YoloV3 Redmon and Farhadi (2018) for object detection using COCO2014 Lin et al. (2014). **Audio and Speech Processing**. We evaluated two models HuBERT Hsu et al. (2021) and wav2vec 2.0 Baevski et al. 
(2020) for speech recognition and evaluate the accuracy using LibriSpeech Panayotov et al. (2015). **Recommendation System**. We evaluated Deep Learning Recommendation Model (DLRM) Naumov et al. (2019) and measured the accuracy on Criteo Terabyte. ### Quantization Results #### 4.2.1 Accuracy Note that the _pass rate_ in Table 2 is the percentage of workloads that meet the accuracy criterion of 1% relative loss against FP32 baseline. SmoothQuant Xiao et al. (2022) is enabled on NLP models with the default smoothing alpha value (alpha tuning is out of scope in this paper). Figure 4 illustrates the variability of accuracy loss for different data formats across CV and NLP workloads. Table 3 shows the accuracy of a few representative samples from all CV and NLP workloads. Figure 5 shows the accuracy loss of all workloads sorted by the model size in ascending order. \begin{table} \begin{tabular}{l c c c c} \hline \hline Data Type & Quantization Approach & Pass Rate (CV) & Pass Rate (NLP) & Pass Rate (All) \\ \hline E5M2 & Direct & 55.26\% & 78.42\% & 74.89\% \\ E4M3 & Static & 73.68\% & **96.32\%** & **92.64\%** \\ E4M3 & Dynamic & 71.05\% & 92.11\% & 88.74\% \\ E3M4 & Static & **78.95**\% & 92.11\% & 90.04\% \\ E3M4 & Dynamic & **78.95**\% & 92.11\% & 90.04\% \\ INT8 & Static CV \(|\) Dynamic NLP & 57.89\% & 67.65\% & 65.87\% \\ \hline \hline \end{tabular} \end{table} Table 2: Workload Pass Rate. The **bold** shows the overall highest pass rate where E4M3 is 92.64% and INT8 is 65.87%. In particular, E4M3 shows the promising workload coverage 96.32% on NLP. Figure 4: Variability in accuracy loss: INT8 shows higher variability for CV models than E4M3 and E3M4 due to its ineffectiveness on models such as EfficientNet, MobileNetV3, and ViT. Quantization-aware training may partially mitigate this issue, but it is out of scope of this paper. E4M3 and E3M4 show better accuracy & less variability with very few outliers compared to INT8. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Model & Dataset/Task & FP32 & E5M2 & E4M3 & E3M4 & INT8 \\ \hline ResNet-50 & ImageNet 2012 & 0.7615 & **0.7544** & **0.7592** & **0.7604** & **0.7595** \\ DenseNet-121 & ImageNet 2012 & 0.7444 & **0.7435** & **0.7451** & **0.7459** & 0.7253 \\ Wav2Vec2 & LibriSpeech & 0.9660 & **0.9632** & **0.9661** & **0.9658** & 0.9552 \\ DLRM & Criteo Terabyte & 0.8027 & **0.8016** & **0.8025** & **0.8025** & **0.8024** \\ Bert-Base & STS-B & 0.8975 & **0.8934** & **0.8979** & **0.8966** & 0.8809 \\ Bert-Large & COLA & 0.6257 & **0.6238** & **0.6257** & **0.6282** & **0.6389** \\ DistilBert & MRPC & 0.8916 & **0.8897** & **0.8943** & **0.895** & **0.9042** \\ Bloom-7B1 & Lambda-openai & 0.5764 & 0.5424 & **0.5748** & **0.5824** & **0.5977** \\ Bloom-176B & Lambda-openai & 0.6777 & **0.6753** & **0.6757** & **0.6938** & **0.6899** \\ LLAMA-65B & Lambda-openai & 0.7908 & **0.7840** & **0.7914** & 0.7778 & 0.7155 \\ \hline \hline \end{tabular} \end{table} Table 3: Model Accuracy. The **bold** shows the accuracy is less than 1% loss against FP32 baseline. Figure 5: Accuracy Loss by Size on CV (top) and NLP (bottom). The model size is represented by the ball size in the scale of \(log10(model\_size)\), where tiny/small/medium/large is defined by the size range in MB \(<=32\), \((32,384]\), \((384,512]\), and \(>512\) respectively. Note that some points are overlayed due to the similar accuracy (e.g., E4M3 in blue and E3M4 in green on NLP models). 
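The pass rates in Table 2 follow mechanically from the stated criterion. A tiny helper makes the bookkeeping explicit; it assumes, as a reading of the text, that "relative loss" means \((acc_{FP32}-acc_{quant})/acc_{FP32}\), and the spot-checked numbers are taken from the FP32 and E4M3 columns of Table 3.

```python
def passes(fp32_acc, quant_acc, tol=0.01):
    """Accuracy criterion used for Table 2: at most 1% relative loss vs. the FP32 baseline."""
    return (fp32_acc - quant_acc) / fp32_acc <= tol

# Spot-check a few rows of Table 3 (FP32 baseline, E4M3 accuracy).
rows = {
    "ResNet-50 / ImageNet":     (0.7615, 0.7592),
    "Bert-Base / STS-B":        (0.8975, 0.8979),
    "LLaMA-65B / Lambda-openai": (0.7908, 0.7914),
}
for name, (fp32, e4m3) in rows.items():
    print(f"{name}: pass={passes(fp32, e4m3)}")
```

All three rows pass, consistent with the bold entries in Table 3; aggregating this boolean over all evaluated workloads yields the pass rates reported in Table 2.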
#### 4.2.2 Generation Quality Figure 6 shows the image generated by Stable Diffusion with the prompt "A photo of an astronaut riding a horse on Mars". Our subjective analysis reveals that FP8 formats achieve superior image quality compared to INT8, as indicated by the green arrow. Additionally, E4M3 and E3M4 produce smoother images and generate more intricate details, particularly on the astronaut. We employ FID score to compare the quality of generated images (lower is better) and see that FID score aligns with our subjective evaluation. More samples on Stable Diffusion are shown in Appendix A.2. Table 4 shows the sample text generated by Bloom on the prompt with 32 input tokens using beam search size 4. Given the prompt as the input, you can see E3M4 shows better response than INT8 with more comprehensive content and few repeated tokens (e.g., _saw many strange_). Appendix A.3 shows the full output on different data format and quantization approach. ### Discussion #### 4.3.1 Standard Quantization Scheme **Quantizing First and Last Operators :** For convolutional networks, quantizing the first and last operators reduced the _Pass Rate_ for E5M2 and E4M3 formats by 25% and 15% respectively. However, E3M4 can maintain a _Pass Rate_ of 70% even with the first and last operators quantized. Therefore, we recommend the enabling of first and last operators for FP8 quantization as a tuning option. **BatchNorm Calibration:** We use data augmentation to enhance the feature diversity of the calibration data which impacts the quality of BatchNorm statistics and model accuracy. Figure 7 compares the effectiveness of training and inference data augmentation methods in preserving model accuracy at different calibration data sample sizes. We found training transform to be more effective even at \begin{table} \begin{tabular}{p{227.6pt}} \hline \hline **Prompt**: Once upon a time, there existed a little girl, who liked to have adventures. She wanted to go to places and meet new people, and have fun. \\ \hline **Output (FP32)**: One day, she decided to go on a trip. She packed her suitcase and went to the airport. When she got there, she found out that there was no flight to her destination, so she decided to take a bus. When she got there, she found out that there was no bus to her destination... \\ \hline **Output (INT8)**: This little girl was very adventurous. One day she decided to go on a trip to a faraway country. When she got there the little girl _saw many strange_ things. She _saw many strange_ people. She _saw many strange_ animals. She saw many strange_sights. She saw many... \\ \hline **Output (E3M4)**: One day, she decided to go on an adventure. She packed her suitcase and went to the airport. She boarded a plane and flew to New York City. There, she met a man, and they had a great time together. They went to a restaurant and ate delicious food. Then, they went to... \\ \hline \hline \end{tabular} \end{table} Table 4: Generated Text of Bloom on 32 Input Tokens Figure 6: Stable Diffusion with Prompt "A photo of an astronaut riding a horse on Mars" smaller sample sizes (\(<\)3K). However, we recommend sample size of 3K with training transform for achieving best results across a wide range of networks. #### 4.3.2 Extended Quantization Scheme **Mixed FP8 Formats:** Figure 8 illustrates how using mixed FP8 formats on the input can impact the quantization error of the output of a Linear operator from BERT-base (MPRC) model. 
Our experiments show that using E4M3 for activations and E3M4 for weights produced best accuracy results on a range of NLP workloads. The accuracy improvements achieved by this scheme for Bert, Funnel, and Longformer models are presented in Table 5. **Expanded Operator Coverage:** Appendix A.4 has the results from our quantization studies extended to a wider range of operators such as BatchMatMul, MatMul, Embedding and LayerNorm. Our results show that E4M3 achieves overall better accuracy and smaller variability in accuracy loss across a broad range of NLP tasks. **Static vs. Dynamic Quantization:** While static quantization is the default approach in our recipes, we also studied the impact of dynamic quantization on model accuracy. The results indicate that dynamic quantization can improve the accuracy of NLP models when quantizing with E4M3 and E3M4 formats as shown in Table 6. ## 5 Summary and Future Work We present a set of post-training quantization recipes for FP8 inference and demonstrate the effectiveness across 75 unique network architectures covering a wide range of tasks such as language Figure 8: MSE of FP8 Quantization with Mixed Formats vs. Single Format on Bert-Base (MRPC) Figure 7: CV Models with BatchNorm Operation \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Model & Task & FP32 & E5M2 & E4M3 & E3M4 & Mixed \\ \hline Bert-Base & MRPC & 0.9069 & 0.9040 & 0.9050 & 0.9050 & **0.9069** \\ Bert-Large & RTE & 0.7256 & 0.6968 & 0.7329 & 0.6931 & **0.7365** \\ Funnel & MRPC & 0.9225 & 0.9215 & 0.9207 & 0.3704 & **0.9233** \\ Longformer & MRPC & 0.9146 & 0.8374 & 0.9113 & 0.9084 & **0.9143** \\ \hline \hline \end{tabular} \end{table} Table 5: Model Accuracy of FP8 Format (Single vs. Mixed). Mixed FP8 formats (in bold) show higher accuracy than all the other single FP8 formats on the below NLP workloads. modeling, text generation, image classification and generation. We recommend E3M4 and E4M3 as the default FP8 format for CV and NLP models respectively, while additional recipes such as mixed FP8 formats and expanded FP8 operator coverage are worthwhile exploring to produce an optimal FP8 model. As our future work, we plan to apply FP8 quantization recipes to more diverse LLM models (e.g., BioGPT Luo et al. (2022), Llama2 Chat Touvron et al. (2023), Code Llama **?**), and contribute our recipes and implementation to the open source community.
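As a compact restatement of the recommendations above, the recipe logic can be written down as a small helper. This is purely illustrative pseudo-configuration, not the Neural Compressor API: the field names are invented here, and only the choices they encode (E4M3 as the NLP default, E3M4 for CV, first/last convolutions kept in higher precision, roughly 3K-sample BatchNorm calibration, expanded operator coverage, and optional mixed formats) come from the paper.

```python
def default_fp8_recipe(domain: str) -> dict:
    """Condense the recommended post-training FP8 recipe for a workload domain."""
    is_cv = (domain == "cv")
    recipe = {
        "default_format": "E3M4" if is_cv else "E4M3",               # Section 5 recommendation
        "first_last_conv_high_precision": is_cv,                     # Section 3.1 (CNNs only)
        "batchnorm_calibration_samples": 3000 if is_cv else 0,       # Section 4.3.1
        "extended_ops": ["LayerNorm", "BatchNorm", "Add", "Mul"],    # Section 3.2
        "quantization": "static",                                    # default; dynamic is a tuning option
    }
    if not is_cv:
        # Optional tuning step for NLP: mixed formats (E4M3 activations, E3M4 weights), Table 5.
        recipe["mixed_formats"] = {"activation": "E4M3", "weight": "E3M4"}
    return recipe

print(default_fp8_recipe("nlp"))
```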
2309.15854
Dyonium Induced Fermion Number Violation
Dyonium induced fermion number violation is studied in a SU(2) x U(1) gauge theory with a doublet Higgs field. Dyonium is a generalization of Nambu's monopolium, having finite sized pair of dyon and anti-dyon, connected with a thin string under the linear plus Coulomb force. This is a follow-up of the author's paper on monopolium induced neutrino mass (including the lepton and baryon number violations), studied with a SO(10) Grand Unified model in 1983. To fulfill the requirement from the chiral anomaly or its index theorem, an electric field should be excited in parallel to the dipole magnetic field. This crucial dynamical problem, not fully answered then, has been solved, by considering not monopolium but dyonium. The fermionic zero modes in the dyonium background fields are necessary in evaluating the transition rate of the fermion number violation processes. Their Dirac equation can be reduced to a single component partial differential equation, similar to the renormalization group equation, which may be useful to estimate the reaction rates.
Akio Sugamoto
2023-09-08T06:56:57Z
http://arxiv.org/abs/2309.15854v1
###### Abstract ###### Abstract Dyonium induced fermion number violation is studied in a \(SU(2)_{L}\times U(1)_{Y}\) gauge theory with a doublet Higgs field. Dyonium is a generalization of Nambu's monopolium, having finite sized pair of dyon and anti-dyon, connected with a thin string under the linear plus Coulomb force.a Footnote a: Monopole has a magnetic charge without electric charge, while dyon has both charges. Monopolium is a bound state of monopole and anti-monopole, while dyonium is that of dyon and anti-dyon. **OCHA-PP-376** **Dyonium Induced Fermion Number Violation** **Akio Sugamoto** _Department of Physics, Graduate School of Humanities and Sciences, Ochanomizu University, 2-1-1 Ohtsuka, Bunkyo-ku, Tokyo 112-8610, Japan_ ## 1 Introduction The important issues to be clarified in the present high energy physics are the baryogenesis and the neutrino mass. Regarding the baryogenesis, many scenarios have been proposed so far, which satisfy the Sakharov's three criterions [1]. The popular scenario is the "sphaleron transition" by Manton and Klinkhammer [2][3], which is applicable to the baryogenesis as well as the leptogenesis [4], functioning at the electroweak (EW) scale. The original scenario by Yoshimura [5] of generating baryon (B) number at the grand unified (GUT) scale, or the other scenarios generating B-number by the oscillation of a scalar field with B or L (lepton) number, such as the supersymmetric Affleck-Dine mechanism [6], the Dimopoulos-Susskind [7], the Cohen-Kaplan [8] and others [9][10][11], still remain as plausible candidates for the baryogenesis scenario. There are different ways to satisfy the two Sakharov's criterions, the existence of baryon number violating interaction and that of CP violation. One way is to break explicitly B-conservation and CP-symmetry, and the other way is to break them by "chiral anomaly" via a certain gauge field configurations with non-vanishing topological Chern number (\(n_{C}\)) or Chern-Simmons number (\(n_{CS}\)). The sphaleron transition is an example of this anomaly mediated way, but other scenarios do exist. Indeed a while ago before the sphaleron was found, the author proposed another scenario [12], in which the neutrino mass (including L- and B-number violation processes) is induced by the monopolium configuration (monopole-anti-monopole dumbell system in SO(10) GUTs). It is a generalization of Rubakov-Callan effect of monopole induced B-violation [13], but the configuration used was the monopolium found by Nambu [14]. In the standard model with a doublet Higgs field, the monopolium is the more natural solution than the isolated monopole obtained in the gauge model with a triplet Higgs field. Nambu's monopolium (called monopole-anti-monopole dumb-bell system) obtained in the standard model, consists of a pair of monopole and anti-monopole, having a thin flux tube made of massive gauge boson \(Z\), with solenoidal magnetic fields spreading out from the monopole and entering in the anti-monopole. This thin flux tube is called \(Z\)-string in this paper. The author's previous paper [12] was, however, so primitive regarding the quantitative analysis of the fermion number violation processes. One crucial problem not solved completely then, was how to generate time-dependent electric field \(\mathbf{E}(t,\mathbf{x})\), being parallel to the static magnetic field \(\mathbf{B}(\mathbf{x})\) of the monopolium. 
The generation of this electric field is crucial, so that the configuration may carry a non-zero Chern number: \[\text{Chern number}=\frac{g^{2}}{8\pi^{2}}\int d^{4}x\ \mathbf{E}(x)\cdot\mathbf{B}(x), \tag{1}\] since the conservation of the L-handed fermion number current \(J_{L}^{\mu}(x)=\bar{\psi}_{L}(x)\gamma^{\mu}\psi_{L}(x)\) is violated by the chiral anomaly: \[\partial_{\mu}J_{L}^{\mu}(x)=-\frac{g^{2}}{8\pi^{2}}\mathbf{E}(x)\cdot\mathbf{B}(x). \tag{2}\] The problem mentioned above looks trivial, but the dynamics that generates an electric field parallel to the magnetic field is not so easy to understand; this was exactly the question raised by Charlie Goebel to the author in 1982. When we apply the chiral anomaly and the associated chirality-flip phenomena to chiral fermions in material science, there is no such difficulty. The spin structure is effectively generated by the band structure of a certain material [17], and the magnetic and electric fields can be applied from outside in the laboratory. Therefore, we can control the positive and negative sign of the inner product, \((\mathbf{E}\cdot\mathbf{B})\), as we want. The same is true for heavy-ion collision experiments in the laboratory. However, the problem is serious in high energy physics and cosmology, where the sign of the inner product, \((\mathbf{E}\cdot\mathbf{B})\), cannot be controlled by us. To overcome this difficulty, we take in this paper not the monopolium but the dyonium, since the latter has an electric field from the beginning, supplied by the electric charges located at both end points of the system. Then, it becomes possible to estimate the rate of L- and B-number violations as well as the neutrino mass. Furthermore, we can discuss the physical implications, such as the number density of this dyonium and its interaction with other particles, in the early history of the universe. Four decades ago, the author tried to use an analogy between the monopolium and the magnetosphere of the earth. It is well known that charged particles such as protons are trapped in the Van Allen belts [15][16]. The probability distribution of the trapped charged particle resembles the fermionic zero mode in the monopolium background. As we know, however, the aurora (a kind of Rubakov-Callan effect) happens equally at both poles, north and south, which shows that the electric field generated by the trapped charged particles does not acquire a definite direction, parallel or anti-parallel to the magnetic dipole field of the earth. Thus, the study based on the analogy with the magnetosphere of the earth was not successful. This intuitive view may not be completely wrong: restricting to the L-handed fermions, a spin direction parallel to the magnetic field is favored, so that the direction of motion of the fermion is favored to be anti-parallel to the magnetic field. Anyway, a long period has passed before arriving at a simple answer, using the dyonium, to Charlie Goebel's question. The paper is organized as follows: In the introduction, the motivation to use dyonium instead of monopolium in the fermion number violation processes is described. In Section 2, the dyonium solution is given explicitly, in which an electric field is excited parallel to the magnetic field, and the contribution of the Z-string is included. The standard method to estimate the fermion number violation processes is summarized in Section 3.
In Section 4, a way to estimate the fermion zero modes is developed, in which the Dirac equation of \(SU(2)_{L}\times U(1)_{Y}\) gauge theory can be reduced to a renormalization group-like equation for a single component function without isospin and spin. The last section is devoted to conclusion of this paper and discussion on the unsolved issues in it. ## 2 Dyonium as a generalization of Nambu's monopolium Nambu gave a monopolium solution in the standard model (SM) (\(SU(2)_{L}\times U(1)_{Y}\) gauge theory) with a doublet Higgs field [14]. In his solution, the gauge fields have the arbitrariness represented by a function \(a_{\mu}(x)\). Using this arbitrariness we can obtain the dyonium solution, by adding the electric charges on both ends of the monopolium. Here, we choose the gauge group \(SU(2)_{L}\times U(1)_{Y}\), but it is not necessary to the SM. It can be a subgroup embedded in the larger grand unified group such as \(SO(10)\) and others, and prepare \(N_{d}\) L-handed doublet fermions and \(N_{s}\) singlet fermions. The vacuum expectation value \(v\) of the doublet Higgs \[\langle\phi(x)\rangle=\begin{pmatrix}v\\ 0\end{pmatrix}. \tag{3}\] can be any value, not restricted to the SM value of \(246/\sqrt{2}\) [GeV]. In this setup, we can study neutrino mass, as well as L- and B-number violation processes, depending on the choice of gauge group and fermion doublets. The simplest example is the SM with gauge group \(SU(2)_{L}\times U(1)_{Y}\). We have four L-handed fermion doublets for a generation, \[\{\psi^{(0)};\psi^{(1)},\psi^{(2)},\psi^{(3)}\}_{SM}=\left\{\begin{pmatrix}\nu_{ e}\\ e\end{pmatrix}_{L};\begin{pmatrix}u_{1}\\ d_{1}\end{pmatrix}_{L},\begin{pmatrix}u_{2}\\ d_{2}\end{pmatrix}_{L},\begin{pmatrix}u_{3}\\ d_{3}\end{pmatrix}_{L}\right\}_{SM}, \tag{4}\] where indices \(\{1,2,3\}\) represent color quantum numbers. Our theory can be applicable to the models beyond the SM. Some examples of this kind can be found in [12], where the \(SU(2)\) subgroup embedded in \(SO(10)\) group is labeled by \((i,j)\) (\(i,j=1-5\)), and the fermion doubles are the following ones: \[\{\psi^{(0)};\psi^{(1)},\psi^{(2)},\psi^{(3)}\}_{ij}=\left\{\begin{pmatrix}N_{ R}^{C}\\ \psi_{L}(10)_{ij}\end{pmatrix};\begin{pmatrix}\psi_{L}(10)_{kl}\\ \psi_{R}(5)_{m}^{C}\end{pmatrix}\right\}_{ij}, \tag{5}\] where \((i,j,k,l,m)\) is an even permutation of \((1,2,3,4,5)\) and the \(SU(2)_{(ij)}\) group is a subgroup of SO(10).1 Footnote 1: Three generators of the \(SU(2)_{(ij)}\) group can be written by the creation and annihilation operators of the spinor representation of \(SU(5)\), such that \(\tau^{1}_{(ij)}=b^{\dagger}_{i}b^{\dagger}_{j}+b_{j}b_{i},\;\tau^{2}_{(ij)}=i(- b^{\dagger}_{i}b^{\dagger}_{j}+b_{j}b_{i}),\;\tau^{3}_{(ij)}=1-(b^{\dagger}_{i}b _{i}+b^{\dagger}_{j}b_{j}).\) The \(SU(5)\) multiplets \(\psi_{R}(5)^{C},\psi_{L}(10)\) consist of \(\psi_{R}(5)^{C}=(\bar{d}_{i},e,\nu)_{L},\;\psi_{L}(10)=(\bar{u}_{i},u_{i},d_{i },\bar{e})_{L},\;(i=1,2,3;\text{color}).\) Charge conjugation operation is defined by \(\psi^{C}=i\gamma_{2}\psi^{*}\), which gives \((\psi_{L})^{C}=i\sigma_{2}\psi_{L}\) for the L-handed fermion. 
Now, we can start with the following SM-like Lagrangian, \[\mathcal{L}=-\frac{1}{4}\sum_{a=1}^{3}(F^{a}_{\mu\nu})^{2}-\frac {1}{4}(B_{\mu\nu})^{2}+v^{2}|D_{\mu}(A,B)\phi|^{2}-\lambda v^{4}(\phi^{\dagger }\phi-1)^{2}\] \[+\sum_{a=0}^{N_{d}-1}\bar{\psi}^{(a)}(x)_{L}\gamma^{\mu}D_{\mu}( A,B)\psi^{(a)}(x)_{L}+\sum_{b=1}^{N_{s}}\bar{\psi}^{(b)}_{s}(x)_{L}\gamma^{ \mu}D_{\mu}(B)\psi^{(b)}_{s}(x)_{L}, \tag{6}\] where \[D_{\mu}(A,B)=\partial_{\mu}-i\frac{\tau^{a}}{2}gA^{a}_{\mu}(x)-i\frac{Y}{2}g^ {\prime}B_{\mu}(x),\;\;D_{\mu}(B)=\partial_{\mu}-i\frac{Y}{2}g^{\prime}B_{\mu }(x). \tag{7}\] Here, we have used the normalized Higgs field \(|\phi|=1\) by the vacuum expectation value \(v\), and the fermions are all represented by the L-handed ones, by applying properly the charge conjugation operation (\(C\)). The numbers of fermion doublets and singlets are denoted by \(N_{d}\) and \(N_{s}\), respectively. The monopolium solution is approximately given by Nambu as a solution of the "London equation", \[D_{\mu}(A,B)\phi(x)=0,\;\;\phi^{\dagger}(x)\phi(x)=1, \tag{8}\] where \(Y=-1\) for the Higgs field.2 Footnote 2: Here the up and down components of the Higgs field are exchanged from the usual choice. It is natural to choose the configuration of doublet Higgs field \(\phi(x)\) spinor-likely as \[\phi(x)=\begin{pmatrix}\cos\frac{1}{2}\Theta(x)\\ \sin\frac{1}{2}\Theta(x)\;e^{i\varphi(x)}\end{pmatrix}, \tag{9}\] where \(\cos\Theta=\cos\theta_{1}-\cos\theta_{2}+1\) defined by \(\theta_{1}\) and \(\theta_{2}\)). They are the polar angles between the \(z\) axis and the position vectors of \(x\), seen from the monopole and anti-monopole positions, located at \(z=d/2\) and \(z=-d/2\), respectively. The useful variables to study the monopolium are the curvilinear but orthogonal coordinate system, \((\rho,\Theta,\varphi)\). Here, \(\rho\) is a magnetic potential of the monopole-anti-monopole system, 3 Footnote 3: In this paper \(\rho\) is chosen with dimension 1/[length] so that it may give naturally the potential. This definition differs by \(d\) from the dimensionless \(\rho\) in [12]. \[\rho=\frac{1}{l_{1}}-\frac{1}{l_{2}}, \tag{10}\] given by \(l_{1}=[(z-d/2)^{2}+r^{2}]^{1/2},\ l_{2}=[(z+d/2)^{2}+r^{2}]^{1/2}\) with \(r=(x^{2}+y^{2})^{1/2}\). Then, \[\cos\Theta(x)=\frac{z-\frac{d}{2}}{l_{1}}-\frac{z+\frac{d}{2}}{l_{2}}+1. \tag{11}\] The configurations of gauge fields are obtained by solving the London equation. To solve it, we prepare two equations: \[\phi^{\dagger}(x)D_{\mu}(A,B)\phi(x)=0,\ \mbox{and}\ \phi^{\dagger}(x)\tau^{a}D_{ \mu}(A,B)\phi(x)=0, \tag{12}\] from which the configurations of the gauge fields can be found. The result reads \[gA^{a}_{\mu}(x)=-\epsilon^{abc}(\phi^{\dagger}\tau^{b}\phi) \partial_{\mu}(\phi^{\dagger}\tau^{c}\phi)+(\phi^{\dagger}\tau^{a}\phi)\left\{ -i\xi(\phi^{\dagger}\overleftrightarrow{\partial}_{\mu}\phi)+a_{\mu}(x) \right\}, \tag{13}\] \[g^{\prime}B_{\mu}(x)=i\eta(\phi^{\dagger} \overleftrightarrow{\partial}_{\mu}\phi)+a_{\mu}(x), \tag{14}\] where \(\overleftrightarrow{\partial}=\overrightarrow{\partial}-\overleftarrow{ \partial}\), \((\xi,\ \eta)\) are constants to be fixed with \(\xi+\eta=1\), and \(a_{\mu}(x)\) gives an arbitrariness of the solution. The origin of the arbitrariness for \((\xi,\eta)\) and \(a_{\mu}(x)\) is that the gauge field determined by the London equation is, a sum of \(SU(2)\) and \(U(1)\) gauge fields forming the massive gauge field for \(Z\). 
Indeed, in the London equation, only the sum \(\xi+\eta=1\) appears and the terms of \(a_{\mu}(x)\) are summed up cancel with each other. Therefore, the separation of the sum into \(SU(2)\) and \(U(1)\) parts has the arbitrariness. Nambu has chosen the arbitrary function to be zero as \(a_{\mu}(x)=0\), and derived the monopolium solution. On the other hand in this paper, we utilize this arbitrariness and introduce the electric charges at both end points of the monopolium, which produces the dipole electric field parallel to the magnetic field. Then, we have a "dyonium solution". For this purpose we adopt the following ansatz, \[a_{i}(x)=0,\ \ a_{0}(x)=Q\rho(x). \tag{15}\] Furthermore, we consider the oscillation (expansion and shrinkage) of the length \(d(t)\) of the dyonium, in estimating the temporal change of the Chern number. Thus, \(l_{1},l_{2}\) and \(\rho\) become time-dependent through the temporal change of the length \(d(t)\). Now, a straightforward calculation shows that the field strengths are given by \[gF^{a}_{\mu\nu}=(\phi^{\dagger}\tau^{a}\phi)\{(\xi-1)f_{\mu\nu }+a_{\mu\nu}\}, \tag{16}\] \[g^{\prime}B_{\mu\nu}=-\eta f_{\mu\nu}+a_{\mu\nu}, \tag{17}\] where \[f_{\mu\nu}(x) =-2i(\partial_{\mu}\phi^{\dagger}\partial_{\nu}\phi-\partial_{\nu} \phi^{\dagger}\partial_{\mu}\phi), \tag{18}\] \[a_{\mu\nu}(x) =\partial_{\mu}a_{\nu}-\partial_{\nu}a_{\mu}. \tag{19}\] For our Higgs configuration (9), we have \[f_{\mu\nu}(x)=-\partial_{\mu}(\cos\Theta(x))\partial_{\nu}\varphi(x)+\partial_{ \nu}(\cos\Theta(x))\partial_{\mu}\varphi(x). \tag{20}\] The corresponding electric field and magnetic field are given by \[f_{0i}=\frac{r}{2}\left(\frac{1}{(l_{1})^{3}}+\frac{1}{(l_{2})^{ 3}}\right)\dot{d}(t)(\mathbf{1}_{\varphi})^{i}\equiv f(r,z)\dot{d}(t)( \mathbf{1}_{\varphi})^{i}, \tag{21}\] \[\frac{1}{2}\epsilon_{ijk}f_{ij}=\partial_{k}\rho, \tag{22}\] where \(\mathbf{1}_{\varphi}\), in the Cartesian coordinate system \((-\sin\varphi,\cos\varphi,0)\), is a unit vector circulating the z-axis in the increasing direction of \(\varphi\). In the derivation of the above, we use \[\partial_{t}(\cos\Theta)=-\dot{d}(t)\times\frac{r^{2}}{2}\left( \frac{1}{(l_{1})^{3}}+\frac{1}{(l_{2})^{3}}\right), \tag{23}\] \[\begin{cases}\mathbf{\nabla}\rho=\left(-r\left(\frac{1}{l_{1}^{3}}- \frac{1}{l_{2}^{3}}\right),0,-\left(\frac{z-d/2}{l_{1}^{3}}-\frac{z+d/2}{l_{2} ^{3}}\right)\right)_{\text{cylinder}},\\ \mathbf{\nabla}\cos\Theta=\left(-r\left(\frac{z-d/2}{l_{1}^{3}}-\frac{z+d/2}{l_{2} ^{3}}\right),0,r^{2}\left(\frac{1}{l_{1}^{3}}-\frac{1}{l_{2}^{3}}\right) \right)_{\text{cylinder}},\\ \mathbf{\nabla}\varphi=\left(0,\frac{1}{r},0\right)_{\text{cylindrical}},\end{cases} \tag{24}\] giving \[\mathbf{\nabla}\rho\cdot\mathbf{\nabla}\cos\Theta=0,\mathbf{\nabla}\cos\Theta \cdot\mathbf{\nabla}\varphi=0,\mathbf{\nabla}\varphi\cdot\mathbf{\nabla}\rho=0, \tag{25}\] \[\mathbf{\nabla}\cos\Theta\times\mathbf{\nabla}\varphi=\mathbf{\nabla}\rho, \ \mathbf{\nabla}\rho\times\mathbf{\nabla}\varphi=-\frac{1}{r^{2}}\mathbf{\nabla}\cos\Theta, \mathbf{\nabla}\rho\times\mathbf{\nabla}\cos\Theta=r^{2}(\mathbf{\nabla}\rho)^{2}\mathbf{ \nabla}\varphi. \tag{26}\] These relations imply that the "dipolar system" such as dyonium (and monopolium), can be quite naturally described by a curved coordinate system, given by \((\rho,\varphi,\cos\Theta)_{\text{dipolar}}\)[12]. 
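The orthogonality relations in (25) and the normalization in (26) are what make \((\rho,\varphi,\cos\Theta)\) a legitimate orthogonal coordinate system, and they can be checked symbolically. The snippet below is only a verification aside (not part of the paper); it uses sympy, takes the gradients in cylindrical coordinates, and confirms \(\mathbf{\nabla}\rho\cdot\mathbf{\nabla}\cos\Theta=0\) together with \((\mathbf{\nabla}\cos\Theta)^{2}=r^{2}(\mathbf{\nabla}\rho)^{2}\), which is the scalar content of the last relation in (26).

```python
import sympy as sp

r, d = sp.symbols("r d", positive=True)
z = sp.symbols("z", real=True)
l1 = sp.sqrt((z - d/2)**2 + r**2)
l2 = sp.sqrt((z + d/2)**2 + r**2)

rho = 1/l1 - 1/l2                                   # Eq. (10)
cosTheta = (z - d/2)/l1 - (z + d/2)/l2 + 1          # Eq. (11)

def grad(f):
    # Gradient in cylindrical coordinates; rho and cosTheta carry no phi-dependence.
    return sp.Matrix([sp.diff(f, r), 0, sp.diff(f, z)])

g_rho, g_cos = grad(rho), grad(cosTheta)
print(sp.simplify(g_rho.dot(g_cos)))                             # -> 0  (Eq. (25))
print(sp.simplify(g_cos.dot(g_cos) - r**2 * g_rho.dot(g_rho)))   # -> 0  (consistent with Eq. (26))
```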
Here, however, the cylindrical coordinate system \((dr,rd\varphi,dz)_{\text{cylindrical}}\) is used, (since it is more familiar than the dipolar system) for which the spacial derivative becomes, \[\mathbf{\nabla}=\left(\frac{\partial}{\partial r},\frac{1}{r}\frac{\partial}{ \partial\varphi},\frac{\partial}{\partial z}\right)_{\text{cylindrical}}, \tag{27}\] and also \(\mathbf{1}_{\varphi}=(0,1,0)_{\text{cylindrical}}\). For our choice of \(a_{\mu}(x)\), we have \[a_{0i}(x)=-Q\partial_{i}\rho,\ \ a_{ij}(x)=0. \tag{28}\] Thus the field strengths have simple expressions which helps to understand the structure of the dynonium, \[g\mathbf{B}^{a} =(\phi^{\dagger}\tau^{a}\phi)(\xi-1)\mathbf{\nabla}\rho, \tag{29}\] \[g\mathbf{E}^{a} =(\phi^{\dagger}\tau^{a}\phi)\left\{(\xi-1)rf(r,z)\dot{d}(t)\mathbf{ \nabla}\varphi-Q\mathbf{\nabla}\rho\right\},\] (30) \[g^{\prime}\mathbf{B}^{\prime} =-\eta\mathbf{\nabla}\rho,\] (31) \[g^{\prime}\mathbf{E}^{\prime} =(-\eta)rf(r,z)\dot{d}(t)\mathbf{\nabla}\varphi-Q\mathbf{\nabla}\rho, \tag{32}\] where \[f(r,z)=\frac{r}{2}\left(\frac{1}{l_{1}^{3}}-\frac{1}{l_{2}^{3}}\right), \tag{33}\] and the magnetic field and electric field for the \(U(1)_{Y}\) part are denoted with prime. ### Gauge potentials On the other hand, to express the gauge potentials explicitly, we have to introduce three orthonormal vectors in the \(SU(2)\) iso-space [12]. A typical iso-vector is the triplet Higgs field (\(\phi^{\dagger}\tau^{a}\phi\)) which is a orthonormal basis, denoted by \(n_{1}^{a}\), \[n_{1}^{a}\equiv(\phi^{\dagger}\tau^{a}\phi)=(\sin\Theta\cos\varphi,\ \sin\Theta\sin\varphi,\ \cos\Theta)_{a=1-3}, \tag{34}\] where \(a=1,2,3\) denotes the direction of iso-vector in the iso-space. The remaining two unit vectors are, respectively, \[n_{2}^{a}=(\cos\Theta\cos\varphi,\ \cos\Theta\sin\varphi,\ - \sin\Theta)_{a=1-3}, \tag{35}\] \[n_{3}^{a}=(-\sin\varphi,\ \cos\varphi,\ 0)_{a=1-3}. \tag{36}\] They satisfy the orthonormality condition as iso-vectors, \[\epsilon_{abc}(n_{j})^{b}(n_{k})^{c}=\epsilon_{ijk}(n_{i})^{a}, \tag{37}\] and the derivative of one of them can be expanded in the remaining two, \[\partial_{\mu}(n_{1})^{a}=(n_{2})^{a}\partial_{\mu}\Theta+(n_{3})^{a}\partial_ {\mu}\varphi. \tag{38}\] Thus, we have \[(\phi^{\dagger}\tau^{a}\phi)\equiv n_{1}^{a},\ \partial_{\mu}( \phi^{\dagger}\tau^{a}\phi)=n_{2}^{a}\partial_{\mu}\Theta+n_{3}^{a}\sin\Theta \ \partial_{\mu}\varphi,\ \mbox{and} \tag{39}\] \[(\phi^{\dagger}\overleftrightarrow{\partial_{\mu}}\phi)=-i(1- \cos\Theta)\partial_{\mu}\varphi. \tag{40}\] Then, the gauge potentials become \[gA_{\mu}^{a}(x)=n_{1}^{a}\left\{-\xi(1-\cos\Theta)\partial_{\mu} \varphi+a_{\mu}(x)\right\}+n_{2}^{a}\sin\Theta\partial_{\mu}\varphi+n_{3}^{a} \frac{1}{\sin\Theta}\partial_{\mu}(\cos\Theta), \tag{41}\] \[g^{\prime}B_{\mu}(x)=\eta(1-\cos\Theta)\partial_{\mu}\varphi+a_ {\mu}(x). \tag{42}\] Using (23), (24) and (15), we have obtained the more explicit expressions for the gauge potentials, \[gA_{0}^{a}(x)=n_{1}^{a}Q\rho(x)+n_{3}^{a}\frac{1}{\sin\Theta}f( r,z)\dot{d}(t), \tag{43}\] \[g\mathbf{A}^{a}(x)=-\{(n_{1}^{a}(-\xi)(1-\cos\Theta) \mathbf{\nabla}\varphi+n_{2}^{a}\sin\Theta)\,\mathbf{\nabla} \varphi+n_{3}^{a}\frac{1}{\sin\Theta}(\mathbf{\nabla}\cos\Theta)\},\] (44) \[g^{\prime}B_{0}(x)=Q\rho(x),\ \ g^{\prime}\mathbf{B}(x)=- \eta(1-\cos\Theta)(\mathbf{\nabla}\varphi). \tag{45}\] About the components of the gauge fields, see the following Subsection 2.2. 
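Before fixing the parameters \((\xi,\eta)\), it is worth verifying the algebra of the iso-space triad, since the gauge-potential expressions above rest on it. The short sympy check below (again an aside, not from the paper) confirms the orthonormality and the cyclic relation (37), and that the derivative of \(n_{1}\) closes on \(n_{2}\) and \(n_{3}\) only with the \(\sin\Theta\) factor displayed in Eq. (39).

```python
import sympy as sp

Theta, phi = sp.symbols("Theta varphi", real=True)
n1 = sp.Matrix([sp.sin(Theta)*sp.cos(phi), sp.sin(Theta)*sp.sin(phi), sp.cos(Theta)])   # Eq. (34)
n2 = sp.Matrix([sp.cos(Theta)*sp.cos(phi), sp.cos(Theta)*sp.sin(phi), -sp.sin(Theta)])  # Eq. (35)
n3 = sp.Matrix([-sp.sin(phi), sp.cos(phi), 0])                                          # Eq. (36)

# Orthonormality and the cyclic relation n1 x n2 = n3, i.e. Eq. (37).
print(sp.simplify(n1.dot(n1)), sp.simplify(n1.dot(n2)), sp.simplify((n1.cross(n2) - n3).T))

# Derivative expansion of n1, matching Eq. (39): d n1 = n2 dTheta + n3 sin(Theta) dphi.
print(sp.simplify((n1.diff(Theta) - n2).T))
print(sp.simplify((n1.diff(phi) - sp.sin(Theta)*n3).T))
```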
### Determination of parameters \((\xi,\eta)\) with \(\xi+\eta=1\) Using the terminology in SM, the neutral gauge fields in our model consist of a massive "Z-boson" (\(Z\)) and a massless "photon" (\(\gamma\)). These components are, respectively, \[\sqrt{g^{2}+g^{{}^{\prime}2}}F^{Z}_{\mu\nu}=-(\phi^{\dagger}\tau^{ a}\phi)gF^{a}_{\mu\nu}+g^{\prime}B_{\mu\nu}=0,\text{ and} \tag{46}\] \[\sqrt{g^{2}+g^{{}^{\prime}2}}F^{\gamma}_{\mu\nu}=+(\phi^{\dagger} \tau^{a}\phi)g^{\prime}F^{a}_{\mu\nu}+gB_{\mu\nu}=\left(\frac{g^{2}+g^{{}^{ \prime}2}}{gg^{\prime}}\right)\left\{-\eta f_{\mu\nu}(x)+a_{\mu\nu}(x))\right\} \neq 0, \tag{47}\] where the zero of the first equation means the massive \(Z\) filed is confined inside a thin flux tube connecting a dyon and an anti-dyon, while the non-zero of the second equation means the massless photon field spreads out to form the magnetic and the electric dipole fields. The configurations of gauge potentials are given in (42)-(44). The singularities for the gauge potential appear generally inside the tube connecting monopole and anti-monopole at \(r=0,\;\Theta=\pi\). For the \(Z\) flux, in the approximation using the London equation, we have \[\sqrt{g^{2}+g^{{}^{\prime}2}}\mathbf{A}^{(Z)}=-g(\phi^{\dagger}\tau^{ a}\phi)\mathbf{A}^{a}+g^{\prime}\mathbf{B}\approx\underbrace{-i(\phi^{\dagger} \overleftrightarrow{\nabla}\phi)}_{\text{by London eq.}}+\underbrace{0}_{ \text{correction}}=\frac{1-\cos\Theta}{r}\mathbf{1}_{\varphi}+0, \tag{48}\] so that the gauge field for \(Z\) looks to have a singularity for \(r\to 0\) in this approximation. However, this is not correct when we take into account the decrease of the Higgs field in the flux tube. This is well-known from the Meissner effect in superconductivity and gauge models, in which the singularity of the gauge potential disappears due to the rapid decrease of the Higgs expectation value there. Nambu stated this situation as the singularity disappears due to the smearing by the Higgs field. As for the gauge potential for \(\gamma\), the singularity can not be smeared out by the Higgs field, so that the singularity remains at \(r=0,\;\Theta=\pi\), \[\sqrt{g^{2}+g^{{}^{\prime}2}}\mathbf{A}^{(\gamma)}=+g^{\prime}(\phi^{ \dagger}\tau^{a}\phi)\mathbf{A}^{a}+g\mathbf{B}=\left\{-\xi\frac{g^{\prime}}{g}+\eta \frac{g}{g^{\prime}}\right\}\frac{1-\cos\Theta}{r}\mathbf{1}_{\varphi}. \tag{49}\] Therefore, we have to impose the following condition to discard the singularity, \[-\frac{g^{\prime}}{g}\xi+\frac{g}{g^{\prime}}\eta=0, \tag{50}\] which fixes the parameters \((\xi,\eta)\) as follows: \[\xi=\frac{g^{2}}{g^{2}+g^{{}^{\prime}2}}=\cos^{2}\theta_{W},\;\; \eta=\frac{g^{{}^{\prime}2}}{g^{2}+g^{{}^{\prime}2}}=\sin^{2}\theta_{W}. \tag{51}\] Here, we have used the notation \(\theta_{W}\), but it is the mixing angle between massive and massless gauge bosons in the dyonium model, and is not necessary to take the SM value. On the other hand, the time component of the gauge potential for photon is \[\sqrt{g^{2}+g^{{}^{\prime}2}}A^{\gamma}_{0}=+g^{\prime}(\phi^{ \dagger}\tau^{a}\phi)A^{a}_{0}+gB_{0}=\frac{g^{\prime}}{g}Q\rho(x), \tag{52}\] which is singular at the positions of dyon (\(l_{1}=0\)) and anti-dyon (\(l_{2}=0\)), but these singularities are normal ones for the point charges. If necessary, they can be smeared out by introducing a finite size \(\delta\) to dyon and anti-dyon. 
Physical picture of the monopolium stated by [14] is the following: A monopole and an anti-monopole are connected by a thin tube having a width \((gv)^{-1}\) of the symmetry breaking scale. The \(SU(2)\) flux of \(4\pi/g\) goes out (in) from the monopole (to the anti-monopole), the part of which \(\eta(4\pi/g)\) is radiated as a dipole field, while the remaining part of which \(\xi(4\pi/g)\) flows inside the tube. On the other hand, the \(U(1)\) flux of \(\eta(4\pi/g^{\prime})\) forms a solenoid without end-points, which gives both a dipole field outside and a flux inside the tube. In our case of dyonium, we have additional electric charges \(Q^{\prime}_{e}=Q/g^{\prime}\) at the monopole position and \(-Q^{\prime}_{e}\) at the anti-monopole position. From the electric chafes, the dipole electric field is radiated. ### Modification of London equation near Z-string We have to know the features of magnetic flux and gauge potential near the \(Z\)-string more precisely. For this purpose, we modify the London equation near the Z-string located on the \(z\)-axis from \(-d/2\) to \(d/2\). The Z-flux flows inside the thin tube along it. The equations of motion of our \(SU(2)_{L}\times U(1)_{Y}\) model are, precisely, \[\circ \left(D_{\mu}(A,B)\right)^{2}\phi(x)=-\lambda v^{2}(\phi^{\dagger }\phi-1)\phi(x), \tag{53}\] \[\circ \left(\delta^{ac}\partial_{\mu}-g\epsilon^{abc}A^{b}_{\mu} \right)F^{c,\mu\nu}(x)=-igv^{2}\left(\phi^{\dagger}\frac{\tau^{a}}{2}D_{\mu} (A,B)\phi-(D_{\mu}(A,B)\phi)^{\dagger}\frac{\tau^{a}}{2}\phi\right),\] (54) \[\circ \partial_{\mu}B^{\mu\nu}(x)=-ig^{\prime}v^{2}\left(\phi^{\dagger} \frac{-1}{2}D_{\mu}(A,B)\phi-\frac{-1}{2}(D_{\mu}(A,B)\phi)^{\dagger}\phi \right). \tag{55}\] London equation is an approximation, in which the Higgs potential in the right hand side of (53) vanishes. This is valid when the expectation value of the Higgs doublet \(\langle\phi(x)^{\dagger}\phi(x)\rangle\) takes unity everywhere, when normalized by a constant \(v^{2}\). This is, however, not valid near the \(Z\)-string. Accordingly, London equation is to be modified near the \(Z\) string, which is well-known from Meissner effect in superconductivity by Ginzburg and Landau [19], and the string-like solution in gauge theories by Nielsen and Olesen [20]. Therefore, we modify the Higgs field \(\phi(x)\) and the gauge field \(\mathbf{A}^{(Z)}(x)\) of \(Z\), from the previous ones based on the London equation, to the following ones: \[\circ \phi(x)=H(x)\phi_{0}(x)=\left(\underbrace{\downarrow}_{\mbox{ \scriptsize{by London eq.}}}+\underbrace{\bar{H}(x)}_{\mbox{\scriptsize{ correction}}}\right)\phi_{0}(x),\ \ \mbox{and} \tag{56}\] \[\circ \sqrt{g^{2}+g^{\prime 2}}\mathbf{A}^{(Z)}(x)=-(\phi^{ \dagger}\tau^{a}\phi)g\mathbf{A}^{a}+(\phi^{\dagger}\phi)g^{\prime} \mathbf{B}\] (57) \[= \underbrace{\frac{-i\left(\phi(x)^{\dagger}\overleftrightarrow{ \nabla}\phi(x)\right)}{(\phi^{\dagger}\phi)}}_{\mbox{\scriptsize{by London eq.}}}+\underbrace{\sqrt{g^{2}+g^{\prime 2}}\bar{\mathbf{A}}(x)}_{\mbox{\scriptsize{ correction}}}, \tag{58}\] where \(\phi_{0}(x)\) is the normalized Higgs configuration used in Eq.(10). The last equation is the modification of the previous Eq.(48). 
Then, the equations of motion (53), (54), (55) can be written only in terms of the corrections \(\bar{H}\) and \(\bar{\mathbf{A}}\), with the definitions \(m_{Z}=\sqrt{g^{2}+g^{{}^{\prime}2}}\ v/\sqrt{2}\) and \(m_{H}=2\sqrt{\lambda}v\), namely, \[\circ \mathbf{\nabla}^{2}\bar{H}(x)=\frac{1}{4}(g^{2}+g^{{}^{\prime}2})\bar{\mathbf{A}}(x)^{2}\left(1+\bar{H}(x)\right)+m_{H}^{2}\left(\bar{H}(x)+\frac{3}{2}\bar{H}(x)^{2}+\frac{1}{2}\bar{H}(x)^{3}\right), \tag{59}\] \[\circ \nabla\times(\nabla\times\bar{\mathbf{A}}(x))=m_{Z}^{2}\left(1+\bar{H}(x)\right)^{2}\bar{\mathbf{A}}(x), \tag{60}\] where \(\frac{-i(\phi^{\dagger}\nabla\,\phi)}{(\phi^{\dagger}\phi)}=\frac{1-\cos\Theta}{r}\mathbf{1}_{\varphi}\) and the identity \((\tau^{a}n_{1}^{a})\phi(x)=\phi(x)\) have been used. The former relation confirms \(\boldsymbol{A}^{(Z)}\) in Eq.(48). In solving these equations, it is not so easy to use the coordinate system \((\rho,\cos\Theta)\), even on a plane with a fixed \(\varphi\).4 (See the discussion of issue 2) about the usage of the rotating elliptic coordinate system.) Footnote 4: The Laplacian to use is, \(\nabla^{2}_{(2D)}=r(\nabla\rho)^{2}\left\{\partial_{\rho}\frac{1}{r}\partial_{\rho}+\partial_{\cos\Theta}r\partial_{\cos\Theta}\right\}\), where \(r\) should be expressed in terms of \((\rho,\cos\Theta)\). This is at least possible numerically. Therefore, what we can do here is to assume an infinitely long dyonium, considering the dependency only on \(r\) and ignoring \(z\). Then, our equations become identical to those studied by Nielsen and Olesen in gauge theories. The equations of motion become5 Footnote 5: In deriving the second equation, Stokes' theorem is applied to a disk \(S\) with boundary curve \(C\), surrounding the \(Z\)-string with radius \(r\), \[\int\int_{S(r)}(\nabla\times\boldsymbol{A})\cdot d\boldsymbol{S}=\oint_{\partial S=C(r)}\boldsymbol{A}\cdot d\boldsymbol{x},\ \ \mbox{or}\ \ (\nabla\times\boldsymbol{A})_{z}=\frac{1}{r}\partial_{r}(rA(r)).\] \[\left(\partial_{r}^{2}+\frac{1}{r}\partial_{r}\right)\bar{H}(r)=\frac{1}{4}(g^{2}+g^{{}^{\prime}2})\bar{\boldsymbol{A}}(r)^{2}\left(1+\bar{H}(r)\right)+m_{H}^{2}\left(\bar{H}+\frac{3}{2}\bar{H}^{2}+\frac{1}{2}\bar{H}^{3}\right), \tag{61}\] \[\partial_{r}\left(\frac{1}{r}\partial_{r}\left(r\bar{A}(r)\right)\right)=\left(\partial_{r}^{2}+\frac{1}{r}\partial_{r}-\frac{1}{r^{2}}\right)\bar{A}(r)=m_{Z}^{2}\left(1+\bar{H}(r)\right)^{2}\bar{A}(r). \tag{62}\] In the linear approximation, the solution has been well known since [20]: \[\bar{A}(r)=a\ m_{Z}K_{1}(m_{Z}r),\ \mbox{and}\ \bar{H}(r)=b\ K_{0}(m_{H}r), \tag{63}\] where \(K_{m}(r)\) is the modified Bessel function, satisfying \[\left(\partial_{r}^{2}+\frac{1}{r}\partial_{r}-\frac{m^{2}}{r^{2}}\right)K_{m}(r)=K_{m}(r), \tag{64}\] and the asymptotic behavior for \(r\rightarrow\infty\) is, in general, \[K_{m}(r)\sim\sqrt{\frac{\pi}{2r}}e^{-r}\left\{1+\frac{m^{2}-1/4}{2r}+O\left(\frac{1}{r^{2}}\right)\right\}\ (r\rightarrow\infty). \tag{65}\] As for the behavior for \(r\to 0\), \(K_{1}(r)\) is simple, but \(K_{0}(r)\) is a little delicate, \[K_{1}(r\to 0)\sim\frac{1}{r},\ \ \mbox{but}\ \ K_{0}(r\to 0)\sim-(\gamma_{E}+\ln(r/2)), \tag{66}\] where \(\gamma_{E}\) is Euler's constant. We will fix \(a\) in the next subsection, but leave \(b\) free. Now, the gauge field coming from the \(Z\)-string can be expressed by \[\boldsymbol{A}^{a(Z)}(x)=(n_{1}^{a}\mathbf{1}_{\varphi})\left\{\frac{1-\cos\Theta}{r\sqrt{g^{2}+g^{{}^{\prime}2}}}+a\ m_{Z}K_{1}(m_{Z}r)\right\}. 
\tag{67}\] We need to know the tension of \(Z\)-string, the energy stored per unit length of the string, which was estimated in [20]: \[E_{\mbox{\scriptsize Z-string}}=Kd,\ \mbox{with}\ \ K=c\ m_{Z}^{2}. \tag{68}\] In the next subsection, we will estimate the parameter \(c\). ### Estimation of Z-string parameters, \(a\) and \(c\) To estimate the parameters \(a\), we use the property that the gauge field \(\mathbf{A}^{(Z)}\) is smeared so that it is free from the singularity at \(r=0\). Then, we have \[A^{(Z)}\ \mathop{\longrightarrow}_{r\to 0}\ \frac{2}{r\sqrt{g^{2}+g^{{}^{ \prime}2}}}+a\frac{1}{r}=0,\ \ {\rm giving}\ \ a=\frac{-2}{\sqrt{g^{2}+g^{{}^{\prime}2}}}. \tag{69}\] Now, we have finally obtained the expression for the gauge field of \(Z\)-string, as follows: \[\mathbf{A}^{(Z)a}(x)=n_{1}^{a}A^{(Z)}(x){\bf 1}_{\varphi},\ \ {\rm and}\ \ A^{(Z)}(x)=\frac{1}{\sqrt{g^{2}+g^{{}^{ \prime}2}}}\left(\frac{1-\cos\Theta}{r}-2m_{Z}\ K_{1}(m_{Z}r)\right). \tag{70}\] To fix \(c\), the coefficient of Z-string tension, we have to estimate the energy per unit length, coming from the magnetic field \(\nabla\times\mathbf{A}^{(Z)}\), \[(\nabla\times\mathbf{A}^{(Z)})_{z}=\frac{2m_{Z}^{2}}{ \sqrt{g^{2}+g^{{}^{\prime}2}}}\ K_{0}(m_{Z}r). \tag{71}\] Thus, the tension \(K=E_{\rm Z-string}(d)/d=cm_{Z}^{2}\), or the parameter \(c\), becomes \[c=\frac{4\pi}{(g^{2}+g^{{}^{\prime}2})}\int_{0}^{\infty}rdr\ K_ {0}(r)^{2}=\frac{2\pi}{(g^{2}+g^{{}^{\prime}2})}, \tag{72}\] where \(c\) is finite, even if \(K_{0}(r)\) diverges logarithmically at \(r\to 0\).6 Footnote 6: Shiro Komata informed the author of the estimation of \(c\) in their private communication: \[\int_{0}^{\infty}r\ drK_{0}(r)^{2}=\frac{1}{2}\left[r^{2}\left\{K_{0}(r)^{2}- K_{1}(r)^{2}\right\}\right]_{0}^{\infty}=\frac{1}{2}.\] In this way, the parameters \(a\) and \(c\) for \(Z\)-string have been determined. ### Charge quantization condition for dyon and fixing of \(Q\) We have to impose the quantization condition between electric and magnetic charges, following Schwinger [18]. We know that no magnetic charge exists for \(U(1)\) part (in QED), so that we consider both magnetic charge \(Q_{m}^{a}\) and electric charge \(Q_{e}^{a}\) for \(SU(2)\) part, while only the electric charge \(Q_{e}^{\prime}\) for \(U(1)\) part, giving a list of charges \((Q_{m}^{a},Q_{e}^{a};Q_{e}^{\prime})\). In our dyonium we have the following field strengths for \(SU(2)\) and \(U(1)\) parts, in the static situation without time-dependency: \[g\mathbf{B}^{a} =(\phi^{\dagger}\tau^{a}\phi)(-\eta)(+\mathbf{\nabla} \rho),\ g\mathbf{E}^{a}=(\phi^{\dagger}\tau^{a}\phi)Q(-\mathbf{\nabla}\rho), \tag{73}\] \[g^{\prime}\mathbf{B}^{\prime} =-\eta(+\mathbf{\nabla}\rho),\ g^{\prime}\mathbf{E}^{\prime}=Q(-\mathbf{\nabla}\rho). \tag{74}\] This shows the solenoid-like dipole electric and magnetic fields, spreading out from the dyonium. The magnetic flux inside the thin tube coming from the \(SU(2)\) monopole is ignored in this expression. This missing part can be recovered by taking a pure \(SU(2)\) model with \(\xi=1\), \[(g\mathbf{B}^{a})_{SU(2){\rm monopole}}=(\phi^{\dagger} \tau^{a}\phi)(-1)(+\mathbf{\nabla}\rho). \tag{75}\] Now, we have \[\left(\mathbf{B}^{a}_{SU(2){\rm monopole}},\mathbf{E}^{a};\mathbf{E}^{\prime} \right)=\left(\frac{n_{1}^{a}}{g},\frac{n_{1}^{a}Q}{g},\frac{Q}{g^{\prime}} \right)(-\mathbf{\nabla}\rho), \tag{76}\] where \(n_{1}^{a}=(\phi^{\dagger}\tau^{a}\phi)\). 
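Before reading off the charges, we note that the two facts used above in fixing the Z-string parameters can be checked numerically. The following minimal sketch (using standard numpy/scipy routines; it is an illustration, not part of the derivation) verifies that \(K_{m}\) satisfies the modified Bessel equation (64) and that the normalization integral quoted in footnote 6, \(\int_{0}^{\infty}r\,K_{0}(r)^{2}dr=1/2\), indeed gives \(c=2\pi/(g^{2}+g^{\prime 2})\) in Eq. (72).

```python
# Numerical check of (i) the modified Bessel equation (64) and (ii) the tension
# integral int_0^inf r K_0(r)^2 dr = 1/2 used to fix c = 2*pi/(g^2 + g'^2).
import numpy as np
from scipy.special import kv
from scipy.integrate import quad

# (i) check (d^2/dr^2 + (1/r) d/dr - m^2/r^2) K_m(r) = K_m(r) by finite differences
def bessel_residual(m, r, h=1e-5):
    d1 = (kv(m, r + h) - kv(m, r - h)) / (2 * h)
    d2 = (kv(m, r + h) - 2 * kv(m, r) + kv(m, r - h)) / h**2
    return d2 + d1 / r - (m**2 / r**2) * kv(m, r) - kv(m, r)

for m in (0, 1):
    r = np.linspace(0.5, 5.0, 10)
    print(f"max residual of Eq.(64) for K_{m}:", np.max(np.abs(bessel_residual(m, r))))

# (ii) the tension integral; quad handles the integrable log divergence of K_0 at r -> 0
val, err = quad(lambda r: r * kv(0, r)**2, 0.0, np.inf)
print("int_0^inf r K_0(r)^2 dr =", val, "(expected 0.5)")
```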
Using \(\mathbf{\nabla}\cdot(-\mathbf{\nabla}\rho)=4\pi\left\{\delta^{(3)}(l_{1})-\delta^{(3)}(l_{2})\right\}\), we can read off the magnetic and electric charges \((Q_{m},Q_{e})\) from \(\mathbf{\nabla}\cdot\mathbf{B}=4\pi Q_{m}\delta^{(3)}(\mathbf{x}-\mathbf{x}_{m})\) and \(\mathbf{\nabla}\cdot\mathbf{E}=4\pi Q_{e}\delta^{(3)}(\mathbf{x}-\mathbf{x}_{e})\). The charges at \(z=\pm\frac{d}{2}\) are \[(Q_{m}^{a},Q_{e}^{a};Q_{e}^{\prime})=\pm\left(\frac{n_{1}^{a}}{g},\frac{n_{1}^{a}Q}{g},\frac{Q}{g^{\prime}}\right). \tag{77}\] It is known that the electric charge and the magnetic charge \((Q_{e},Q_{m})\) of the dyon should be quantized [18], in order to define the non-singular angular momentum between two dyons with \((Q_{e},Q_{m})_{1}\) and \((Q_{e},Q_{m})_{2}\). The condition is \[(Q_{e})_{1}(Q_{m})_{2}-(Q_{e})_{2}(Q_{m})_{1}=(\hbar c)\times(\mbox{integer}), \tag{78}\] where \((\hbar c)\) is included to clarify the dimension of the electric and magnetic charges. In applying this condition to the \(SU(2)\) monopole and charge, if there exists only a single type of dyonium, then \(Q\) is arbitrary. However, if two kinds of dyonium exist, in one of which the dyon has a charge \((Q_{m}^{a},Q_{e}^{a};Q_{e}^{\prime})_{1}=\left(\frac{n_{1}^{a}}{g},\frac{n_{1}^{a}Q}{g},\frac{Q}{g^{\prime}}\right)\), while in the other the dyon has a charge \((Q_{m}^{a},Q_{e}^{a};Q_{e}^{\prime})_{2}=\left(\frac{n_{1}^{a}}{g},-\frac{n_{1}^{a}Q}{g},-\frac{Q}{g^{\prime}}\right)\), then the quantization condition determines \(Q\) as \[Q=\frac{g^{2}}{2}\times(\mbox{integer}). \tag{79}\] Then, as an example, we can choose \(Q=\frac{g^{2}}{2}\), which determines \[(Q_{m}^{a},Q_{e}^{a};Q_{e}^{\prime})=\pm\left(n_{1}^{a}\frac{1}{g},\pm n_{1}^{a}\frac{g}{2},\frac{g}{2\tan\theta_{W}}\right). \tag{80}\] Now, the dyonium configuration in the \(SU(2)_{L}\times U(1)_{Y}\) gauge model has been established. ## 3 A method to estimate L- and B-number violation processes In the previous section, we have succeeded in exciting the time-dependent electric fields of the \(SU(2)\) part \(\mathbf{E}^{a}(x,t)\) and the \(U(1)\) part \(\mathbf{E}^{\prime}(x,t)\), so that they can be parallel to the magnetic fields \(\mathbf{B}^{a}(x,t)\) and \(\mathbf{B}^{\prime}(x,t)\). The important point here is that the electric and magnetic fields are always parallel or anti-parallel, so that \(\mathbf{E}(x,t)\cdot\mathbf{B}(x,t)\) has a definite sign (positive or negative). Then, the Chern number and the fermion number are "monotonically increasing or decreasing". This is the reason why the dyonium was chosen in this paper. Even if the generation of an electric field is expected for a given configuration of magnetic fields, it is not easy to control the direction of the electric field to be parallel or anti-parallel to the magnetic field. Stochastically, at finite temperature, the direction of the electric field is arbitrary at each time, resulting in no net L- and B-number generation, unless an external bias is introduced by a non-vanishing chemical potential, (\(\mu_{B}\), \(\mu_{L}\) or else)\(\neq 0\). ### Chiral anomaly and Chern number In the presence of parallel or anti-parallel electric and magnetic fields, the "chiral anomaly" is induced radiatively. 
To make definite the statement in Introduction, we introduce the total fermion number current for the left-handed fermion doublets, \(J^{\mu}_{(d)L}(x)=\sum_{a=0}^{N_{d}-1}\bar{\psi}^{(a)}_{L}(x)\gamma^{\mu}\psi^{(a )}_{L}(x)\), and fermion singlets \(J^{\mu}_{(s)L}(x)=\sum_{a=0}^{N_{s}-1}\bar{\psi}^{(a)}_{L}(x)\gamma^{\mu}\psi^{ (a)}_{L}(x)\), where the conservation of them are violated via the chiral anomaly for \(SU(2)\) and \(U(1)\) groups, \[\partial_{\mu}J^{\mu}_{(d)L}(x)=-\frac{g^{2}N_{d}}{32\pi^{2}} \left(F^{a}_{\mu\nu}(x)\tilde{F}^{a\mu\nu}(x)\right)=-\frac{g^{2}N_{d}}{8\pi^{2 }}\left(\mathbf{E}^{a}(x)\cdot\mathbf{B}^{a}(x)\right), \tag{81}\] \[\partial_{\mu}J^{\mu}_{(s)L}(x)=-\frac{g^{\prime 2}N_{s}}{32\pi^{2 }}\left(B_{\mu\nu}(x)\tilde{B}^{\mu\nu}(x)\right)=-\frac{g^{{}^{\prime}2}N_{s }}{8\pi^{2}}\left(\mathbf{E}^{\prime}(x)\cdot\mathbf{B}^{\prime}(x)\right). \tag{82}\] The expression of gauge fields obtained in (29)-(32) gives the non-vanishing Chern numbers for the dyonium, \[q=\text{Chern}(SU(2))\equiv\frac{g^{2}}{8\pi^{2}}\int d^{4}x\,( \mathbf{E}^{a}(x)\cdot\mathbf{B}^{a}(x))=\left(\frac{e}{4\pi}\right)^{2}\int d^{4}x\;( -\mathbf{\nabla}\rho(x))^{2}, \tag{83}\] \[q^{\prime}=\text{Chern}(U(1))\equiv\frac{g^{\prime 2}}{8\pi^{2 }}\int d^{4}x\,(\mathbf{E}^{\prime}(x)\cdot\mathbf{B}^{\prime}(x))=\left(\frac{e}{4 \pi}\right)^{2}\int d^{4}x\;(-\mathbf{\nabla}\rho(x))^{2}, \tag{84}\] where \(e=g\sin\theta_{W}\) is the "electric charge". Therefore, the four dimensional integral of the anomaly equation yields \[[n_{F}]_{t=-\infty}^{t=\infty}=\left[\sum_{a=0}^{N_{d}-1}n_{F}^{(a)}\right]_{ t=-\infty}^{t=\infty}+\left[\sum_{b=0}^{N_{s}-1}n_{F}^{{}^{\prime}(b)}\right]_{ t=-\infty}^{t=\infty}=-(N_{d}\;q+N_{s}\;q^{\prime})\;(\neq 0), \tag{85}\] where \(n_{F}^{(a)}\) and \(n_{F}^{{}^{\prime}(b)}\) stands for, respectively, the number of fermions for \(a\)-th doublet and \(b\)-th singlet, \[n_{F}^{(a)}=\int d^{3}x\;\psi^{(a)}_{L}(x)^{\dagger}\psi^{(a)}_{L}(x),\;n_{F}^ {{}^{\prime}(b)}=\int d^{3}x\;\psi^{(b)}_{L}(x)^{\dagger}\psi^{(b)}_{L}(x). \tag{86}\] ### Dyonium action As for the action of dyonium \(S_{\text{dyonium}}\), the contribution from the dipole fields is given in (29)-(32), but the other contribution from the thin Z-string should also be included. This additional term comes from the linear potential of the \(Z\)-string, \[S_{Z-\text{string}}=-\int d^{4}x\;V(x)_{Z-\text{string}}=-\int dt\;Kd(t), \tag{87}\] where the constant \(K\) gives the tension of Z-string, given by (68) in Section 2.3, \[K=\frac{2\pi m_{Z}^{2}}{g^{2}+g^{\prime 2}}. \tag{88}\] Then, we have \[S_{\text{dyonium}}\] \[=\frac{1}{e^{2}}\int d^{4}x\left\{\left(-\sin^{4}\theta_{W}+\frac {g^{4}}{4}\right)(-\mathbf{\nabla}\rho)^{2}+\frac{g^{4}}{4}f(r,z)^{2}\dot{d}(t)^{2 }\right\}-\int dt\;Kd(t), \tag{89}\] with \(e=gg^{\prime}/\sqrt{g^{2}+g^{{}^{\prime}2}}=g\sin\theta_{W}\). ### Temporal development of Chern number and action To estimate the temporal development of Chern number and action, we need to know: \[\int_{D}d^{3}x(-\mathbf{\nabla}\rho(x))^{2}=2\times\left[4\pi l^{2} \left(\frac{1}{l}-\frac{1}{d(t)}\right)\times\frac{1}{l^{2}}\right]_{l=\delta}=8 \pi\left(\frac{1}{\delta}-\frac{1}{d(t)}\right). \tag{90}\] Here, \(D\) is the three dimensional domain, excluding the two small spheres with radius \(\delta\) around dyon and anti-dyon. The \(\delta\) is the same order as \(1/m_{W}\) or \(1/m_{Z}\). Now we have \[q=q^{\prime}=\int dt\left(\frac{1}{\delta}-\frac{1}{d(t)}\right). 
\tag{91}\] Here, we note again that the Chern number is a time integral of a positive function, so that "the Chern number is generated monotonically in time". This is due to the dyonium having definite electric and magnetic charges at both end points. As for the action \(S_{\rm dyonium}\) of the dyonium, the dipole-type energy \(\frac{1}{2}\left\{(\mathbf{B}^{a})^{2}+(\mathbf{E}^{a})^{2}+(\mathbf{B}^{\prime})^{2}+(\mathbf{E}^{\prime})^{2}\right\}\) and the string-type energy \(\frac{1}{2}(\mathbf{B}^{(Z)})^{2}\) contribute to it, \[S_{\rm dyonium}\] \[=\int dt\left\{\frac{8\pi}{e^{2}}\left(-\sin^{4}\theta_{W}+\frac{g^{4}}{4}\right)\left(\frac{1}{\delta}-\frac{1}{d(t)}\right)+\frac{g^{4}}{4e^{2}}\left(\int d^{3}x\;f(r,z)^{2}\right)\dot{d}(t)^{2}-Kd(t)\right\}. \tag{92}\] Introducing the constants \(M\) and \(C\), \[M=\frac{g^{4}}{2e^{2}}\left(\int d^{3}x\;f(r,z)^{2}\right),\;\text{and} \tag{93}\] \[C=\frac{8\pi}{e^{2}}\left(\sin^{4}\theta_{W}-\frac{g^{4}}{4}\right), \tag{94}\] we have arrived at \[S_{\rm dyonium}=\int dt\left\{\frac{M}{2}\dot{d}(t)^{2}-V_{\rm dyonium}(d(t))\right\}, \tag{95}\] \[\text{where}\;\;V_{\rm dyonium}(d)=Kd-\frac{C}{d}+\frac{C}{\delta}. \tag{96}\] The action of the dyonium obtained in this way is very suggestive; the classical solution for the electric and magnetic fields obtained from the London equation gives the Coulomb interaction between the dyon and anti-dyon, while the \(Z\)-string connecting them, which is missing in the London equation but definitely exists, provides a linear potential. Thus, the dyonium has the same type of potential as mesons, of linear plus Coulomb type. ### Classical motion of length \(d(t)\) of dyonium We have obtained the action of the dyonium in terms of the distance \(d(t)\), which gives the classical motion of \(d(t)\). Energy conservation reads, \[\frac{M}{2}(\dot{d})^{2}-\frac{C}{d}+Kd=E_{\rm tot}, \tag{97}\] from which we have (changing the notation from \(d\) to \(r\), in order not to be misled) \[dt=dr\sqrt{\frac{M/2}{E_{\rm tot}+\frac{C}{r}-Kr}}. \tag{98}\] This implies a periodic motion in the range \(\delta<d(=r)<d_{*}\), with a period \(T\) for \(-T/2<t<T/2\). Here, \(T\) and \(d_{*}\) are given by \[\frac{T}{2}=\int_{\delta}^{d_{*}}dr\sqrt{\frac{M/2}{E_{\rm tot}+\frac{C}{r}-Kr}},\ \mbox{with}\ V_{\rm dyonium}(d_{*})=-\frac{C}{d_{*}}+Kd_{*}=E_{\rm tot}. \tag{99}\] Accordingly, the Chern number increases by a constant amount \(|q_{1/2}|\) during each half-period \(T/2\), \[|q_{1/2}|=\int_{0}^{\frac{T}{2}}dt\left(\frac{1}{\delta}-\frac{1}{d(t)}\right)=\int_{\delta}^{d_{*}}dr\left(\frac{1}{\delta}-\frac{1}{r}\right)\sqrt{\frac{M/2}{E_{\rm tot}+\frac{C}{r}-Kr}}. \tag{100}\] During the multiple half-periods from \(t=0\) to \(t=nT/2+\Delta t\), where \(n\) (a positive integer) stands for the number of repetitions of the half-period, and \(0<\Delta t<T/2\), the "Chern number of the dyonium increases monotonically, namely almost linearly with periodic modulations", \[q(t)=n|q_{1/2}|+\Delta q,\ \ (0<\Delta q<|q_{1/2}|,\ n=0,1,2,\cdots). \tag{101}\] Since the Hamiltonian of the dyonium is conserved and equal to \(E_{\rm tot}\), its action also increases linearly in time, \[S_{\rm dyonium}=-E_{\rm tot}\int dt. \tag{102}\] ### Transition amplitude of L- and B-number violation processes If the Chern number \(q\) is generated, B- and L-number violation occurs, as shown in Subsection 3.1. 
That is, the \(q\)-point function of the chiral fermions can be evaluated as follows: \[\langle\psi_{L}(x_{1})_{1}\cdots\psi_{L}(x_{q})_{q}\rangle\] \[\equiv\frac{\int\mathcal{D}A(x)\mathcal{D}B(x)\int\mathcal{D}\bar{\psi}(x)\mathcal{D}\psi(x)\ e^{iS_{\rm dyonium}^{(q)}}\ e^{iS_{\rm fermion}}\left(\psi(x_{1})_{1}\cdots\psi(x_{q})_{q}\right)}{\int\mathcal{D}A(x)\mathcal{D}B(x)\int\mathcal{D}\bar{\psi}(x)\mathcal{D}\psi(x)\ e^{iS_{\rm dyonium}^{(q)}}\ e^{iS_{\rm fermion}}} \tag{103}\] \[=\frac{\int\mathcal{D}A(x)\mathcal{D}B(x)e^{iS_{\rm dyonium}^{(q)}}\left(\chi(x_{1})_{1}\cdots\chi(x_{q})_{q}\right)}{\int\mathcal{D}A(x)\mathcal{D}B(x)e^{iS_{\rm dyonium}^{(q)}}}\equiv\langle\chi(x_{1})_{1}\cdots\chi(x_{q})_{q}\rangle, \tag{104}\] where the zero mode \(\chi(x)_{i}\) for the L-handed fermion \(\psi_{L}(x)_{i}\), satisfying \(S_{\rm fermion}(\chi_{i})=0\), or \[i\gamma^{\mu}D_{\mu}(A,B)\chi_{i}(x;A(x),B(x))=0, \tag{105}\] gives the dominant contribution. The zero mode is obtained for each fermion species \(i\), depending on a given configuration of gauge fields. Here, we consider the case of doublet fermions. The fermion number violation process that occurs depends on the model chosen. #### 3.5.1 Simple Examples The simplest example is given in the standard model (SM) with four fermion doublets, which is relevant to proton decay. We have four doublets of quarks and leptons in a generation, \[\text{SM}:\left\{\begin{pmatrix}\nu_{e}\\ e\end{pmatrix}_{L},\ \begin{pmatrix}u_{a}\\ d_{a}\end{pmatrix}_{L}\right\}_{SM}\text{ (with three colors }a=1-3). \tag{106}\] As was estimated in Eq.(104), whenever the dyonium gains Chern number \(q\), the non-vanishing fermionic amplitude in the dyonium background requires \(q\) fermion zero modes, one from each doublet, \[\langle\psi_{L}(x_{1})_{1}\cdots\psi_{L}(x_{q})_{q}\rangle=\langle\chi(x_{1})_{1}\cdots\chi(x_{q})_{q}\rangle. \tag{107}\] The amplitude should be invariant under the global symmetry of \(SU(2)_{L}\times U(1)_{Y}\). In the background field of the dyonium, the zero mode solution plays the role of the wave function for the fermion. In analogy with the fermion wave functions \(u(\mathbf{p},s)\) for annihilation of a particle and \(v(\mathbf{p},s)\) for creation of an anti-particle, we define the wave functions in the momentum representation of a fermion species \(i\), in the background field of the dyonium \((D\bar{D})\), as \[u_{i}^{(D\bar{D})}(p,s)\equiv\int d^{4}x\;e^{ipx}[\chi_{i}(x)]_{s_{3}=s},\ \text{ and }\ v_{i}^{(D\bar{D})}(p,s)=u_{i}^{(D\bar{D})}(p,s)^{C}=(i\sigma_{2})u_{i}^{(D\bar{D})}(p,s)^{*}, \tag{108}\] where \(C\) is the charge conjugation operation. Therefore, we can write down the fermion operator near the dyonium as follows: \[\hat{\psi}_{i}(x)_{L}=\sum_{s=\pm\frac{1}{2}}\int\frac{d^{3}p}{(2\pi)^{3}2|\mathbf{p}|}\left(\hat{b}_{i}(p,s)u_{i}^{(D\bar{D})}(p,s)e^{-ipx}+\hat{d}_{i}(p,s)^{\dagger}v_{i}^{(D\bar{D})}(p,s)e^{ipx}\right), \tag{109}\] where \(\hat{b}\) and \(\hat{d}^{\dagger}\) are the annihilation operator of the fermion \(i\) and the creation operator of its anti-fermion \(\bar{i}\), respectively. Then, the amplitude \(\mathcal{A}\left(u_{1L}+u_{2L}\rightarrow\overline{d_{3}}_{R}+e_{R}^{+}\right)\), contributing to the proton decay \(p\rightarrow\pi^{0}+e^{+}\), can be estimated following ordinary field theory, by using the fermion zero modes as wave functions. In the same way, we can examine another example beyond the SM. 
To generate the neutrino mass, an \(SO(10)\) GUT model with four fermion doublets, \(\{\psi^{(0)},\cdots,\psi^{(3)}\}\) in (5), was studied in [12]. If we choose the \(SU(2)_{ij}\) group with \((ij,klm)=(14,235)\), then we have the following four fermion doublets: \[\text{Beyond SM}:\left(\begin{matrix}\overline{N_{R}}\\ u_{1}\end{matrix}\right)_{L},\ \left(\begin{matrix}\overline{u_{1}}\\ \nu\end{matrix}\right)_{L},\ \left(\begin{matrix}\frac{d_{3}}{d_{2}}\\ \overline{d_{2}}\end{matrix}\right)_{L},\ \left(\begin{matrix}\frac{d_{2}}{d_{3}}\\ \overline{d_{3}}\end{matrix}\right)_{L}. \tag{110}\] Similarly, we can estimate the amplitude \(\mathcal{A}\left(N_{R}\rightarrow\nu_{L}+\overline{(d_{c})}_{L}+(d_{c})_{L}\right)\ (c=1,2)\), which is relevant to the generation of the neutrino mass. Now, we are ready to examine the estimation of the fermion zero modes. ## 4 Fermion zero modes In order to estimate the fermion zero modes, we begin with the Dirac equation. ### Dirac equation Choosing the chiral basis 8, the L-handed fermionic zero mode satisfies Footnote 8: Chiral basis, \(\gamma^{0}=\begin{pmatrix}0&I\\ I&0\end{pmatrix},\,\gamma^{5}=\begin{pmatrix}I&0\\ 0&-I\end{pmatrix},\,\mathbf{\gamma}=\begin{pmatrix}0&-\mathbf{\sigma}\\ \mathbf{\sigma}&0\end{pmatrix},\,\,\text{and}\,\,\,\psi(x)=\begin{pmatrix}\psi_{R}(x)\\ \psi_{L}(x)\end{pmatrix}.\) \[\left\{\left(i\partial_{t}+\frac{\tau^{a}}{2}gA^{a}_{0}(x)+\frac{Y}{2}g^{\prime}B_{0}(x)\right)+\mathbf{\sigma}\cdot\left(\mathbf{p}+\frac{\tau^{a}}{2}g\mathbf{A}^{a}(x)+\frac{Y}{2}g^{\prime}\mathbf{B}(x)\right)\right\}\psi_{L}(x)=0. \tag{111}\] We may write it with the notation \(\bar{\sigma}^{\mu}\) and the Minkowski metric,9 Footnote 9: The four component notation \(\bar{\sigma}^{\mu}=(1,-\mathbf{\sigma})\), with \(\mathbf{p}=-i\mathbf{\nabla},\mathbf{A}^{a}=-A^{a}_{i},\mathbf{B}=-B_{i}\) for (\(i=1-3\)). \[i\bar{\sigma}^{\mu}D_{\mu}(A,B)\psi_{L}(x)=\bar{\sigma}^{\mu}\left(i\partial_{\mu}+\frac{\tau^{a}}{2}gA^{a}_{\mu}(x)+\frac{Y}{2}g^{\prime}B_{\mu}(x)\right)\psi_{L}(x)=0. \tag{112}\] Here, \(Y\) is the hypercharge of the fermion species \(\psi_{L}\). A direct way to solve the zero mode equations gives coupled partial differential equations among four wave functions. Rather than this direct way, we will adopt a simpler way, based on an ansatz. Our way is to search for the solution as the tensor product of a spinor in the iso-space times a spinor in the spin-space. Furthermore, we identify the iso-spinor with the Higgs doublet \(\phi(x)\). We denote the other spinor, in the spin-space, as \(\psi(x)\), which is determined so that the tensor product satisfies the original Dirac equation for zero modes. 
Thus, the ansatz for \(\psi_{L}(x)\) can be written as \[\psi_{L}(x)=e^{im\varphi}\phi(x)\otimes\psi(x)=e^{im\varphi}\begin{pmatrix}\phi(x)\psi_{1}(x)\\ \phi(x)\psi_{2}(x)\end{pmatrix},\,\,\,\,\psi(x)=\begin{pmatrix}\psi_{1}(x)\\ \psi_{2}(x)\end{pmatrix} \tag{113}\] Then, we start to solve the Dirac equation with hypercharge \(Y\) and \(J_{3}=m\) in the dyonium background field, namely, \[0=\bar{\sigma}^{\mu}iD_{\mu}(A,B)^{(Y)}e^{im\varphi}\phi(x)\otimes\psi_{Y}(x) \tag{114}\] \[=\bar{\sigma}^{\mu}\left(iD_{\mu}(A,B)^{(Y=-1)}+\frac{1+Y}{2}g^{\prime}B_{\mu}+\sqrt{g^{2}+g^{\prime 2}}\frac{\tau^{a}}{2}A^{a}_{\mu}(\text{Z-string})\right)e^{im\varphi}\phi(x)\otimes\psi_{Y}(x)\] \[=e^{im\varphi}\left(g^{\prime}\frac{1+Y}{2}B_{\mu}+\sqrt{g^{2}+g^{\prime 2}}\frac{\tau^{a}}{2}A^{a}_{\mu}(\text{Z-string})\right)\phi(x)\otimes\bar{\sigma}^{\mu}\psi_{Y}(x)\] \[+\,\,e^{im\varphi}\phi(x)\otimes i\bar{\sigma}^{\mu}(\partial_{\mu}+im\partial_{\mu}\varphi)\psi_{Y}(x), \tag{115}\] where the London equation, \(D_{\mu}(A,B)^{(Y=-1)}\phi(x)=0\), was used. To evaluate this equation, we need the explicit expressions for \(B_{\mu}\) and \(A^{a}_{\mu}(\text{Z-string})\). The latter is the additional contribution from the \(Z\)-string given in (70). The non-vanishing components of \(B_{\mu}\) and \(A^{a}_{\mu}(\text{Z-string})\) are \[g^{\prime}B_{0}=Q\rho(x)=-\frac{g^{2}}{2}\rho(x),\,\,g^{\prime}B_{\varphi}=\eta\frac{1-\cos\Theta}{r}, \tag{116}\] \[A^{a}_{\varphi}(\text{Z-string})=n^{a}_{1}\frac{1}{\sqrt{g^{2}+g^{\prime 2}}}\left(\frac{1-\cos\Theta}{r}-2m_{Z}K_{1}(m_{Z}r)\right). \tag{117}\] Here, we meet with a welcome formula, which implies that the non-Abelian isospin structure in \(A^{a}(\text{Z-string})\) can be completely absorbed into the Higgs doublet itself, that is, \[\frac{\tau^{a}n_{1}^{a}}{2}\phi(x)=\frac{1}{2}H(r)\begin{pmatrix}\cos\Theta,&\sin\Theta e^{-i\varphi}\\ \sin\Theta e^{i\varphi},&-\cos\Theta\end{pmatrix}\begin{pmatrix}\cos\frac{\Theta}{2}\\ \sin\frac{\Theta}{2}e^{i\varphi}\end{pmatrix}=\frac{1}{2}\phi(x). \tag{118}\] Owing to this formula, the iso-doublet field \(\phi(x)\) decouples from the equation, leading to a simple two-component Dirac equation for the iso-singlet field \(\psi_{Y}(x)\), \[\left\{i\partial_{t}+\left(\frac{1+Y}{2}\frac{-g^{2}}{2}\rho\right)-i\mathbf{\sigma}\cdot\mathbf{\nabla}+\sigma_{\varphi}C_{\varphi}(x)\right\}\psi_{Y}(x)\times f_{Y}(x)=0, \tag{119}\] where the decomposition of \(\bar{\sigma}^{\mu}\) and \(\mathbf{A}(x)\) into components10 is used with the definition of \(\sigma_{\varphi}\). Footnote 10: The notation \(\sigma_{\varphi}=i(-\sigma_{+}e^{-i\varphi}+\sigma_{-}e^{i\varphi})\) is given by \(\mathbf{\sigma}\cdot\mathbf{\nabla}=\sigma_{z}\partial_{z}+\sigma_{r}\partial_{r}+\sigma_{\varphi}\frac{1}{r}\partial_{\varphi}=\sigma_{z}\partial_{z}+(\sigma_{+}e^{-i\varphi}+\sigma_{-}e^{i\varphi})\partial_{r}+i(-\sigma_{+}e^{-i\varphi}+\sigma_{-}e^{i\varphi})\frac{1}{r}\partial_{\varphi}\), and \(\mathbf{\sigma}\cdot\mathbf{A}=\sigma_{z}A_{z}+\sigma_{r}A_{r}+\sigma_{\varphi}A_{\varphi}\). Then, the \(\varphi\) component gauge field \(C_{\varphi}(x)\) is given by \[C_{\varphi}(x)=\frac{1+Y}{2}\eta\frac{1-\cos\Theta}{r}+\frac{1}{2\sqrt{g^{2}+g^{\prime 2}}}\left(\frac{1-\cos\Theta}{r}-2m_{Z}K_{1}(m_{Z}r)\right)+\frac{m}{r}. 
\tag{120}\] What we are going to do in the following is to choose \(\psi_{Y}(x)\), two component real-spinor, as a solution of another "London equation", \[\left\{i\partial_{\mu}-C_{\mu}(x)\right\}\psi_{Y}(x)=0, \tag{121}\] where \(C_{\varphi}(x)\) is the only non-vanishing component for \(C_{\mu}(x)\). ### Axial symmetry Our dyonium solution has an axial symmetry about a rotation around the \(z\)-axis. It is easily seen that if we perform the following two rotations successively, the iso-space rotation around the iso-spin's third axis and the real-space rotation around its third axis \(z\) with the same angles, then the Dirac equation is invariant. The space rotation is associated with the rotation of spin for fermions. Therefore, the wave function undergoes a phase change by the following rotation, \[e^{i\left(\hat{L}^{3}+\frac{\tau^{3}}{2}+\frac{\sigma^{3}}{2} \right)}\psi_{L}(x)=e^{im\phi}\psi_{L}(x), \tag{122}\] where \(\hat{L}^{3}\) is the third component of the angular momentum operator. Thus, the third component \(J^{3}\) is conserved for the axial symmetric dyonium, \[J^{3}=L^{3}+\frac{\tau^{3}}{2}+\frac{\sigma^{3}}{2}=m. \tag{123}\] If we label the wave function as \(\psi_{L_{3},\frac{\tau_{3}}{2},\frac{\sigma_{3}}{2}}\), with the third components of angular momentum, iso-spin and spin, then there are four fermion states with \(J_{3}=m\), \[\psi_{L}(J_{3}=m)=\left(\psi_{m-1,\frac{1}{2},\frac{1}{2}},\psi_ {m,-\frac{1}{2},\frac{1}{2}},\psi_{m,\frac{1}{2},-\frac{1}{2}},\psi_{m+1,- \frac{1}{2},-\frac{1}{2}}\right)_{L}, \tag{124}\] where all four components have the same \(J^{3}=m\). Therefore, the ansatz of the tensor product for the fermion zero mode, consistent with the axial symmetry, can be \[\chi_{L}(x)=e^{im\varphi}\phi\otimes\psi_{Y}=e^{im\varphi}\times \begin{pmatrix}\cos\Theta\\ e^{i\varphi}\sin\Theta\end{pmatrix}\otimes\begin{pmatrix}-e^{-i\varphi}\sin \Psi_{Y}\\ \cos\Psi_{Y}\end{pmatrix}\times f_{Y}(x). \tag{125}\] Here, we have used the normalized 2-component spinors for both \(\phi\) and \(\psi_{Y}\), so that the extra \(f_{Y}(x)\) field is multiplied. We have denoted the zero mode fermion as \(\chi_{L}(x)\) as before. Reduction of Dirac equation to a partial differential equation for a single function \(f_{Y}(t,\mathbf{x})\) Before determining the spinor \(\psi_{Y}(x)\) in the spin-space, let us consider why the form of gauge potentials (appeared in the dyonium solution) has a special form. In other words, what is the origin of the form of gauge potential, \(A_{\mu}\sim\frac{1-\cos\Theta}{r}\partial_{\mu}\varphi\)? Indeed this form frequently appears in the dyonium solution. We know already that it comes from the spinor relation in the iso-space, \[-i\left(\phi(x)^{\dagger}\overleftrightarrow{\partial_{\mu}}\phi (x)\right)=-i\left(\cos\tfrac{\Theta}{2},\ \ e^{i\varphi}\sin\tfrac{\Theta}{2}\right)^{\dagger} \overleftrightarrow{\partial_{\mu}}\begin{pmatrix}\cos\tfrac{\Theta}{2}\\ e^{i\varphi}\sin\tfrac{\Theta}{2}\end{pmatrix} \tag{126}\] \[=2\sin^{2}\frac{\Theta}{2}\ \partial_{\mu}\varphi=\frac{1-\cos \Theta}{r}\hat{1}_{\varphi}. \tag{127}\] In the same manner, for the spinor in the "spin-space", if it takes the following form, \[\psi_{Y}(x)=\begin{pmatrix}-e^{-i\varphi}\sin\tfrac{\Psi_{Y}}{2} \\ \cos\tfrac{\Psi_{Y}}{2}\end{pmatrix}, \tag{128}\] we have the same relation \[i\left(\psi_{Y}(x)^{\dagger}\overleftrightarrow{\partial_{\mu}}\psi_{Y}(x) \right)=2\sin^{2}\frac{\Psi_{Y}}{2}\ \partial_{\mu}\varphi=\frac{1-\cos\Psi_{Y}}{r}\hat{1}_{\varphi}. 
\tag{129}\] On the other hand, this solution of the gauge field is obtained from the London equation (121). Therefore, we have \[C_{\mu}(x)=\frac{i}{2}\left(\psi_{Y}^{\dagger}(x)\overleftrightarrow{\partial_{\mu}}\psi_{Y}(x)\right),\ \ \text{or}\ \ C_{\varphi}(x)=\frac{1-\cos\Psi_{Y}}{2r}. \tag{130}\] To fix \(\Psi_{Y}(r,z)\), \(C_{\varphi}(x)\) should be chosen as Eq.(120), which implies \[1-\cos\Psi_{Y}\equiv\left((1+Y)\eta+\frac{1}{\sqrt{g^{2}+g^{{}^{\prime}2}}}\right)(1-\cos\Theta)+\frac{2m_{Z}r}{\sqrt{g^{2}+g^{{}^{\prime}2}}}K_{1}(m_{Z}r)+2m. \tag{131}\] Here, we have to choose a proper angular momentum \(J_{3}=m\) to have a solution of \(\psi_{Y}(r,z)\) for a given hypercharge \(Y\). Now, we have determined the time-independent spinor in the spin space, \(\psi_{Y}(r,z)\), explicitly. Next, we are going to determine the remaining part of the wave function, that is, the time-dependent \(f_{Y}(x)\), so that \(\chi_{L}\) may satisfy the zero mode Dirac equation. Indeed, Eq.(119) gives the following equation for \(f_{Y}(x)\), \[\psi_{Y}\left(i\partial_{t}+\frac{1+Y}{2}\frac{-g^{2}}{2}\rho(x)\right)f_{Y}-i(\boldsymbol{\sigma}\psi_{Y})\cdot\boldsymbol{\nabla}f_{Y}=0. \tag{132}\] Multiplying by \(\psi_{Y}^{\dagger}\) from the left, we have \[\left(i\partial_{t}+\frac{1+Y}{2}\frac{-g^{2}}{2}\rho(x)\right)f_{Y}-i\left(\hat{\boldsymbol{n}}_{Y}(x)\cdot\boldsymbol{\nabla}\right)f_{Y}=0, \tag{133}\] where \(\hat{\boldsymbol{n}}_{Y}(x)\) is the vector field in the real-space, an analog of the Higgs triplet vector \(n_{1}^{a}=(\phi^{\dagger}\tau^{a}\phi)\) in the iso-space: \[\hat{\boldsymbol{n}}_{Y}(x)=(\psi_{Y}^{\dagger}\boldsymbol{\sigma}\psi_{Y})=-(\sin\Psi_{Y}\cos\varphi,\sin\Psi_{Y}\sin\varphi,\cos\Psi_{Y})_{\text{Cartesian real space}}. \tag{134}\] The obtained equation, Eq.(133), can be solved without much difficulty, since all the coefficient functions are explicitly given in terms of \(r\) and \(\cos\Theta\). What we have done above can be summarized as follows: the complexities originally associated with the iso-spin and real-spin structures have been completely resolved by introducing a tensor product of the iso-spinor \(\phi\) and the real-spinor \(\psi_{Y}\). These two spinors are determined by the dyonium solution, and the remaining single-component function \(f_{Y}(t,\boldsymbol{x})\) satisfies a simple equation, which describes the dynamics of the fermions. The final equation obtained for \(f_{Y}(x)\) is a kind of renormalization group (RG) equation for a zero-point function, in which \(t\) plays the role of the logarithm of the energy scale, and the three parameters \((d,r,z)\) play the roles of three coupling constants. ### Renormalization group-like equation for fermion zero modes Let us discuss the obtained renormalization group (RG)-like equation a little. From the definition of \(\hat{\boldsymbol{n}}_{Y}\) we have11 Footnote 11: \(\nabla=\left(\cos\varphi\partial_{r}-\sin\varphi\frac{1}{r}\partial_{\varphi},\sin\varphi\partial_{r}+\cos\varphi\frac{1}{r}\partial_{\varphi},\partial_{z}\right)_{\text{Cartesian real space}}\) \[\hat{\boldsymbol{n}}_{Y}\cdot\boldsymbol{\nabla}=-(\sin\Psi_{Y}\partial_{r}+\cos\Psi_{Y}\partial_{z}). 
\tag{135}\] Since the time variation is induced by the temporal change of the distance \(d(t)\) between the dyon and anti-dyon, the equation for \(f_{Y}\) reads, in terms of the three variables \(x=(d,r,z)\), \[\left(\partial_{t}+\dot{d}(t)\;\partial_{d}+\sin\Psi_{Y}(x)\;\partial_{r}+\cos\Psi_{Y}(x)\;\partial_{z}+(-i)\frac{1+Y}{2}\frac{-g^{2}}{2}\rho(x)\right)f_{Y}(x)=0, \tag{136}\] where the coefficient functions \(\rho(x)\) and \(\Psi_{Y}(x)\), as well as \(\Theta(x)\), are given by \[\circ\rho(x)=\frac{1}{\sqrt{r^{2}+(z-d/2)^{2}}}-\frac{1}{\sqrt{r^{2}+(z+d/2)^{2}}}, \tag{137}\] \[\circ 1-\cos\Psi_{Y}\equiv\left((1+Y)\eta+\frac{1}{\sqrt{g^{2}+g^{\prime\,2}}}\right)(1-\cos\Theta)+\frac{2(m_{Z}r)K_{1}(m_{Z}r)}{\sqrt{g^{2}+g^{\prime\,2}}}+2m, \tag{138}\] \[\circ \cos\Theta(x)-1=\frac{z-d/2}{\sqrt{r^{2}+(z-d/2)^{2}}}-\frac{z+d/2}{\sqrt{r^{2}+(z+d/2)^{2}}}. \tag{139}\] The temporal change of \(d(t)\) is classically determined as \[\dot{d}(t)=v_{d}(x)=\sqrt{2E_{tot}/M+2C/(Md)-2Kd/M}, \tag{140}\] where \(C=8\pi/e^{2}\left(\sin^{4}\theta_{W}-g^{4}/4\right)\), \(K=2\pi m_{Z}^{2}/(g^{2}+g^{\prime 2})\), and \(M=g^{4}/2e^{2}\int drdz\ 2\pi rf(r,z)^{2}\). The equation is simply \[D_{t}\;f_{Y}(x,t)=\left\{\partial_{t}+\sum_{i=d,r,z}v_{i}(x)\partial_{i}\right\}f_{Y}(x,t)=-i\gamma(x)f_{Y}(x,t), \tag{141}\] where \(D_{t}\) is the Lagrangian derivative in hydrodynamics. Then, this equation can be solved by using the running variables \(\bar{x}_{i}(t)\) (\(i=d,r,z\)), denoted as usual with bars, \[\left\{\begin{array}{ll}\circ&\frac{d(\bar{d}(t))}{dt}=v_{d}(\bar{x})=\sqrt{2E_{tot}/M+2C/(Md)-2Kd/M},\\ \circ&\frac{d\bar{r}(t)}{dt}=v_{r}(\bar{x})=\sin\Psi_{Y}(\bar{x}),\\ \circ&\frac{d\bar{z}(t)}{dt}=v_{z}(\bar{x})=\cos\Psi_{Y}(\bar{x})\\ &=1-\left((1+Y)\eta+\frac{1}{\sqrt{g^{2}+g^{\prime 2}}}\right)(1-\cos\Theta(\bar{d},\bar{r},\bar{z}))-\frac{2(m_{Z}\bar{r})K_{1}(m_{Z}\bar{r})}{\sqrt{g^{2}+g^{\prime 2}}}-2m.\end{array}\right. \tag{142}\] with the "anomalous dimension" \[\gamma(x)=\frac{1+Y}{2}\frac{g^{2}}{2}\rho(x). \tag{143}\] Now, we can write the solution for \(f_{Y}(x)\) formally as follows: \[f_{Y}(\mathbf{x},t)=e^{-i(\frac{1+Y}{2}\frac{g^{2}}{2})\int_{0}^{t}dt^{\prime}\;\rho(\bar{x}(t^{\prime}))}\times f_{Y}(\mathbf{x}_{0},t=0). \tag{144}\] Here, \(\mathbf{x}=\bar{\mathbf{x}}(t)\) expresses the final port at time \(t\), reached after leaving \(\mathbf{x}_{0}=\bar{\mathbf{x}}(0)\) at \(t=0\) by taking a boat which floats with the flow of the stream. The velocity field \(\mathbf{v}(x)\) (or the set of \(\beta\) functions) in our problem is explicitly known, so that it is expected to guide us to any place at any time \((\mathbf{x},t)\), starting from a boundary point. The boundaries are the spatial infinity, \(|\mathbf{x}|\to\infty\), and the axis of the Z-string, \(r\to 0\) with \(-d/2\leq z\leq d/2\). From the normalizability of the wave function, we expect the boundary condition to be that \(f_{Y}\) vanishes exponentially at spatial infinity, while on the Z-string it approaches zero in a power-like manner. In any case, a detailed analysis of the boundary conditions is necessary. 
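Although the boundary analysis is left open, the characteristic system itself is straightforward to integrate numerically. The following minimal sketch uses purely illustrative values for the couplings and for the constants \(E_{\rm tot}\), \(M\), \(C\), \(K\) (these numbers are assumptions for the sketch, not the values of the model); it integrates the running variables of Eq. (142) with scipy on the outgoing half-period and accumulates the phase integral of Eq. (144).

```python
# Sketch: integrate the characteristic ("beta function") system (142) and accumulate
# the anomalous-dimension phase of Eq. (144). All parameter values are placeholders.
import numpy as np
from scipy.special import k1
from scipy.integrate import solve_ivp

g, gp = 0.65, 0.35                     # illustrative SU(2) and U(1) couplings
mZ = 1.0
Y, m = 1.0, 0.0                        # hypercharge and angular momentum (assumed)
eta = gp**2 / (g**2 + gp**2)
E_tot, M, C, K = 5.0, 1.0, 1.0, 0.5    # placeholder dyonium constants

def rho(d, r, z):                       # Eq. (137)
    return 1/np.sqrt(r**2 + (z - d/2)**2) - 1/np.sqrt(r**2 + (z + d/2)**2)

def cos_theta(d, r, z):                 # Eq. (139)
    return 1 + (z - d/2)/np.sqrt(r**2 + (z - d/2)**2) - (z + d/2)/np.sqrt(r**2 + (z + d/2)**2)

def cos_psi(d, r, z):                   # Eq. (138)
    pref = (1 + Y)*eta + 1/np.sqrt(g**2 + gp**2)
    one_minus = pref*(1 - cos_theta(d, r, z)) + 2*mZ*r*k1(mZ*r)/np.sqrt(g**2 + gp**2) + 2*m
    return 1 - one_minus

def rhs(t, y):                          # running variables (d, r, z)
    d, r, z = y
    vd = np.sqrt(max(2*E_tot/M + 2*C/(M*d) - 2*K*d/M, 0.0))
    cp = np.clip(cos_psi(d, r, z), -1.0, 1.0)
    return [vd, np.sqrt(1 - cp**2), cp]

sol = solve_ivp(rhs, (0.0, 2.0), [0.2, 1.0, 0.0], dense_output=True, rtol=1e-8)

ts = np.linspace(0.0, 2.0, 400)
d, r, z = sol.sol(ts)
phase = np.trapz(0.5*(1 + Y)*0.5*g**2*rho(d, r, z), ts)   # exponent of Eq. (144)
print("running variables at t=2:", sol.sol(2.0), " accumulated phase:", phase)
```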
If everything works well, then the fermion zero mode with hypercharge \(Y\) can be determined as follows: \[\chi_{L}(x)^{(Y)}=e^{im\varphi}\times\phi(x)\otimes\psi_{Y}(x)\times f_{Y}(x), \tag{145}\] where \(f_{Y}(x)\) is the solution of the RG-like equation, and the two spinors \(\phi(x)\) and \(\psi_{Y}(x)\), in the iso-space and the real-space, respectively, are given by the dyonium configuration, \[\phi(x)=\begin{pmatrix}\cos\Theta\\ e^{i\varphi}\sin\Theta\end{pmatrix},\text{ and }\psi_{Y}(x)=\begin{pmatrix}-e^{-i\varphi}\sin\frac{\Psi_{Y}}{2}\\ \cos\frac{\Psi_{Y}}{2}\end{pmatrix}. \tag{146}\] Here, \(\Theta(r,z)\) and \(\Psi_{Y}(r,z)\) are explicitly given in terms of \((d,r,z)\), if a proper angular momentum \(m\) is chosen for a given \(Y\). ## 5 Conclusion and Discussion ### Conclusion This paper studied the dyonium-induced fermion number violation mechanism in an \(SU(2)_{L}\times U(1)_{Y}\) gauge theory with a doublet Higgs field, in which the following issues have been solved. These issues could not be answered in the previous paper [12]. 1) [C. J. Goebel's question (1982)] "How can the electric field be excited so as to be parallel to the dipole magnetic field of the monopolium?" To answer this question, we consider this time the dyonium, generalizing Nambu's solution of the monopolium [14]. 2) The obtained dyonium oscillates under the linear plus Coulomb potential acting between the dyon and anti-dyon. The linear potential comes from the "\(Z\)-string" connecting the monopole and anti-monopole, while the Coulomb potential comes from both the electric and magnetic dipole forces. This oscillation creates the Chern number (or the fermion number), which increases or decreases monotonically in time. 3) The gauge field configuration coming from the \(Z\)-string, ignored previously, is taken into account this time in the approximation of an infinitely long dyonium. The study of the Z-string for a finite-sized dyonium is the next target. 4) Towards estimating the rate of the fermion number violation processes, we have examined the Dirac equation. In this study we have found a way to reduce the Dirac equation of the \(SU(2)_{L}\times U(1)_{Y}\) theory to a single-component renormalization group-like equation, without spin and iso-spin. This reduction is simple, so that it is useful for evaluating the reaction rates of the fermion number violation processes explicitly as products of the fermion zero modes. ### Discussion The issues not studied well in this paper are summarized as follows: #### 5.2.1 Solving the RG-like equation [Issue 1)] Numerical estimation of the fermion zero modes by solving the RG-like equation, and estimation of the reaction rates for the B- and L-number violation processes, including the generation of the neutrino mass, remain to be done. In this study we should consider any \(SU(2)_{L}\) and \(U(1)_{Y}\) groups embedded in a larger group. It is not clear at present, but there is a hope that the RG-like equation can be solved easily by separating the variables in the rotating elliptic coordinate system \((\xi,\eta)\), which will be discussed next. #### 5.2.2 Rotating elliptic (ellipsoidal) coordinate system [Issue 2)] A natural coordinate system for the dyonium is \((\rho,\cos\Theta,\varphi)\), since \(\rho(x)\) is the potential of the dipole field and \(\cos\Theta\) labels the equipotential curves. This coordinate system is, however, a curvilinear one; it has surely been studied in the past, and there exist mathematical formulae we can refer to. 
In particular, we have to know the special functions on the two-dimensional plane described by the coordinate system \(\mathbf{x}_{(2D)}=(\rho,\cos\Theta)\) with fixed azimuthal angle \(\varphi\): \[\left\{\mathbf{\nabla}_{(2D)}^{2}-\frac{m^{2}}{r^{2}}\right\}\tilde{K}_{m}(\mathbf{x}_{(2D)},M)=M^{2}\tilde{K}_{m}(\mathbf{x}_{(2D)},M). \tag{147}\] For the expression of \(\mathbf{\nabla}_{(2D)}^{2}\), see footnote 4. If the replacement from the modified Bessel function \(K_{m}(x,M)\) to this new special function \(\tilde{K}_{m}(\mathbf{x}_{(2D)},m_{Z}\) or \(m_{H})\) is successfully done, then we can solve the finite-length case of the \(Z\)-string. In the case of the infinitely long dyonium, we keep only the \(r\) dependency and ignore the dependency on \(z\) and \(d\). For the finite-sized case, \(1-\cos\Theta\) seems more natural than \(r\). Indeed, if we impose the condition \(|z|\ll d\), restricting ourselves to the central region of the dyonium, then we find the relation, \[\cos\Theta\approx 1-\frac{d}{\sqrt{r^{2}+(d/2)^{2}}},\ \ {\rm or}\ \ r\approx d\sqrt{\frac{1}{(1-\cos\Theta)^{2}}-\frac{1}{4}}. \tag{148}\] This suggests the usage of \((1-\cos\Theta)\) instead of \(r\); the former variable seems not so bad for the dyonium, since it reproduces \(\cos\Theta\approx-1\) near the Z-string for \(r\ll d\), and \(r\approx d/(1-\cos\Theta)\) for \(r\gg d/2\). A more precise discussion is, however, indispensable to elucidate the inner structure of the finite-sized dyonium. We finally recognized from discussions with the colleagues at the Open U. of Japan (OUJ) that issue 2) can be solved completely by using a different coordinate system, the so-called "rotating elliptic coordinate system". Its coordinates \((\xi,\eta,\varphi)\) are related to the cylindrical coordinates \((r,z,\varphi)\) and to ours \((\rho,\cos\Theta,\varphi)\) as follows: \[r = \frac{d}{2}\sqrt{(\xi^{2}-1)(1-\eta^{2})},\ z=\left(\frac{d}{2}\right)\xi\eta, \tag{149}\] \[\rho = \left(\frac{4}{d}\right)\frac{\eta}{\xi^{2}-\eta^{2}},\ 1-\cos\Theta=\frac{2\xi(1-\eta^{2})}{\xi^{2}-\eta^{2}}, \tag{150}\] where the prolate-type elliptic coordinates are chosen. The range of the variables can be understood as \(1\leq\xi\leq\infty\) and \(-1\leq\eta\leq 1\), from the alternative parametrization \((\xi=\cosh u,\eta=\cos v)\). The surface \(\xi=1\) represents the finite-length \(Z\)-string and \(\xi=\infty\) is the spatial infinity, while \(\eta=\cos v\) gives the periodic coordinate along the elliptic curves whose two foci are located at the positions of the dyon and anti-dyon. Its Laplacian is well known, \[\left(\frac{d}{2}\right)^{2}\nabla_{(2D)}^{2}=\frac{1}{\xi^{2}-\eta^{2}}\left\{\partial_{\xi}(\xi^{2}-1)\partial_{\xi}+\partial_{\eta}(1-\eta^{2})\partial_{\eta}-m^{2}\left(\frac{1}{\xi^{2}-1}+\frac{1}{1-\eta^{2}}\right)\right\}. \tag{151}\] Thus, the wave equation at rest (without momenta) with mass \(M\) becomes \[\left\{\partial_{\xi}(\xi^{2}-1)\partial_{\xi}+\partial_{\eta}(1-\eta^{2})\partial_{\eta}-m^{2}\left(\frac{1}{\xi^{2}-1}+\frac{1}{1-\eta^{2}}\right)\right\}\psi(\xi,\eta)\] \[=\left(\frac{Md}{2}\right)^{2}(\xi^{2}-\eta^{2})\psi(\xi,\eta). \tag{152}\] Surprisingly, this wave equation can be solved by separation of variables. 
The solution is a superposition \(\psi(\xi,\eta)=\sum_{l,m}c_{lm}\tilde{K}_{lm}(\xi)\tilde{Y}_{lm}(\eta)\), each factor of which satisfies \[\left\{\partial_{\eta}(1-\eta^{2})\partial_{\eta}-\frac{m^{2}}{1-\eta^{2}}+\left(\frac{Md}{2}\right)^{2}\eta^{2}\right\}\tilde{Y}_{lm}(\eta) =\lambda_{lm}\tilde{Y}_{lm}(\eta), \tag{153}\] \[\left\{\partial_{\xi}(\xi^{2}-1)\partial_{\xi}-\frac{m^{2}}{\xi^{2}-1}-\left(\frac{Md}{2}\right)^{2}\xi^{2}\right\}\tilde{K}_{lm}(\xi) =-\lambda_{lm}\tilde{K}_{lm}(\xi). \tag{154}\] We have used a suggestive notation which implies that \(\tilde{Y}_{lm}(\eta)\) is a generalization to the dipole case of the spherical harmonics for \(d=0\), while \(\tilde{K}_{lm}(\xi)\) is that of the modified Bessel function for the cylindrical case \(d\to\infty\). To go a step further by writing down \(\mathbf{A}^{(Z)}\) in terms of \(\tilde{Y}_{lm}(\eta)\times\tilde{K}_{lm}(\xi)\), we have to be familiar with the special functions in the rotating elliptic coordinates (see, for example, [22]). So, we leave this issue as the next target. #### 5.2.3 Production and decay of dyonium [Issue 3)] How is the dyonium produced in the history of the universe? It is interesting to note that the dyonium solution is quite similar to the meson states. Therefore, the knowledge of quarkonium and of the monopolium [21] can be utilized. Indeed, the quantum mechanical treatment of the pair creation of a dyon and anti-dyon, and of the decay of the dyonium into hadrons or multi-photons, can be discussed in a similar manner. Dyoniums may be produced in the reheating era at temperature \(T_{R}\) after inflation ends, when the inflaton decays into a pair of \(D\) and \(\overline{D}\): \(\sigma_{\rm prod}(\mbox{inflaton}\to D,\overline{D})\). We are able to predict the number density of the dyonium \((D\overline{D})\) when reheating ends. Afterwards, the produced dyonium starts its periodic oscillation, during which the various fermion number violation processes are induced, involving the quarks and leptons existing in its neighborhood. The dyonium finally decays into multi-photons when the dyon \(D\) and the anti-dyon \(\overline{D}\) collide at \(d<2\delta\) (\(\delta\) being the radius of the dyon and anti-dyon); the processes \((D\overline{D})\to 2\gamma{\rm s},\ 3\gamma{\rm s},\ \cdots\) may occur quite frequently. It is also expected that the dyonium may be found at collider experiments. The production rate at a collider, \(\sigma_{\rm prod}(q+\overline{q},g+g\to D,\overline{D})\), its decay rate \(\sigma_{\rm decay}((D\overline{D})\to\mbox{multi-}\gamma{\rm s})\), and the life-time \(\tau_{\rm dyonium}\) of the dyonium can be estimated by using quantum field theory, as a generalization from the monopolium to the dyonium (see [21]). The fermion number violation processes (generating the neutrino mass and violating B- or L-number) occur within the life-time of the dyonium. The author leaves these unsolved issues to future study, if possible by himself or by someone else who is interested in this topic. ## Acknowledgments The author gives his sincere thanks to Professor Charles J. Goebel for the essential question he raised at Madison in 1982. The main purpose of this paper is to answer his question. He also thanks Kaoru Hagiwara for inviting him to give a seminar at Wisconsin U., where he had an opportunity to discuss with Professor Goebel for long hours. He is grateful to Neil David Barrie, Mathew Talia and Kimiko Yamashita for a number of valuable discussions and comments. 
He gives his thanks to the colleagues in OUJ, Shiro Komata, So Katagiri, Yoshiki Matsuoka, Mamoru Sugamoto, Koichiro Yamaguchi and Ken Yokoyama for valuable discussions, reading the manuscript and giving useful comments.
2309.07288
A divergence free $C^0$-RIPG stream function formulation of the incompressible Stokes system with variable viscosity
Pointwise divergence free velocity field approximations of the Stokes system are gaining popularity due to their necessity in precise modelling of physical flow phenomena. Several methods have been designed to satisfy this requirement; however, these typically come at a greater cost when compared with standard conforming methods, for example, because of the complex implementation and development of specialized finite element bases. Motivated by the desire to mitigate these issues for 2D simulations, we present a $C^0$-interior penalty Galerkin (IPG) discretization of the Stokes system in the stream function formulation. In order to preserve a spatially varying viscosity this approach does not yield the standard and well known biharmonic problem. We further employ the so-called robust interior penalty Galerkin (RIPG) method; stability and convergence analysis of the proposed scheme is undertaken. The former, which involves deriving a bound on the interior penalty parameter is particularly useful to address the $\mathcal{O}(h^{-4})$ growth in the condition number of the discretized operator. Numerical experiments confirming the optimal convergence of the proposed method are undertaken. Comparisons with thermally driven buoyancy mantle convection model benchmarks are presented.
Nathan Sime, Paul Houston, Cian R. Wilson, Peter E. van Keken
2023-09-13T20:14:57Z
http://arxiv.org/abs/2309.07288v1
A divergence free \(C^{0}\)-RIPG stream function formulation of the incompressible Stokes system with variable viscosity ###### Abstract Pointwise divergence free velocity field approximations of the Stokes system are gaining popularity due to their necessity in precise modelling of physical flow phenomena. Several methods have been designed to satisfy this requirement; however, these typically come at a greater cost when compared with standard conforming methods, for example, because of the complex implementation and development of specialized finite element bases. Motivated by the desire to mitigate these issues for 2D simulations, we present a \(C^{0}\)-interior penalty Galerkin (IPG) discretization of the Stokes system in the stream function formulation. In order to preserve a spatially varying viscosity this approach does not yield the standard and well known biharmonic problem. We further employ the so-called robust interior penalty Galerkin (RIPG) method; stability and convergence analysis of the proposed scheme is undertaken. The former, which involves deriving a bound on the interior penalty parameter is particularly useful to address the \(\Theta(h^{-4})\) growth in the condition number of the discretized operator. Numerical experiments confirming the optimal convergence of the proposed method are undertaken. Comparisons with thermally driven buoyancy mantle convection model benchmarks are presented. ## 1 Introduction ### Motivation Large areas of computational fluid dynamics rely on the accurate solution of the Stokes system which requires correct satisfaction of the mass conservation equation. We are particularly interested in geophysical flow calculations that model slow convection in the Earth's mantle to help understand the Earth's thermal and chemical evolution (Bercovici, 2015; Ricard, 2015; Schubert et al., 2001). Specific applications include: the modeling of the recycling and subsequent mixing of oceanic crust (Brandenburg et al., 2008; Christensen and Hofmann, 1994) potentially within preexisting heterogeneity (Gulcher et al., 2021), the thermal and chemical evolution of subduction zones (Gerya et al., 2021; Wada and King, 2015); and the formation of hotspots and large igneous provinces by mantle plumes. Exact, or at least approximately local, mass conservation is particularly important for particle methods (Christensen and Hofmann, 1994; Tackley and King, 2003; van Keken et al., 1997). It has been shown, for example, that significant artefacts can occur when using the popular Taylor-Hood (TH) finite element method for the numerical approximation of the Stokes system with particles. The TH element pair for velocity-pressure does not satisfy mass conservation locally. Such artefacts include the formation of holes and concretions in the particle distribution or the artificial settling of particles in gravity driven flows. Recent geodynamical examples demonstrating this are for purely compositionally-driven flow (Samuel, 2018; Sime et al., 2021), thermochemical convection (Jones et al., 2021; Pusok et al., 2017; Wang et al., 2015), and when mathematical fields represented by particles are advected (Maljaars et al., 2019, 2021; Sime et al., 2021, 2022). We demonstrate these artifacts in Figure 1 and further refer to Jenny et al. (2001); McDermott and Pope (2008). In this paper we will explore a new approach to guarantee exact mass conservation. 
Here, we specifically consider the incompressible Stokes system in 2D in a simply connected domain; this covers many of the typical geometries that are frequently used in geodynamical applications (Brandenburg et al., 2008; Hernlund and Tackley, 2008; Jones et al., 2021; Li and McNamara, 2022). The extension to simple compressible flow modeling for mantle convection (Bossman and van Keken, 2013; Jarvis and McKenzie, 1980; Tackley, 2008) is possible in the same fashion as in the transition made from Sime et al. (2021) to Sime et al. (2022). ### Context Several numerical schemes have been developed to exactly satisfy mass conservation of the velocity approximation, denoted here by \(\mathbf{u}_{h}\), in a pointwise sense, by which we mean that \(\nabla\cdot\mathbf{u}_{h}(\mathbf{x})=0\) for all \(\mathbf{x}\) in the computational domain (e.g., Brezzi et al., 1985; Cockburn et al., 2011; John et al., 2017; Raviart and Thomas, 1977; Rhebergen and Wells, 2018, 2020; Scott and Vogelius, 1985). In our recent work, which demonstrated the need for exact mass conservation (Jones et al., 2021; Sime et al., 2021, 2022), we employed an embedded discontinuous Galerkin-hybrid discontinuous Galerkin (HDG) method which yields a solenoidal velocity approximation (as developed in Rhebergen and Wells (2020) and implemented in Maljaars et al. (2021)). However, even with the utilization of static condensation, which factorizes the element-local problems in favor of the global system defined on the facets, the system assembly and computation of the solution is expensive. Seeking a more computationally efficient numerical scheme in the 2D setting is the key motivation of this article. The general approach which the above mentioned numerical schemes exploit to achieve exact pointwise satisfaction of mass conservation is by ensuring that the divergence of the velocity approximation lies in the space in which we seek the pressure approximation, denoted by \(Q^{h}\), i.e., that \(\nabla\cdot\mathbf{u}_{h}\in Q^{h}\). These schemes typically require the definition of specialized finite element (FE) basis functions which may be difficult to implement. Libraries such as basix(Scroggs et al., 2022) simplify and automate the creation of these FE bases; however, the variational formulation, assembly and computational solution of these systems may remain difficult. Also, as previously mentioned above, assembling HDG systems via static condensation requires careful management to preserve scalable and efficient local assembly into a global system. A popular alternative is to consider the biharmonic formulation of the Stokes system which provides an exactly divergence free approximation of the velocity. In this setting, the velocity is cast as the curl of an unknown potential field, i.e., \(\mathbf{u}_{h}=\nabla\times\phi_{h}\), where \(\phi_{h}\) is the FE approximation of the stream function. Hence, the numerical approximation of the velocity vector is trivially solenoidal. The standard FE discretization of the fourth-order biharmonic problem requires a \(C^{1}\) conforming basis for the approximation of \(\phi_{h}\)(e.g., Argyris et al., 1968; Morley, 1968) or, for example, a \(C^{k}\), \(k\geq 1\), continuous divergence conforming B-spline basis (e.g., Christensen and Hofmann, 1994; Evans and Hughes, 2013; Kopitzke, 1979; van Keken et al., 1993). 
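As a quick aside (a symbolic check with sympy, not taken from any of the cited works), the appeal of the stream function ansatz is immediate: the 2D curl of any scalar field is pointwise divergence free, so a velocity constructed this way satisfies mass conservation identically rather than only weakly.

```python
# Symbolic check that the 2D curl of an arbitrary scalar stream function is divergence free.
import sympy as sp

x, y = sp.symbols("x y")
psi = sp.Function("psi")(x, y)
u = sp.Matrix([sp.diff(psi, y), -sp.diff(psi, x)])      # curl of a scalar in 2D
div_u = sp.simplify(sp.diff(u[0], x) + sp.diff(u[1], y))
print(div_u)                                            # prints 0 for any smooth psi
```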
Figure 1: Advecting particles through velocity fields which are not pointwise divergence free may lead to spurious results. Here, we show an example advecting tracers using a third order accurate Runge–Kutta scheme in a mantle convection model exhibited as case T4 in our numerical experiments later in Section 5.2 (comprising a numerical benchmark in Tosi et al. (2015)). (a) The velocity field approximation computed using the \(C^{0}\)-RIPG method developed in this work; shown are the computational mesh, velocity streamlines and normalized speed field. (b) The initial configuration of \(256^{2}\) equidistant particles colored solely as a visual aid. (c) The advected particles after approximately 10 overturns in a _non_-pointwise divergence free velocity field computed using the TH method. (d) The advected particles after approximately 10 overturns in the pointwise divergence free velocity field computed using the \(C^{0}\)-RIPG method developed in this work. The average number of particles per cell is 512 with standard deviation of 36.67, 240.06 and 35.00 in (b), (c) and (d), respectively. The velocity field is discretized using polynomials of degree 2 in both the TH and \(C^{0}\)-RIPG cases shown.

In this work, our objective is to develop a FE formulation of the Stokes system by taking the same approach as the \(C^{0}\)-interior penalty Galerkin (IPG) formulation of the biharmonic problem, whilst preserving the symmetric gradient operator permitting variable viscosity models. IPG methods in the context of discontinuous Galerkin (DG) FE discretizations for second order partial differential equations (PDEs) are well understood (see, for example, Arnold et al., 2001). In the context of fourth-order problems we seek to extend the technique presented in, for example, Engel et al. (2002); see also the recent complementary paper by Dong and Mascotto (2023). In this setting, \(\phi_{h}\) is approximated with a \(C^{0}\) FE basis and continuity of its first derivative is enforced weakly. This formulation will yield many new terms on the facets of the FE mesh which may be verbose and arduous to implement. However, we exploit modern computational symbolic specification (Alnaes et al., 2014) and automatic generation of FE formulations (Kirby and Logg, 2006) which vastly reduces implementation complexity. Given this objective, we highlight what we see as clear advantages of the \(C^{0}\)-IPG formulation and therefore our motivation:

1. Solenoidal velocity approximation.
2. Exploiting strong imposition of \(\phi_{h}=0\) on the computational boundary yields pointwise satisfaction of \(\mathbf{u}_{h}\cdot\mathbf{n}=0\), where \(\mathbf{n}\) denotes the unit outward pointing normal vector on the boundary of the computational domain.
3. In the context of the \(2D\) system, reduction of the Stokes system's unknown velocity and pressure variables to a single unknown scalar potential.
4. Assembly using a standard \(C^{0}\) FE basis with no special requirements.

However, we must highlight that the \(C^{0}\)-IPG method does come with caveats:

1. The condition number of the underlying matrix stemming from the discretization of the fourth-order PDE exhibits growth at a rate of approximately \(h^{-4}\) as the mesh is uniformly refined; here, \(h\) is a measure of the mesh element size.
2. A dimensionless interior penalty parameter must be chosen to ensure stability of the scheme.
3. The matrix associated with the \(C^{0}\)-IPG discretization is more dense than a standard conforming method.
Caveat 2 further impacts caveat 1 in the sense that the interior penalty parameter should be carefully selected not to be so large as to further adversely affect the condition number of the underlying matrix. For this reason, we employ the robust IPG (RIPG) formulation introduced in Dong and Georgoulis (2022). Here, the key modification to the underlying IPG scheme is the exploitation of a weighted average operator; this then allows for a lower bound on the interior penalty parameter to be determined in a simple manner. ### Structure of the remainder of the paper This article is structured as follows: in Section 2 we define the Stokes system and the associated stream function formulation. In Section 3 we define the \(C^{0}\)-RIPG formulation, based on employing a suitable weighted average operator and interior penalty parameter. The stability and error analysis of the proposed scheme is studied in Section 4 where \(hp\)-optimal error bounds are established with respect to a given DG energy norm; this is in accordance with the analogous results derived in the case of a constant viscosity coefficient (Dong and Mascotto, 2023). The numerical performance of the \(C^{0}\)-RIPG scheme is considered in Section 5. Finally, in Section 6 we summarize the work presented in this article and discuss potential future developments. ## 2 Stream function formulation Let \(\Omega\subset\mathbb{R}^{2}\) denote a simply connected domain with boundary \(\partial\Omega\) and outward pointing unit normal vector \(\mathbf{n}\). The boundary is subdivided into Dirichlet and Neumann components \(\partial\Omega_{D}\) and \(\partial\Omega_{N}\), respectively, which do not overlap, i.e., \(\partial\Omega_{D}\cup\partial\Omega_{N}=\partial\Omega\) and \(\partial\Omega_{D}\cap\partial\Omega_{N}=\emptyset\). In \(\Omega\) we seek the velocity \(\mathbf{u}:\Omega\mapsto\mathbb{R}^{2}\) and pressure \(P:\Omega\mapsto\mathbb{R}\) which satisfy the Stokes system \[-\nabla\cdot(2\mu\varepsilon(\mathbf{u}))+\nabla P =\mathbf{f}, \tag{1}\] \[\nabla\cdot\mathbf{u} =0, \tag{2}\] subject to the boundary conditions \[\mathbf{u} =\mathbf{0}\quad\text{on }\partial\Omega_{D}, \tag{3}\] \[2\mu\varepsilon(\mathbf{u})\cdot\mathbf{n}-P\mathbf{n} =\mathbf{g}_{N}\text{ on }\partial\Omega_{N}. \tag{4}\] Here, \(\varepsilon(\mathbf{u})=\frac{1}{2}(\nabla\mathbf{u}+\nabla\mathbf{u}^{\top})\) is the symmetric gradient, \(\mu(\mathbf{x}):\Omega\mapsto\mathbb{R}^{+}\) is the viscosity, and \(\mathbf{f}(\mathbf{x}):\Omega\mapsto\mathbb{R}^{2}\) is a given forcing function. Defining the space \(\mathbf{H}^{1}_{\mathbf{0}}(\Omega)=\big{\{}\mathbf{v}\in[H^{1}(\Omega)]^{2}:\mathbf{v}=\mathbf{0} \text{ on }\partial\Omega_{D}\big{\}}\), the weak formulation of equations (1) and (2) reads: find \((\mathbf{u},P)\in\mathbf{H}^{1}_{\mathbf{0}}(\Omega)\times L_{2}(\Omega)\) such that \[B(\mathbf{u},\mathbf{v})-(\nabla\cdot\mathbf{v},P) =l(\mathbf{v}), \tag{5}\] \[(\nabla\cdot\mathbf{u},q) =0 \tag{6}\] for all \((\mathbf{v},q)\in\mathbf{H}^{1}_{\mathbf{0}}(\Omega)\times L_{2}(\Omega)\), where \[B(\mathbf{u},\mathbf{v}) =(2\mu\varepsilon(\mathbf{u}),\varepsilon(\mathbf{v})), \tag{7}\] \[l(\mathbf{v}) =(\mathbf{f},\mathbf{v})+\langle\mathbf{g}_{N},\mathbf{v}\rangle_{\partial\Omega _{N}}. \tag{8}\] Here, \((\cdot,\cdot)\) is the standard \(L_{2}(\Omega)\) inner product and \(\langle\cdot,\cdot\rangle_{\omega}\) denotes the \(L_{2}(\omega)\) inner product on a subset \(\omega\) of the boundary \(\partial\Omega\). 
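The velocity–pressure weak formulation (5)–(8) is the form we later compare against via the Taylor–Hood (TH) pair, and it maps almost verbatim to UFL. The following is a minimal sketch only: it uses legacy FEniCS/DOLFIN syntax with placeholder unit viscosity and zero forcing, whereas the implementation accompanying this work targets FEniCSx, whose interface differs.

```python
from dolfin import *  # legacy FEniCS (DOLFIN) namespace; FEniCSx syntax differs

mesh = UnitSquareMesh(32, 32)
P2 = VectorElement("Lagrange", mesh.ufl_cell(), 2)   # velocity space (degree 2)
P1 = FiniteElement("Lagrange", mesh.ufl_cell(), 1)   # pressure space (degree 1)
W = FunctionSpace(mesh, P2 * P1)                     # Taylor-Hood (TH) pair

(u, p) = TrialFunctions(W)
(v, q) = TestFunctions(W)
mu = Constant(1.0)         # placeholder viscosity
f = Constant((0.0, 0.0))   # placeholder forcing

# B(u, v) - (div v, P) = l(v) and (div u, q) = 0, cf. (5)-(8).  The Neumann
# term <g_N, v> on the traction boundary is omitted for brevity.
a = (inner(2 * mu * sym(grad(u)), sym(grad(v))) - div(v) * p + div(u) * q) * dx
L = inner(f, v) * dx
bc = DirichletBC(W.sub(0), Constant((0.0, 0.0)), "on_boundary")  # u = 0 on the Dirichlet boundary
```

Note that with pure Dirichlet velocity data the pressure is determined only up to a constant, which a practical solver must account for.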
Defining the divergence free space \(\mathbf{Z}_{\mathbf{0}}=\{\mathbf{v}\in\mathbf{H}^{1}_{\mathbf{0}}(\Omega):\nabla\cdot\mathbf{v}=0\}\) allows us to state the divergence free weak formulation: find \(\mathbf{u}\in\mathbf{Z}_{\mathbf{0}}\) such that \[B(\mathbf{u},\mathbf{v})=l(\mathbf{v})\qquad\forall\mathbf{v}\in\mathbf{Z}_{\mathbf{0}}. \tag{9}\] Given that \(\Omega\subset\mathbb{R}^{2}\) is simply connected, then for every \(\mathbf{v}\in\mathbf{Z}_{\mathbf{0}}\), there exists a unique \(\psi\in H^{2}(\Omega)\setminus\mathbb{R}\) such that \(\mathbf{v}=\nabla\times\psi\), where we define the curl operator acting on a scalar, \(\psi:\Omega\mapsto\mathbb{R}\), and a vector, \(\mathbf{z}=(\mathbf{z}_{x},\mathbf{z}_{y})^{\top}:\Omega\mapsto\mathbb{R}^{2}\), by \[\nabla\times\psi=\left(\frac{\partial\psi}{\partial y},-\frac{\partial\psi}{\partial x}\right)^{\top}\quad\text{and}\quad\nabla\times\mathbf{z}=\frac{\partial\mathbf{z}_{y}}{\partial x}-\frac{\partial\mathbf{z}_{x}}{\partial y}, \tag{10}\] respectively, cf. Girault and Raviart (1986). Setting \(\Psi_{\mathbf{0}}=\{\psi\in H^{2}(\Omega)\setminus\mathbb{R}:\nabla\times\psi=\mathbf{0}\text{ on }\partial\Omega_{D}\}\) we write the stream function formulation: find \(\phi\in\Psi_{\mathbf{0}}\) such that \[B(\nabla\times\phi,\nabla\times\psi)=l(\nabla\times\psi)\qquad\forall\psi\in\Psi_{\mathbf{0}}. \tag{11}\]

## 3 \(C^{0}\)-IPG formulation

Let \(\mathcal{F}^{\,h}\) be the subdivision of \(\Omega\) into a mesh composed of a tessellation of nonoverlapping triangular elements \(\kappa\) such that \(\mathcal{F}^{\,h}=\{\kappa\}\) and \(\overline{\Omega}=\cup_{\kappa\in\mathcal{F}^{\,h}}\overline{\kappa}\). Each element has boundary \(\partial\kappa\) with outward pointing unit normal vector \(\mathbf{n}_{\kappa}\). The interior facets of the mesh are defined by \(\Gamma_{I}=\cup_{\kappa\in\mathcal{F}^{\,h}}\partial\kappa\setminus\partial\Omega\), the exterior Dirichlet facets by \(\Gamma_{E}=\cup_{\kappa\in\mathcal{F}^{\,h}}\partial\kappa\cap\partial\Omega_{D}\) and the mesh skeleton by \(\Gamma=\Gamma_{I}\cup\Gamma_{E}\). We write \(\nabla_{h}\) and \(\varepsilon_{h}\) to denote the elementwise application of the gradient and symmetric gradient operators, respectively. Given two neighboring elements \(\kappa^{+}\) and \(\kappa^{-}\) and the smooth functions \(\phi^{\pm}:\kappa^{\pm}\mapsto\mathbb{R}\) and \(\mathbf{v}^{\pm}:\kappa^{\pm}\mapsto\mathbb{R}^{2}\), we define on their common face \(F=\partial\kappa^{+}\cap\partial\kappa^{-}\) the weighted average and jump operators \[\{\!\{\phi\}\!\}\big|_{F}=w^{+}\phi^{+}+w^{-}\phi^{-}\quad\text{and}\quad[\![\mathbf{v}]\!]_{\otimes}\big|_{F}=\mathbf{v}^{+}\otimes\mathbf{n}^{+}+\mathbf{v}^{-}\otimes\mathbf{n}^{-},\] where \(w^{+}+w^{-}=1\) are nonnegative weights to be specified below and \(\{\!\{\cdot\}\!\}\) is applied componentwise to vector- and tensor-valued functions. On an exterior facet \(F\subset\Gamma_{E}\) these operators reduce to \(\{\!\{\phi\}\!\}=\phi\) and \([\![\mathbf{v}]\!]_{\otimes}=\mathbf{v}\otimes\mathbf{n}\).
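That \(\mathbf{v}=\nabla\times\psi\) is automatically pointwise divergence free under the convention (10) can be checked symbolically. The following small sketch does so for an arbitrary smooth stream function, and also confirms the identity \(\nabla\times(\nabla\times\psi)=-\nabla^{2}\psi\) relating the formulation to the biharmonic problem.

```python
import sympy as sp

x, y = sp.symbols("x y")
psi = sp.Function("psi")(x, y)               # arbitrary smooth stream function

# Curl of a scalar in 2D, eq. (10): (d psi / dy, -d psi / dx)
v = sp.Matrix([sp.diff(psi, y), -sp.diff(psi, x)])

div_v = sp.diff(v[0], x) + sp.diff(v[1], y)  # psi_yx - psi_xy
print(sp.simplify(div_v))                    # 0: the velocity is solenoidal

# Scalar curl of the vector v, second part of eq. (10)
curl_v = sp.diff(v[1], x) - sp.diff(v[0], y)
print(sp.simplify(curl_v + sp.diff(psi, x, 2) + sp.diff(psi, y, 2)))  # 0, i.e. curl(curl psi) = -laplace(psi)
```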
_where \(\boldsymbol{\tau}\) is the unit vector which lies tangential to \(\partial\Omega\). We may exploit the strong imposition of \(\phi_{h}\big{|}_{\partial\Omega_{D}}=0\) by seeking \(\phi_{h}\in V_{0}^{h,p}\) such that_ \[B_{\text{IP}}(\boldsymbol{u}_{h},\boldsymbol{v}_{h})=l_{\text{IP}}(\boldsymbol{v}_{h})\quad\forall\psi_{h}\in V_{0}^{h,p}, \tag{23}\] _where \(\boldsymbol{u}_{h}=\nabla_{h}\times\phi_{h}\), \(\boldsymbol{v}_{h}=\nabla_{h}\times\psi_{h}\) and \(l_{\text{IP}}(\cdot)\) is the alternative linear functional_ \[l_{\text{IP}}(\boldsymbol{v})=l(\boldsymbol{v})+\langle\boldsymbol{u}_{D}\otimes\boldsymbol{n},\beta\boldsymbol{v}\otimes\boldsymbol{n}-2\mu\varepsilon_{h}(\boldsymbol{v})\rangle_{\partial\Omega_{D}}. \tag{24}\] _This scheme yields pointwise satisfaction of the boundary flux such that_ \[\|\nabla_{h}\phi_{h}\cdot\boldsymbol{\tau}\|_{L_{2}(\partial\Omega_{D})}=\|\nabla_{h}\times\phi_{h}\cdot\boldsymbol{n}\|_{L_{2}(\partial\Omega_{D})}=\|\boldsymbol{u}_{h}\cdot\boldsymbol{n}\|_{L_{2}(\partial\Omega_{D})}=0. \tag{25}\]

### \(C^{0}\)-RIPG formulation

The classical symmetric interior penalty Galerkin (SIPG) formulation sets \(w^{\pm}=\frac{1}{2}\). However, we seek to exploit the RIPG formulation introduced in Dong and Georgoulis (2022), which is based on a particular choice of the weights \(w^{\pm}\) and the interior penalty parameter \(\beta\). The key advantage of the RIPG formulation is that it remains parameter free in the sense that there is a known bound on \(\beta\) which ensures stability of the numerical scheme given appropriate choices of \(w^{\pm}\). We first state the values we employ in the context of our 2D numerical experiments when \(\mathcal{F}^{h}\) is composed of triangles. In the remainder of this section we rationalize this choice based on studying the stability of the RIPG scheme, along with discussing the advantages of this method when compared with a standard SIPG scheme. The RIPG scheme average operator weights and interior penalty parameter are defined, respectively, by \[w^{\pm}=\frac{\zeta^{\pm}}{\zeta^{+}+\zeta^{-}},\quad\beta\big{|}_{F}=\begin{cases}(\zeta^{+}+\zeta^{-})^{-2}&F\subset\Gamma_{I},\\ (\zeta^{+})^{-2}&F\subset\Gamma_{E},\end{cases} \tag{26}\] where \[\zeta^{\pm}\big{|}_{F}=\left(\delta\sqrt{\frac{3p(p-1)}{2}\frac{|F|}{|\kappa^{\pm}|}}\big{\|}2\mu\boldsymbol{n}^{\pm}\big{\|}_{L_{\infty}(F)}\big{\|}(2\mu)^{-\frac{1}{2}}\big{\|}_{L_{\infty}(\kappa^{\pm})}\right)^{-1}, \tag{27}\] where for a set \(\omega\subset\mathbb{R}^{n}\), \(n\geq 1\), we write \(|\omega|\) to denote the \(n\)-dimensional Hausdorff measure of \(\omega\) and \(\delta>\sqrt{2}\) is a constant, cf. below.

## 4 Stability and error analysis

The aim of this section is to study the stability of the \(C^{0}\)-RIPG formulation (16) and establish an optimal \(hp\)-error bound. For the purposes of the ensuing error analysis we introduce a suitable extension of the bilinear form \(B_{\text{IP}}(\cdot,\cdot)\). To this end, we write \(\Pi_{L_{2}}:[L_{2}(\Omega)]^{2\times 2}\mapsto[V^{h,p-2}]^{2\times 2}\) to denote the orthogonal \(L_{2}\)-projection operator.
With this notation for \(\phi,\ \psi\in V:=H^{2}(\Omega)+V_{0}^{h,p}\) we write \[\tilde{B}_{\text{IP}}(\boldsymbol{u},\boldsymbol{v})=B_{h}(\boldsymbol{u},\boldsymbol{v})-\langle[\![\boldsymbol{u}]\!]_{\otimes},\{\!\{2\mu\Pi_{L_{2}}(\varepsilon_{h}(\boldsymbol{v}))\}\!\}\rangle_{\Gamma}-\langle\{\!\{2\mu\Pi_{L_{2}}(\varepsilon_{h}(\boldsymbol{u}))\}\!\},[\![\boldsymbol{v}]\!]_{\otimes}\rangle_{\Gamma}+\langle\beta[\![\boldsymbol{u}]\!]_{\otimes},[\![\boldsymbol{v}]\!]_{\otimes}\rangle_{\Gamma}, \tag{28}\] where \(\boldsymbol{u}=\nabla_{h}\times\phi\) and \(\boldsymbol{v}=\nabla_{h}\times\psi\). Furthermore, we introduce the following DG norm \[\interleave\boldsymbol{v}\interleave_{\text{IP}}^{2}=\Big\|\sqrt{2\mu}\varepsilon_{h}(\boldsymbol{v})\Big\|_{L_{2}(\Omega)}^{2}+\Big\|\sqrt{\beta}[\![\boldsymbol{v}]\!]_{\otimes}\Big\|_{L_{2}(\Gamma)}^{2}. \tag{29}\]

### Coercivity and continuity

The aim of this section is to study the stability of the \(C^{0}\)-RIPG scheme (16). To this end, we first recall the following inverse inequality from Warburton and Hesthaven (2003).

**Lemma 4.1**.: _Let \(\kappa\in\mathcal{F}^{h}\) be a triangular element in 2D (\(d=2\) below) and let \(F\subset\partial\kappa\) denote one of its faces. Then for \(v\in\mathscr{P}_{p}(\kappa)\) the following inverse inequality holds_ \[\|v\|_{L_{2}(F)}\leq C_{\text{inv}}(\kappa,F,p)\|v\|_{L_{2}(\kappa)}, \tag{30}\] _where_ \[C_{\text{inv}}(\kappa,F,p)=\sqrt{\frac{(p+1)(p+d)}{d}\frac{|F|}{|\kappa|}}. \tag{31}\]

Equipped with Lemma 4.1 we now state the main result of this section.

**Lemma 4.2**.: _The bilinear form \(\tilde{B}_{\text{IP}}(\cdot,\cdot)\) is coercive for any \(\delta>\sqrt{2}\) and continuous over \(V\times V\); in particular we have that_ \[\tilde{B}_{\text{IP}}(\mathbf{v},\mathbf{v})\geq C_{\text{coerc}}\interleave\mathbf{v}\interleave_{\text{IP}}^{2}\qquad\forall\phi\in V,\] \[\tilde{B}_{\text{IP}}(\mathbf{v},\mathbf{w})\leq C_{\text{cont}}\interleave\mathbf{v}\interleave_{\text{IP}}\interleave\mathbf{w}\interleave_{\text{IP}}\qquad\forall\phi,\psi\in V,\] _where \(\mathbf{v}=\nabla_{h}\times\phi\), \(\mathbf{w}=\nabla_{h}\times\psi\), \(C_{\text{coerc}}=1-\sqrt{2}/\delta\) and \(C_{\text{cont}}=2(1+2/\delta^{2})\)._

Proof.: We first consider the coercivity of the bilinear form \(\tilde{B}_{\text{IP}}(\cdot,\cdot)\). For \(\phi\in V\), \(\mathbf{v}=\nabla_{h}\times\phi\), we note that \[\tilde{B}_{\text{IP}}(\mathbf{v},\mathbf{v})=\left\|\sqrt{2\mu}\varepsilon_{h}(\mathbf{v})\right\|_{L_{2}(\Omega)}^{2}+\left\|\sqrt{\beta}[\![\mathbf{v}]\!]_{\otimes}\right\|_{L_{2}(\Gamma)}^{2}-2\int_{\Gamma}\{\!\{2\mu\Pi_{L_{2}}(\varepsilon_{h}(\mathbf{v}))\}\!\}\cdot[\![\mathbf{v}]\!]_{\otimes}\,\mathrm{d}s. \tag{32}\] In order to determine the lower bound on \(\delta\), we must express the last term in equation (32) in terms of the first two.
Exploiting the inverse inequality stated in Lemma 4.1, together with the stability of the \(L_{2}\)-projection operator \(\Pi_{L_{2}}\), for interior faces we deduce that \[\left|2\int_{\Gamma_{I}}\{\!\{2\mu\Pi_{L_{2}}(\varepsilon_{h}(\mathbf{v}))\}\!\}\cdot[\![\mathbf{v}]\!]_{\otimes}\,\mathrm{d}s\right|\] \[\qquad\leq 2\sum_{F\in\Gamma_{I}}\int_{F}\left(\sum_{*\in\{+,-\}}w^{*}\|2\mu\mathbf{n}^{*}\|_{L_{\infty}(F)}\left|\Pi_{L_{2}}(\varepsilon_{h}(\mathbf{v}^{*}))\right|\right)\left|[\![\mathbf{v}]\!]_{\otimes}\right|\mathrm{d}s\] \[\qquad\leq 2\sum_{F\in\Gamma_{I}}\sum_{*\in\{+,-\}}\alpha^{*}w^{*}\Big\|\sqrt{2\mu}\varepsilon_{h}(\mathbf{v}^{*})\Big\|_{L_{2}(\kappa^{*})}\big\|[\![\mathbf{v}]\!]_{\otimes}\big\|_{L_{2}(F)}\,, \tag{33}\] where \[\alpha^{*}=C_{\text{inv}}(\kappa^{*},F,p_{\kappa^{*}}-2)\|2\mu\mathbf{n}^{*}\|_{L_{\infty}(F)}\Big\|(2\mu)^{-\frac{1}{2}}\Big\|_{L_{\infty}(\kappa^{*})}\,. \tag{34}\] In order to proceed, following (26) and (27), we select \[w^{*}=\frac{\zeta^{*}}{\zeta^{+}+\zeta^{-}},\quad\zeta^{*}=\frac{1}{\delta\sqrt{m_{\kappa^{*}}}\,\alpha^{*}}, \tag{35}\] respectively, where \(m_{\kappa^{*}}\) is the number of facets belonging to element \(\kappa^{*}\), i.e., here \(m_{\kappa^{*}}=3\), and \(\delta\) is a positive constant to be determined. Thereby, we deduce that \[\left|2\int_{\Gamma_{I}}\{\!\{2\mu\Pi_{L_{2}}(\varepsilon_{h}(\boldsymbol{v}))\}\!\}\cdot[\![\boldsymbol{v}]\!]_{\otimes}\,\mathrm{d}s\right|\leq 2\sum_{F\in\Gamma_{I}}\sum_{*\in\{+,-\}}\frac{1}{\delta\sqrt{m_{\kappa^{*}}}}\frac{1}{\zeta^{+}+\zeta^{-}}\Big\|\sqrt{2\mu}\varepsilon_{h}(\boldsymbol{v}^{*})\Big\|_{L_{2}(\kappa^{*})}\big\|[\![\boldsymbol{v}]\!]_{\otimes}\big\|_{L_{2}(F)}.\] Recalling that \(\beta|_{F}=(\zeta^{+}+\zeta^{-})^{-2}\) for \(F\subset\Gamma_{I}\), an application of the Cauchy–Schwarz and Young inequalities, noting that each element contributes to at most \(m_{\kappa}\) facets, then yields \[\left|2\int_{\Gamma_{I}}\{\!\{2\mu\Pi_{L_{2}}(\varepsilon_{h}(\boldsymbol{v}))\}\!\}\cdot[\![\boldsymbol{v}]\!]_{\otimes}\,\mathrm{d}s\right|\leq\frac{\sqrt{2}}{\delta}\left(\Big\|\sqrt{2\mu}\varepsilon_{h}(\boldsymbol{v})\Big\|_{L_{2}(\Omega)}^{2}+\Big\|\sqrt{\beta}[\![\boldsymbol{v}]\!]_{\otimes}\Big\|_{L_{2}(\Gamma_{I})}^{2}\right),\] and the facets in \(\Gamma_{E}\) are treated in an analogous fashion. Hence \(\tilde{B}_{\text{IP}}(\mathbf{v},\mathbf{v})\geq(1-\sqrt{2}/\delta)\interleave\mathbf{v}\interleave_{\text{IP}}^{2}\), which gives the coercivity bound for \(\delta>\sqrt{2}\). The continuity bound follows from (33) upon application of the Cauchy–Schwarz inequality, giving \(C_{\text{cont}}=2(1+2/\delta^{2})\).

**Theorem 4.3**.: _Given that \(\mathcal{T}^{h}\) is a quasi-uniform triangular mesh, write \(\mathbf{u}=\nabla\times\phi\) and \(\mathbf{u}_{h}=\nabla_{h}\times\phi_{h}\) to denote the solutions to (11) and (16), respectively.
Then assuming that \(\phi\in H^{k}(\Omega)\), \(k\geq 3\), we have that_ \[\interleave\mathbf{u}-\mathbf{u}_{h}\interleave_{\text{IP}}\leq C\frac{h^{\mu-2}}{p^{k -2}}\|\phi\|_{H^{k}(\Omega)},\] _where \(\mu=\min(p+1,k)\) and \(C\) is a positive constant, which is independent of \(h\) and \(p\)._ Proof.: For \(\mathbf{u}=\nabla\times\phi\) and \(\mathbf{u}_{h}=\nabla_{h}\times\phi_{h}\) we recall Strang's lemma \[\interleave\mathbf{u}-\mathbf{u}_{h}\interleave_{\text{IP}}\leq\left(1+ \frac{C_{\text{cont}}}{C_{\text{coer}}}\right)\inf_{\mathbf{v}_{h}=\nabla_{h} \times\psi_{h},\psi_{h}\in V_{0}^{h,p}}\interleave\mathbf{u}-\mathbf{v}_{h}\interleave _{\text{IP}}\] \[\qquad\qquad\qquad\qquad+\frac{1}{C_{\text{coer}}}\sup_{\mathbf{w}_{ h}=\nabla_{h}\times\varphi_{h},\varphi_{h}\in V_{0}^{h,p}\setminus\{0\}}\frac{| \tilde{B}_{\text{IP}}(\mathbf{u},\mathbf{w}_{h})-\ell(\mathbf{w}_{h})|}{\interleave\mathbf{w} _{h}\interleave_{\text{IP}}}. \tag{41}\] Employing the approximation result (40), together with the \(H^{2}\)-conformity of \(\mathpzc{g}\), the first term on the right-hand side of (41) can be bounded as follows \[\inf_{\mathbf{v}_{h}=\nabla_{h}\times\psi_{h},\psi_{h}\in V_{0}^{h,p }}\interleave\mathbf{u}-\mathbf{v}_{h}\interleave_{\text{IP}} \leq\interleave\nabla\times\phi-\nabla_{h}\times(\mathpzc{g} \phi)\interleave_{\text{IP}}\] \[= \Big{\|}\sqrt{2\mu}\varepsilon_{h}(\nabla\times\phi-\nabla_{h} \times(\mathpzc{g}\phi))\Big{\|}_{L_{2}(\Omega)}\leq C\frac{h^{\mu-2}}{p^{k-2 }}\|\phi\|_{H^{k}(\Omega)},\] as required. For the second (consistency) term, noting that \(\phi\in H^{k}(\Omega)\), \(k\geq 3\), upon application of integration by parts and the Cauchy-Schwarz inequality, we deduce that \[\tilde{B}_{\text{IP}}(\mathbf{u},\mathbf{w}_{h})-\ell(\mathbf{w}_{h}) =\langle\{\!\{2\mu(\varepsilon_{h}(\mathbf{u})-\Pi_{L_{2}} \varepsilon_{h}(\mathbf{u}))\}\!\},[\![\mathbf{w}_{h}]\!]_{\otimes}\rangle_{\Gamma}\] \[\leq\|\beta^{-1/2}\{\!\{2\mu(\varepsilon_{h}(\mathbf{u})-\Pi_{L_{2}} \varepsilon_{h}(\mathbf{u}))\}\!\}\|_{L_{2}(\Gamma)}\|\sqrt{\beta}[\![\mathbf{w}_{h}]\!] _{\otimes}\|_{L_{2}(\Gamma)}.\] We now recall the following approximation result from Chernov (2012) for the \(L_{2}\)-projector. With a slight abuse of notation, we also write \(\Pi_{L_{2}}:L_{2}(\kappa)\mapsto\mathscr{P}_{p}(\kappa)\) to denote the elementwise (scalar-valued) \(L_{2}\)-projector. With this notation, given a triangular element \(\kappa\in\mathcal{T}^{h}\), let \(F\subset\partial\kappa\) denote one of its faces. Then, for \(v\in H^{k}(\kappa)\), \(k\geq 1\), the following bound holds \[\|v-\Pi_{L_{2}}v\|_{L_{2}(F)}\leq C\frac{h^{\mu-1/2}}{p^{k-1/2}}\|v\|_{H^{k}( \kappa)}, \tag{42}\] where \(\mu=\min(p+1,k)\) and \(C\) is a positive constant independent of \(h\), \(p\), \(v\), and \(\Pi_{L_{2}}v\). Equipped with (42), the definition of \(\beta\), cf. (26) and the DG norm (29), we get \[\tilde{B}_{\text{IP}}(\mathbf{u},\mathbf{w}_{h})-\ell(\mathbf{w}_{h})\leq C\frac{h^{\mu-2} }{p^{k-3/2}}\|\phi\|_{H^{k}(\Omega)}\interleave w_{h}\interleave_{\text{IP}}.\] Collecting the above bounds gives the desired result. **Remark 4.4**.: _We remark that the bound derived in Theorem 4.3 is optimal in both the mesh element size \(h\) and the polynomial degree \(p\); this is in agreement to the analogous bound derived for the standard \(C^{0}\)-IPG scheme in Dong and Mascotto (2023) for the Dirichlet problem. 
In the case when inhomogeneous Dirichlet boundary conditions are employed, then as in the case of second-order linear elliptic partial differential equations \(p\)-optimality is no longer possible, cf. Georgoulis et al. (2009)._

## 5 Numerical experiments

In this section we present a series of numerical experiments to investigate the practical performance of the proposed \(C^{0}\)-RIPG scheme. We note that all of the computational examples have been implemented using the components of the FEniCS project (Logg et al., 2012). We highlight the Unified Form Language (UFL) (Alnaes et al., 2014) in particular as it facilitates the straightforward specification of the verbose facet terms arising in the \(C^{0}\)-RIPG formulation (16). Initial prototypes of the \(C^{0}\)-IPG and SIPG formulations were developed with the principles outlined in Houston and Sime (2018). The code used to generate the results presented here is available in Sime (2023). Our experiments are constructed in two settings to test the numerical scheme. Firstly, we present a numerical example with a known analytical solution in order to validate the optimality of the a priori error bound derived in Theorem 4.3. Secondly, we study the performance of the proposed \(C^{0}\)-RIPG scheme in the practical setting of reproducing benchmarks in mantle convection cell models. We examine the benefits of the \(C^{0}\)-RIPG scheme's pointwise divergence free velocity approximation when coupled with advection of a scalar (temperature) field, in addition to its performance with viscosity models which vary over many orders of magnitude.

### Manufactured solution

In this section, we let \(\Omega=(-1,1)^{2}\) be a square. We subdivide \(\Omega\) into a hierarchy of meshes composed of \(N\times N\) shape regular quadrilaterals each bisected into triangle elements, where \(N\in\{8,16,32,64,128\}\) is the number of quadrilaterals dividing each orthogonal direction of \(\Omega\). We select the analytical solution and viscosity to be \[\phi=\pi^{-1}\sin(\pi x)\sin(\pi y)\text{ and }\mu=1+\sin^{2}(\pi x)\sin^{2}(\pi y), \tag{43}\] respectively, such that \[\mathbf{u}=\begin{pmatrix}\sin(\pi x)\cos(\pi y)\\ -\cos(\pi x)\sin(\pi y)\end{pmatrix}, \tag{44}\] which then determines \(\mathbf{f}\) according to equation (1). We compute approximations of \(\phi\) using the \(C^{0}\)-RIPG formulation and examine the rate at which the numerical approximation converges to the analytical solution. Furthermore, we examine the influence of the parameter \(\delta\) on the stability of the numerical scheme. Here, the approximation error is measured in the DG-norm (29), as well as the following norms of interest: \[\left\|\phi-\phi_{h}\right\|_{L_{2}(\mathscr{T}^{h})}^{2} =\sum_{\kappa\in\mathscr{T}^{h}}\int_{\kappa}(\phi-\phi_{h})^{2}\mathrm{d}\mathbf{x}, \tag{45}\] \[\left\|\mathbf{u}-\mathbf{u}_{h}\right\|_{L_{2}(\mathscr{T}^{h})}^{2} =\sum_{\kappa\in\mathscr{T}^{h}}\int_{\kappa}(\mathbf{u}-\nabla\times\phi_{h})^{2}\mathrm{d}\mathbf{x},\] (46) \[\left|\mathbf{u}-\mathbf{u}_{h}\right|_{H^{1}(\mathscr{T}^{h})}^{2} =\sum_{\kappa\in\mathscr{T}^{h}}\int_{\kappa}\left(\nabla\mathbf{u}-\nabla(\nabla\times\phi_{h})\right)^{2}\mathrm{d}\mathbf{x}. \tag{47}\] In Figure 2 we plot the error measured in the above norms against the mesh element size \(h\) on the aforementioned sequence of uniform (structured) triangular meshes for \(p=2,3,4\).
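The manufactured solution (43)–(44) can be verified symbolically before running the convergence study. The sketch below checks that \(\mathbf{u}=\nabla\times\phi\) is pointwise divergence free and assembles a forcing term consistent with equation (1); the choice \(P=0\) made here is our own illustrative assumption, since (43) does not prescribe a pressure.

```python
import sympy as sp

x, y = sp.symbols("x y")
phi = sp.sin(sp.pi * x) * sp.sin(sp.pi * y) / sp.pi        # stream function, eq. (43)
mu = 1 + sp.sin(sp.pi * x) ** 2 * sp.sin(sp.pi * y) ** 2   # viscosity, eq. (43)

u = sp.Matrix([sp.diff(phi, y), -sp.diff(phi, x)])         # velocity via eq. (10)
assert sp.simplify(sp.diff(u[0], x) + sp.diff(u[1], y)) == 0  # div u = 0 pointwise

grad_u = u.jacobian([x, y])
eps = (grad_u + grad_u.T) / 2                              # symmetric gradient
stress = 2 * mu * eps

# Forcing from eq. (1) under the assumption P = 0: f = -div(2 mu eps(u))
f = sp.Matrix([-sp.diff(stress[0, 0], x) - sp.diff(stress[0, 1], y),
               -sp.diff(stress[1, 0], x) - sp.diff(stress[1, 1], y)])
print(u.applyfunc(sp.simplify))   # recovers eq. (44)
print(f.applyfunc(sp.simplify))
```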
Figure 2: Measured error norms of the manufactured solution problem.

Here, we observe that for each fixed \(p\), the DG norm of the error tends to zero at the optimal rate of \(\Theta(h^{p-1})\) as \(h\) tends to zero; this is in full agreement with the predicted rate given in Theorem 4.3. Analogous rates are observed for \(\left|\mathbf{u}-\mathbf{u}_{h}\right|_{H^{1}(\mathscr{T}^{h})}^{2}\) as expected. The \(L_{2}(\Omega)\) norm of the error in the approximation to the velocity is observed to behave like \(\Theta(h^{p})\) as \(h\) tends to zero, which is again as expected. However, we observe that \(\left\|\phi-\phi_{h}\right\|_{L_{2}(\mathscr{T}^{h})}^{2}=\Theta(h^{2})\) for \(p=2\), while \(\left\|\phi-\phi_{h}\right\|_{L_{2}(\mathscr{T}^{h})}^{2}=\Theta(h^{p+1})\) for \(p=3,4\), as \(h\) tends to zero, which indicates suboptimality in the approximation of the stream function when the error is measured in the \(L_{2}(\Omega)\) norm and the lowest order approximation is employed.

In Figure 3 we study the influence of the parameter \(\delta\) on the stability of the \(C^{0}\)-RIPG scheme and the approximation error. As indicated in Section 4.1, we observe that setting the penalty parameter \(\delta>\sqrt{2}\) guarantees stability of the underlying method. We also see that smaller values of \(\delta\) may be employed in practice, which leads to a slight improvement of the error measured in the above norms. However, as expected, if \(\delta\) is reduced too far then the stability of the \(C^{0}\)-RIPG scheme is no longer guaranteed.

Figure 3: Measured error norms of the manufactured solution problem with varying parameter \(\delta\). Here we ensure the \(C^{0}\)-RIPG scheme is stable with \(\delta>\sqrt{2}\) as predicted in Section 4.1.

### Buoyancy-driven flow

To examine the practicality of the \(C^{0}\)-RIPG method we reproduce numerical benchmarks in the context of geophysical flow driven by thermal buoyancy. These benchmarks further employ temperature and strain-rate dependent viscosity models as exhibited in Blankenbach et al. (1989) and Tosi et al. (2015). The benchmarks are posed on the unit square \(\Omega=(0,1)^{2}\), in which we seek the velocity and pressure which satisfy equations (1) and (2) subject to the free slip boundary conditions \[\mathbf{u}\cdot\mathbf{n} =0\text{ on }\partial\Omega, \tag{48}\] \[\left(2\mu\varepsilon(\mathbf{u})\cdot\mathbf{n}-P\mathbf{n}\right)\cdot\mathbf{\tau} =0\text{ on }\partial\Omega. \tag{49}\] For thermally driven buoyancy, we set \[\mathbf{f}=\operatorname{Ra}T\,\hat{\mathbf{y}}, \tag{50}\] with viscosity model \[\mu =\begin{cases}\mu_{\text{lin}}&\sigma_{Y}=0,\\ 2(\mu_{\text{lin}}^{-1}+\mu_{\text{plast}}^{-1})^{-1}&\sigma_{Y}>0,\end{cases} \tag{51}\] \[\mu_{\text{lin}} =\exp\left(-\log(\Delta\mu_{T})T+\log(\Delta\mu_{z})z\right),\] (52) \[\mu_{\text{plast}} =10^{-3}+\frac{\sigma_{Y}}{\sqrt{\varepsilon(\mathbf{u}):\varepsilon(\mathbf{u})}}. \tag{53}\] Here, \(z=1-y\) is a measure of depth, \(\operatorname{Ra}\) is the constant Rayleigh number and \(\Delta\mu_{T}\), \(\Delta\mu_{z}\) and \(\sigma_{Y}\) are constant viscosity model parameters to be defined in each benchmark case. The temperature field \(T:\Omega\mapsto\mathbb{R}^{+}\cup\{0\}\) satisfies the following advection-diffusion problem \[\mathbf{u}\cdot\nabla T-\nabla^{2}T =0 \text{in }\Omega, \tag{54}\] \[T =0 \text{on }[0,1]\times\{0\},\] (55) \[T =1 \text{on }[0,1]\times\{1\},\] (56) \[\nabla T\cdot\mathbf{n} =0 \text{on }\{0,1\}\times[0,1].
\tag{57}\] We discretize equations (54) to (57) using standard \(C^{0}\) finite elements, seeking \((\phi_{h},T_{h})\in V_{0}^{h,p}\times S^{h,p}\) such that equation (16) holds simultaneously with \[(\mathbf{u}_{h}\cdot\nabla T_{h},s_{h})+(\nabla T_{h},\nabla s_{h})=0 \tag{58}\] for all \((\psi_{h},s_{h})\in V_{0}^{h,p}\times S_{0}^{h,p}\), where \(S^{h,p}=\{v\in H^{1}(\Omega):\left.v\right|_{\kappa}\in\mathscr{P}_{p}(\kappa)\,\left.\forall\kappa\in\mathscr{T}^{h},\right.\)\(\left.v\right|_{[0,1]\times\{0\}}=0,\)\(\left.v\right|_{[0,1]\times\{1\}}=1\}\) and \(S_{0}^{h,p}=\{v\in H^{1}(\Omega):\left.v\right|_{\kappa}\in\mathscr{P}_{p}(\kappa)\,\left.\forall\kappa\in\mathscr{T}^{h},\right.\)\(\left.v\right|_{[0,1]\times\{0,1\}}=0\}\). In Table 1 we define a number of benchmark cases with corresponding Rayleigh numbers and viscosity models. The functionals of interest measured for comparison with the corresponding benchmark reports are as follows \[\text{Nu} =\int_{0}^{1}\left.(\nabla T_{h}\cdot\mathbf{n})\right|_{y=1}\,\mathrm{d}x,\qquad u_{\text{rms}} =\sqrt{\int_{\Omega}\mathbf{u}_{h}^{2}\,\mathrm{d}\mathbf{x}},\] \[\left\langle W\right\rangle =\int_{\Omega}T_{h}\mathbf{u}_{h}\cdot\hat{\mathbf{y}}\,\mathrm{d}\mathbf{x},\qquad \left\langle\Phi\right\rangle =\int_{\Omega}2\mu\varepsilon_{h}(\mathbf{u}_{h}):\varepsilon_{h}(\mathbf{u}_{h})\,\mathrm{d}\mathbf{x},\qquad \Delta =\frac{\left|\left\langle W\right\rangle-\frac{\left\langle\Phi\right\rangle}{\text{Ra}}\right|}{\max\left(\left\langle W\right\rangle,\frac{\left\langle\Phi\right\rangle}{\text{Ra}}\right)},\] where Nu is the Nusselt number at the top boundary, \(u_{\text{rms}}\) is the root mean square speed, \(\left\langle W\right\rangle\) and \(\left\langle\Phi\right\rangle\) are the average rates of work done against gravity and viscous dissipation, respectively, and \(\Delta\) is a measure of thermal energy conservation. In Table 2 we tabulate the computed functional values from our implementation on a sequence of uniform triangular meshes for \(p=2,3\). In Figure 4 we compute errors in Nu and \(u_{\text{rms}}\) relative to the given reference values for the benchmark cases shown in Table 1. Here, the relative error measurement of computed quantity \(\chi\) compared with reference value \(\chi_{\text{ref}}\) is given by \[\epsilon_{\text{ref}}(\chi)=\frac{\left|\chi-\chi_{\text{ref}}\right|}{\chi_{\text{ref}}}. \tag{59}\] For the case BB1a (isoviscous model) we see that, as the mesh is uniformly refined, the errors in the computed Nusselt number and root-mean-square speed, relative to the reference values, converge to zero at the rates \(\Theta(h^{2(p-1)})\) and \(\Theta(h^{p})\) as \(h\) tends to zero, respectively, \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Case & Ra & \(\Delta\mu_{T}\) & \(\Delta\mu_{z}\) & \(\sigma_{Y}\) & Nu\({}_{\text{ref}}\) & \(u_{\text{rms,ref}}\) \\ \hline BB1a & \(10^{4}\) & 1 & 1 & 0 & \(4.884\,409\) & \(42.864\,947\) \\ BB2a & \(10^{4}\) & \(10^{4}\) & 1 & 0 & \(10.065\,899\) & \(480.433\,425\) \\ T2 & \(10^{2}\) & \(10^{5}\) & 1 & 1 & \(8.559\,459\) & \(140.775\,535\) \\ T4 & \(10^{2}\) & \(10^{5}\) & 10 & 1 & \(6.615\,419\) & \(79.088\,809\) \\ \hline \hline \end{tabular} \end{table} Table 1: Benchmark cases exhibited in Blankenbach et al. (1989) and Tosi et al. (2015) (BB and T prefixes, respectively) and corresponding reference values selected from those works.
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Case & \(p\) & \(N\) & \multicolumn{1}{c}{Nu} & \(u_{\rm rms}\) & \(\langle W\rangle\) & \(\langle\Phi\rangle/\)Ra & \(\Delta\) \\ \hline BB1a & 2 & 8 & 5.159 790 & 41.877 692 & 3.812 435 & 3.721 304 & 2.390 357 \(\times 10^{-2}\) \\ & & 16 & 5.015 557 & 42.613 520 & 3.866 134 & 3.841 287 & 6.426 728 \(\times 10^{-3}\) \\ & & 32 & 4.923 947 & 42.801 576 & 3.879 774 & 3.873 409 & 1.640 653 \(\times 10^{-3}\) \\ & & 64 & 4.894 819 & 42.849 068 & 3.883 246 & 3.881 644 & 4.124 164 \(\times 10^{-4}\) \\ & & 128 & 4.887 047 & 42.860 973 & 3.884 118 & 3.883 717 & 1.032 472 \(\times 10^{-4}\) \\ \cline{2-7} & 3 & 8 & 4.997 386 & 42.859 417 & 3.883 614 & 3.894 593 & 2.819 064 \(\times 10^{-3}\) \\ & & 16 & 4.894 932 & 42.864 517 & 3.884 342 & 3.887 360 & 7.764 650 \(\times 10^{-4}\) \\ & & 32 & 4.885 120 & 42.864 918 & 3.884 405 & 3.885 177 & 1.986 228 \(\times 10^{-4}\) \\ & & 64 & 4.884 454 & 42.864 943 & 3.884 409 & 3.884 603 & 4.993 627 \(\times 10^{-5}\) \\ & & 128 & 4.884 412 & 42.864 945 & 3.884 409 & 3.884 458 & 1.250 158 \(\times 10^{-5}\) \\ \hline BB2a & 2 & 8 & 11.569 494 & 494.791 616 & 10.639 110 & 9.866 476 & 7.262 202 \(\times 10^{-2}\) \\ & & 16 & 10.731 001 & 487.372 110 & 9.151 660 & 8.917 624 & 2.557 305 \(\times 10^{-2}\) \\ & & 32 & 10.306 838 & 483.106 619 & 9.101 327 & 9.018 523 & 9.098 057 \(\times 10^{-3}\) \\ & & 64 & 10.129 296 & 481.246 438 & 9.072 895 & 9.046 327 & 2.928 342 \(\times 10^{-3}\) \\ & & 128 & 10.081 744 & 480.619 011 & 9.067 279 & 9.059 942 & 8.091 935 \(\times 10^{-4}\) \\ \cline{2-7} & 3 & 8 & 11.039 987 & 481.181 654 & 9.217 020 & 9.258 907 & 4.524 017 \(\times 10^{-3}\) \\ & & 16 & 10.225 818 & 481.626 650 & 9.107 274 & 9.123 698 & 1.800 123 \(\times 10^{-3}\) \\ & & 32 & 10.073 773 & 480.583 438 & 9.064 849 & 9.072 503 & 8.436 236 \(\times 10^{-4}\) \\ & & 64 & 10.066 181 & 480.403 057 & 9.065 588 & 9.067 885 & 2.533 984 \(\times 10^{-4}\) \\ & & 128 & 10.065 910 & 480.427 513 & 9.065 872 & 9.066 485 & 6.764 788 \(\times 10^{-5}\) \\ \hline T2 & 2 & 8 & 8.670 484 & 130.718 863 & 7.175 272 & 7.024 771 & 2.097 492 \(\times 10^{-2}\) \\ & & 16 & 8.852 719 & 135.810 284 & 7.405 343 & 7.330 546 & 1.010 052 \(\times 10^{-2}\) \\ & & 32 & 8.677 987 & 139.186 405 & 7.514 772 & 7.477 494 & 4.960 578 \(\times 10^{-3}\) \\ & & 64 & 8.587 873 & 140.198 404 & 7.542 252 & 7.528 339 & 1.844 703 \(\times 10^{-3}\) \\ & & 128 & 8.565 231 & 140.580 896 & 7.553 466 & 7.548 985 & 5.933 286 \(\times 10^{-4}\) \\ \cline{2-7} & 3 & 8 & 9.044 719 & 137.032 238 & 7.479 251 & 7.480 751 & 2.004 897 \(\times 10^{-4}\) \\ & & 16 & 8.622 321 & 140.017 268 & 7.533 352 & 7.536 932 & 4.750 446 \(\times 10^{-4}\) \\ & & 32 & 8.557 657 & 140.503 070 & 7.550 360 & 7.552 252 & 2.506 208 \(\times 10^{-4}\) \\ & & 64 & 8.557 513 & 140.702 836 & 7.557 034 & 7.557 831 & 1.054 292 \(\times 10^{-4}\) \\ & & 128 & 8.559 291 & 140.769 772 & 7.559 261 & 7.559 543 & 3.730 833 \(\times 10^{-5}\) \\ \hline T4 & 2 & 8 & 6.964 560 & 76.225 863 & 5.486 364 & 5.347 143 & 2.537 569 \(\times 10^{-2}\) \\ & & 16 & 6.879 954 & 78.177 425 & 5.583 371 & 5.521 602 & 1.106 295 \(\times 10^{-2}\) \\ & & 32 & 6.702 668 & 78.807 864 & 5.604 996 & 5.583 245 & 3.880 586 \(\times 10^{-3}\) \\ & & 64 & 6.637 644 & 78.990 505 & 5.611 148 & 5.604 410 & 1.200 818 \(\times 10^{-3}\) \\ & & 128 & 6.620 866 & 79.060 030 & 5.614 094 & 5.612 224 & 3.329 760 \(\times 10^{-4}\) \\ \cline{2-7} & 3 & 8 & 6.931 491 & 78.787 124 & 5.613 935 & 5.617 167 & 5.754 950 
\(\times 10^{-4}\) \\ & & 16 & 6.651 302 & 79.002 440 & 5.610 692 & 5.614 473 & 6.734 658 \(\times 10^{-4}\) \\ & & 32 & 6.615 929 & 79.045 743 & 5.612 879 & 5.614 332 & 2.588 058 \(\times 10^{-4}\) \\ & & 64 & 6.615 222 & 79.082 175 & 5.615 025 & 5.615 510 & 8.630 228 \(\times 10^{-5}\) \\ & & 128 & 6.615 397 & 79.088 264 & 5.615 384 & 5 \\ \hline \hline \end{tabular} \end{table} Table 2: Computed values of the functionals of interest for the benchmark cases of Table 1, obtained with the \(C^{0}\)-RIPG scheme on a sequence of uniform triangular meshes for \(p=2,3\).

Figure 4: Measured functionals' errors relative to reference values selected from benchmark problems exhibited in Blankenbach et al. (1989); Tosi et al. (2015). We further compare the \(C^{0}\)-RIPG formulation with a standard TH discretization.

for \(p=2,3\). However, we observe that when nonlinear viscosity models are employed, as in cases BB2a, T2 and T4, the order of convergence of these quantities of interest may be reduced; this is particularly evident for the 'highly nonlinear' cases T2 and T4 (cf. Figure 5). Additionally, we compare the performance of the proposed \(C^{0}\)-RIPG method with the TH discretization scheme, which highlights the impact of employing an exactly divergence free velocity field approximation. When examining equivalent polynomial degrees in the underlying finite element spaces, we see that the \(C^{0}\)-RIPG formulation yields a more precise Nusselt number approximation for fewer than half the number of degrees of freedom due to its solenoidal velocity field. This result is attained despite the fact that the root-mean-square velocity approximation of the \(C^{0}\)-RIPG scheme is less accurate than the corresponding quantity computed with the TH method for an equivalent polynomial degree. For the \(C^{0}\)-RIPG method to yield a root-mean-square velocity approximation roughly equivalent to that of the TH method, a polynomial degree one order higher should be employed. Finally, we examine the balance of energy encapsulated by the functional \(\Delta\) as tabulated in Table 2. Here, we see that energy is not exactly conserved. However, the values of \(\Delta\) computed using the \(C^{0}\)-RIPG scheme are comparable with the TH scheme results, \(\Delta<0.01\%\), reported in Tosi et al. (2015).

Figure 5: Viscosity fields as computed from the \(C^{0}\)-RIPG discretization with \(p=3\) of the benchmark problems tabulated in Table 1. These fields are interpolated in the space \(V_{\rm DG}^{h,p=1}\) for visualization.

## 6 Concluding remarks

In this article we have presented the \(C^{0}\)-RIPG discretization of the stream function formulation of the Stokes system with varying viscosity. We have shown that this scheme is stable provided the interior penalty parameter is selected so that \(\delta>\sqrt{2}\). Moreover, our analysis and numerical experiments demonstrate that the scheme converges optimally with respect to uniform mesh refinement when the error is measured in an appropriate DG norm. Furthermore, the discretization provides an exactly divergence free approximation of the velocity which, in the context of numerical benchmarks of mantle convection simulations, yields a more precise approximation of the advected temperature field. Our implementation is available in Sime (2023) for use with the FEniCSx library.

## 7 Acknowledgments

This research was partly supported by NSF-EAR grant 2021027. N. Sime gratefully acknowledges the support of the Carnegie Institution for Science President's Fellowship.
## Appendix A \(C^{0}\)-RIPG formulation derivation In this appendix we outline the derivation of the proposed \(C^{0}\)-RIPG scheme (16); to this end we first develop the SIPG DG discretization of the problem in equations (1) to (4), then consider the restriction to the \(H^{1}\)-conforming FE space \(V^{h,p}\). ### Flux formulation For completeness, we consider the following generalized stream function formulation of equation (1) with corresponding boundary conditions: \[\nabla\times\left(-\nabla\cdot\left(2\mu\varepsilon(\nabla\times \phi)-P\mathbb{I}\right)\right) =\nabla\times\mathbf{f} \text{in }\Omega, \tag{60}\] \[\phi =\phi_{D} \text{on }\partial\Omega_{\phi,D}\] (61) \[\nabla\times\phi =\mathbf{u}_{D} \text{on }\partial\Omega_{D},\] (62) \[\left(2\mu\varepsilon(\nabla\times\phi)-P\mathbb{I}\right)\cdot \mathbf{n} =\mathbf{g}_{N} \text{on }\partial\Omega_{N},\] (63) \[\left(-\nabla\cdot\left(2\mu\varepsilon(\nabla\times\phi)-P \mathbb{I}\right)\right)\times\mathbf{n} =\mathbf{f}\times\mathbf{n} \text{on }\partial\Omega_{\phi,N}. \tag{64}\] Here, \(\mathbb{I}\) is the identity tensor and the exterior boundary is split into components for the stream function and velocity boundary conditions such that \(\partial\Omega=\partial\Omega_{\phi,D}\cup\partial\Omega_{\phi,N}\) and \(\partial\Omega=\partial\Omega_{D}\cup\partial\Omega_{N}\) where the boundary components do not overlap, \(\partial\Omega_{\phi,D}\cap\partial\Omega_{\phi,N}=\emptyset\) and \(\partial\Omega_{D}\cap\partial\Omega_{N}=\emptyset\). We define the rank 4 tensor \(G=\partial(2\mu\varepsilon(\mathbf{u}))/\partial(\nabla\mathbf{u})\); given \(\sigma\in\mathbb{R}^{2\times 2}\), its product and transpose product are defined, respectively, by \[G\sigma=G_{ijkl}\sigma_{ij}\text{ and }G^{\top}\sigma=G_{ijkl}\sigma_{kl}, \tag{65}\] such that \(2\mu\varepsilon(\mathbf{u})=G\nabla\mathbf{u}\). Let \(F_{1}:\Omega\mapsto\mathbb{R}^{2}\), \(F_{2}:\Omega\mapsto\mathbb{R}^{2\times 2}\) and \(F_{3}:\Omega\mapsto\mathbb{R}^{2}\) such that we recast equation (60) in terms of four first order equations \[\nabla\times\mathbf{f}=\nabla\times F_{1},\quad F_{1}=-\nabla\cdot F_{2},\quad F_ {2}=G\nabla F_{3}-P\mathbb{I},\quad F_{3}=\nabla\times\phi, \tag{66}\] where it is evident that \[F_{1}(\phi)=-\nabla\cdot\left(2\mu\varepsilon(\nabla\times\phi)-P\mathbb{I}\right) \quad\text{and}\quad F_{2}(\phi)=2\mu\varepsilon(\nabla\times\phi)-P\mathbb{I}. \tag{67}\] We multiply each equation in (66) by \(v_{1}:\Omega\mapsto\mathbb{R}\), \(\mathbf{v}_{2}:\Omega\mapsto\mathbb{R}^{2}\), \(v_{3}:\Omega\mapsto\mathbb{R}^{2\times 2}\) and \(\mathbf{v}_{4}:\Omega\mapsto\mathbb{R}^{2}\), respectively, and integrate over an element \(\kappa\in\mathcal{T}^{h}\) to give \[(\nabla\times\mathbf{f},v_{1})_{\kappa} =(\nabla\times F_{1},v_{1})_{\kappa}, \tag{68}\] \[(F_{1},\mathbf{v}_{2})_{\kappa} =(-\nabla\cdot F_{2},\mathbf{v}_{2})_{\kappa},\] (69) \[(F_{2},v_{3})_{\kappa} =(G\nabla F_{3},v_{3})_{\kappa}-(P\mathbb{I},v_{3})_{\kappa},\] (70) \[(F_{3},\mathbf{v}_{4})_{\kappa} =(\nabla\times\phi,\mathbf{v}_{4})_{\kappa}. 
\tag{71}\] Integrating by parts equations (68) to (70) once, and, given the lack of prescribed boundary data, equation (71) twice, we deduce that \[(\nabla\times\mathbf{f},v_{1})_{\kappa} =(F_{1},\nabla\times v_{1})_{\kappa}-\langle\widehat{F}_{1},\mathbf{n}\times v_{1}\rangle_{\partial\kappa}, \tag{72}\] \[(F_{1},\mathbf{v}_{2})_{\kappa} =(F_{2},\nabla\mathbf{v}_{2})_{\kappa}-\langle\widehat{F}_{2},\mathbf{v}_{2}\otimes\mathbf{n}\rangle_{\partial\kappa},\] (73) \[(F_{2},v_{3})_{\kappa} =-(F_{3},\nabla\cdot(G^{\top}v_{3}))_{\kappa}+\langle\widehat{F}_{3}\otimes\mathbf{n},G^{\top}v_{3}\rangle_{\partial\kappa}-(P\mathbb{I},v_{3})_{\kappa},\] (74) \[(F_{3},\mathbf{v}_{4})_{\kappa} =(\nabla\times\phi,\mathbf{v}_{4})_{\kappa}-\langle\widehat{\phi}-\phi,\mathbf{n}\times\mathbf{v}_{4}\rangle_{\partial\kappa}, \tag{75}\] where \(\widehat{(\cdot)}\) indicates a consistent and conservative numerical flux approximation on \(\partial\kappa\). We now proceed to eliminate the additional auxiliary variables introduced in order to derive the so-called flux formulation. To this end, we first select \(\mathbf{v}_{4}=\nabla\cdot(G^{\top}v_{3})\); inserting equation (75) into equation (74) yields \[(F_{2},v_{3})_{\kappa}=-(\nabla\times\phi,\nabla\cdot(G^{\top}v_{3}))_{\kappa}+\langle\widehat{\phi}-\phi,\mathbf{n}\times(\nabla\cdot(G^{\top}v_{3}))\rangle_{\partial\kappa}+\langle\widehat{F}_{3}\otimes\mathbf{n},G^{\top}v_{3}\rangle_{\partial\kappa}-(P\mathbb{I},v_{3})_{\kappa}. \tag{76}\] Given a lack of prescribed boundary information regarding \(F_{3}\), we integrate the first term in equation (76) by parts again, giving \[(F_{2},v_{3})_{\kappa}=(\nabla(\nabla\times\phi),G^{\top}v_{3})_{\kappa}+\langle\widehat{\phi}-\phi,\mathbf{n}\times(\nabla\cdot(G^{\top}v_{3}))\rangle_{\partial\kappa}+\langle(\widehat{F}_{3}-\nabla\times\phi)\otimes\mathbf{n},G^{\top}v_{3}\rangle_{\partial\kappa}-(P\mathbb{I},v_{3})_{\kappa}. \tag{77}\] Let \(\mathbf{v}_{2}=\nabla\times v_{1}\); then inserting equation (73) into equation (72) yields \[(\nabla\times\mathbf{f},v_{1})_{\kappa}=(F_{2},\nabla(\nabla\times v_{1}))_{\kappa}-\langle\widehat{F}_{2},(\nabla\times v_{1})\otimes\mathbf{n}\rangle_{\partial\kappa}-\langle\widehat{F}_{1},\mathbf{n}\times v_{1}\rangle_{\partial\kappa}. \tag{78}\] Next we set \(v_{3}=\nabla(\nabla\times v_{1})\); substituting equation (77) into equation (78) and summing over all elements in the mesh \(\mathcal{T}^{h}\) gives \[\sum_{\kappa\in\mathcal{T}^{h}}(\nabla\times\mathbf{f},v_{1})_{\kappa} =\sum_{\kappa\in\mathcal{T}^{h}}(\nabla(\nabla\times\phi),G^{\top}\nabla(\nabla\times v_{1}))_{\kappa}-\sum_{\kappa\in\mathcal{T}^{h}}(P\mathbb{I},\nabla(\nabla\times v_{1}))_{\kappa}\] \[\quad+\sum_{\kappa\in\mathcal{T}^{h}}\langle\widehat{\phi}-\phi,\mathbf{n}\times(\nabla\cdot(G^{\top}\nabla(\nabla\times v_{1})))\rangle_{\partial\kappa}\] \[\quad+\sum_{\kappa\in\mathcal{T}^{h}}\langle(\widehat{F}_{3}-\nabla\times\phi)\otimes\mathbf{n},G^{\top}\nabla(\nabla\times v_{1})\rangle_{\partial\kappa}\] \[\quad-\sum_{\kappa\in\mathcal{T}^{h}}\langle\widehat{F}_{2},(\nabla\times v_{1})\otimes\mathbf{n}\rangle_{\partial\kappa}\] \[\quad-\sum_{\kappa\in\mathcal{T}^{h}}\langle\widehat{F}_{1},\mathbf{n}\times v_{1}\rangle_{\partial\kappa}.
\tag{79}\] Finally, given \[(P\mathbbm{1},\nabla(\nabla\times v_{1}))_{\kappa}=(P,\nabla\cdot(\nabla\times v_{ 1}))_{\kappa}=0 \tag{80}\] and replacing the flux approximations with their corresponding Neumann boundary data on \(\partial\Omega_{N}\) and \(\partial\Omega_{\phi,N}\), as well as the analytical solution with the (DG) finite element approximation \(\phi_{h}\in V_{\mathrm{DG}}^{h,p}=\{v\in L_{2}(\Omega):\left.v\right|_{\kappa} \in\mathscr{D}_{p}(\kappa)\ \forall\kappa\in\mathscr{T}^{h}\}\), \(p\geq 2\), and selecting \(v_{1}=\psi\) gives rise to the flux formulation: find \(\phi_{h}\in V_{\mathrm{DG}}^{h,p}\) such that \[\sum_{\kappa\in\mathscr{T}^{h}}(\nabla\times\mathbf{f},\psi)_{\kappa} =\sum_{\kappa\in\mathscr{T}^{h}}(\nabla(\nabla\times\phi_{h}),G^{ \top}\nabla(\nabla\times\psi))_{\kappa}\] \[\quad-\sum_{\kappa\in\mathscr{T}^{h}}\langle\mathbf{f}\times\mathbf{n}, \psi\rangle_{\partial\kappa\cap\partial\Omega_{\phi,N}}-\sum_{\kappa\in \mathscr{T}^{h}}\langle\mathbf{g}_{N},\nabla\times\psi\rangle_{\partial\kappa\cap \partial\Omega_{N}}\] \[\quad+\sum_{\kappa\in\mathscr{T}^{h}}\langle\widehat{\phi}_{h}- \phi_{h},\mathbf{n}\times(\nabla\cdot(G^{\top}\nabla(\nabla\times\psi)))\rangle_{ \partial\kappa\backslash\partial\Omega_{\phi,N}}\] \[\quad+\sum_{\kappa\in\mathscr{T}^{h}}\langle(\widehat{F}_{3}- \nabla\times\phi_{h})\otimes\mathbf{n},G^{\top}\nabla(\nabla\times\psi)\rangle_{ \partial\kappa\backslash\partial\Omega_{N}}\] \[\quad-\sum_{\kappa\in\mathscr{T}^{h}}\langle\widehat{F}_{2},( \nabla\times\psi)\otimes\mathbf{n}\rangle_{\partial\kappa\backslash\partial\Omega_ {N}}\] \[\quad-\sum_{\kappa\in\mathscr{T}^{h}}\langle\widehat{F}_{1},\mathbf{n} \times\psi\rangle_{\partial\kappa\backslash\partial\Omega_{\phi,N}}\quad \forall\psi\in V_{\mathrm{DG}}^{h,p}. \tag{81}\] ### Primal formulation We define the specialized average operators \[\left\{\!\left\{\cdot\right\}\!\right\}_{\ell}\right|_{F}=\ell^{+}(\cdot)^{+}+ \ell^{-}(\cdot)^{-}\ \text{and}\ \left\{\!\left\{\cdot\right\}\!\right\}_{\ell^{\mp}}\right|_{F}=\ell^{-}( \cdot)^{+}+\ell^{+}(\cdot)^{-}\quad F\in\Gamma_{I}, \tag{82}\] where the weights are constrained by \(\ell^{+}+\ell^{-}=1\); the relationship to the weights \(w^{\pm}\) will become evident below. 
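The averages (82), combined with the jump operators introduced in the next paragraph, split a facet term exactly; this is the content of Lemma A.1 below. The following short numerical sanity check verifies identity (83) on a single interior facet with randomly chosen traces and an arbitrary weight \(\ell^{+}\); the specific numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

q_p, q_m = rng.normal(size=2)                        # scalar traces q^+, q^-
v_p, v_m = rng.normal(size=2), rng.normal(size=2)    # vector traces v^+, v^-
n_p = np.array([0.6, 0.8]); n_m = -n_p               # unit normals with n^- = -n^+
l_p = 0.3; l_m = 1.0 - l_p                           # weights with l^+ + l^- = 1

jump_q = n_p * q_p + n_m * q_m                       # [[q]]
jump_v = v_p @ n_p + v_m @ n_m                       # [[v]]
avg_q = l_p * q_p + l_m * q_m                        # {{q}}_l
avg_v_swapped = l_m * v_p + l_p * v_m                # {{v}} with swapped weights

lhs = q_p * (v_p @ n_p) + q_m * (v_m @ n_m)          # facet term summed over both sides
rhs = jump_q @ avg_v_swapped + avg_q * jump_v        # right-hand side of (83), interior part
assert np.isclose(lhs, rhs)
print(lhs, rhs)
```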
For sufficiently smooth vector and scalar valued functions \(\mathbf{v}\) and \(q\), respectively, we define the following jump operators \[\left[\!\left[\mathbf{v}\right]\!\right]_{F} =\mathbf{n}^{+}\cdot\mathbf{v}^{+}+\mathbf{n}^{-}\cdot\mathbf{v}^{-}, F\in\Gamma_{I},\] \[\left[\!\left[\mathbf{v}\right]\!\right]_{\times}\right|_{F} =\mathbf{n}^{+}\times\mathbf{v}^{+}+\mathbf{n}^{-}\times\mathbf{v}^{-}, F\in\Gamma_{I},\] \[\left[\!\left[q\right]\!\right]_{F} =\mathbf{n}^{+}q^{+}+\mathbf{n}^{-}q^{-}, F\in\Gamma_{I}.\] **Lemma A.1**.: _For sufficiently smooth \(q:\Omega\mapsto\mathbb{R}\), \(\mathbf{v},\mathbf{z}:\Omega\mapsto\mathbb{R}^{2}\) and \(\sigma:\Omega\mapsto\mathbb{R}^{2\times 2}\), the following identities hold_ \[\sum_{\kappa\in\mathscr{T}^{h}}\langle q,\mathbf{v}\cdot\mathbf{n}\rangle _{\partial\kappa} =\langle\left[\![q]\!\right],\left\{\!\left\{\mathbf{v}\right\}\!\right\}_ {\ell^{\mp}}\rangle_{\Gamma_{I}}+\langle\left\{\!\left\{q\right\}\!\right\}_ {\ell},\left[\![\mathbf{v}]\!\right]\rangle_{\Gamma_{I}}+\langle q\mathbf{n},\mathbf{v} \rangle_{\partial\Omega}, \tag{83}\] \[\sum_{\kappa\in\mathscr{T}^{h}}\langle\sigma,\mathbf{v}\otimes\mathbf{n} \rangle_{\partial\kappa} =\langle\left[\![\mathbf{v}]\!\right]_{\otimes},\left\{\!\left\{\mathbf{ \sigma}\right\}\!\right\}_{\ell^{\mp}}\rangle_{\Gamma_{I}}+\langle\left\{\!\left\{ \mathbf{v}\right\}\!\right\}_{\ell},\left[\![\mathbf{\sigma}]\!\right]\rangle_{ \Gamma_{I}}+\langle\mathbf{v}\otimes\mathbf{n},\sigma\rangle_{\partial\Omega},\] (84) \[\sum_{\kappa\in\mathscr{T}^{h}}\langle\mathbf{z},\mathbf{n}\times\mathbf{v} \rangle_{\partial\kappa} =\langle\left[\![\mathbf{v}]\!\right]_{\times},\left\{\!\left\{\mathbf{z} \right\}\!\right\}_{\ell^{\mp}}\rangle_{\Gamma_{I}}-\langle\left\{\!\left\{ \mathbf{v}\right\}\!\right\}_{\ell},\left[\![\mathbf{z}]\!\right]_{\times}\rangle_{ \Gamma_{I}}+\langle\mathbf{n}\times\mathbf{v},\mathbf{z}\rangle_{\partial\Omega}. \tag{85}\] Proof.: Consider equation (83). 
In the absence of interior mesh facets, on the boundary it is clear that equation (83) trivially holds, i.e., \[\sum_{\kappa\in\mathcal{I}^{h}}\langle q,\mathbf{v}\cdot\mathbf{n}\rangle_{\partial\kappa \cap\partial\Omega}=\langle q\mathbf{n},\mathbf{v}\rangle_{\partial\Omega}.\] Let us now consider the interior mesh facets; to this end the left-hand side of equation (83) may be rewritten as follows \[\sum_{\kappa\in\mathcal{I}^{h}}\langle q,\mathbf{v}\cdot\mathbf{n}\rangle_{\partial \kappa\setminus\partial\Omega}=\langle q^{+},\mathbf{v}^{+}\cdot\mathbf{n}^{+} \rangle_{\Gamma_{I}}+\langle q^{-},\mathbf{v}^{-}\cdot\mathbf{n}^{-}\rangle_{\Gamma_{I}}.\] Using the fact that \(1=\ell^{+}+\ell^{-}\), we get \[\sum_{\kappa\in\mathcal{I}^{h}}\langle q,\mathbf{v}\cdot\mathbf{n}\rangle _{\partial\kappa\setminus\partial\Omega} =\langle\ell^{+}q^{+},\mathbf{v}^{+}\cdot\mathbf{n}^{+}\rangle_{\Gamma_{ I}}+\langle\ell^{-}q^{+},\mathbf{v}^{+}\cdot\mathbf{n}^{+}\rangle_{\Gamma_{I}}+ \langle\ell^{+}q^{-},\mathbf{v}^{-}\cdot\mathbf{n}^{-}\rangle_{\Gamma_{I}}\] \[\quad+\langle\ell^{-}q^{-},\mathbf{v}^{-}\cdot\mathbf{n}^{-}\rangle_{ \Gamma_{I}}.\] Noting that on an interior mesh facet we have \(q^{+}\mathbf{v}^{-}\cdot\mathbf{n}^{+}+q^{+}\mathbf{v}^{-}\cdot\mathbf{n}^{-}=0\) and \(q^{-}\mathbf{v}^{+}\cdot\mathbf{n}^{+}+q^{-}\mathbf{v}^{+}\cdot\mathbf{n}^{-}=0\), then \[\sum_{\kappa\in\mathcal{I}^{h}}\langle q,\mathbf{v}\cdot\mathbf{n}\rangle _{\partial\kappa\setminus\partial\Omega}\] \[\quad=\langle\ell^{+}q^{+},\mathbf{v}^{+}\cdot\mathbf{n}^{+}\rangle_{ \Gamma_{I}}+\langle\ell^{-}q^{+},\mathbf{v}^{+}\cdot\mathbf{n}^{+}\rangle_{\Gamma_{I} }+\langle\ell^{+}q^{-},\mathbf{v}^{-}\cdot\mathbf{n}^{-}\rangle_{\Gamma_{I}}+\langle \ell^{-}q^{-},\mathbf{v}^{-}\cdot\mathbf{n}^{-}\rangle_{\Gamma_{I}}\] \[\quad\quad\quad+\langle\ell^{+}q^{+},\mathbf{v}^{-}\cdot\mathbf{n}^{+} \rangle_{\Gamma_{I}}+\langle\ell^{-}q^{+},\mathbf{v}^{-}\cdot\mathbf{n}^{+}\rangle_{ \Gamma_{I}}+\langle\ell^{+}q^{+},\mathbf{v}^{-}\cdot\mathbf{n}^{-}\rangle_{\Gamma_{I} }+\langle\ell^{-}q^{+},\mathbf{v}^{-}\cdot\mathbf{n}^{-}\rangle_{\Gamma_{I}}\] \[\quad\quad\quad+\langle\ell^{+}q^{-},\mathbf{v}^{+}\cdot\mathbf{n}^{+} \rangle_{\Gamma_{I}}+\langle\ell^{-}q^{-},\mathbf{v}^{+}\cdot\mathbf{n}^{+}\rangle_{ \Gamma_{I}}+\langle\ell^{+}q^{-},\mathbf{v}^{+}\cdot\mathbf{n}^{-}\rangle_{\Gamma_{I} }+\langle\ell^{-}q^{-},\mathbf{v}^{+}\cdot\mathbf{n}^{-}\rangle_{\Gamma_{I}}.\] Collecting coefficients of \(\mathbf{v}^{+}\cdot\mathbf{n}^{+}\) and \(\mathbf{v}^{-}\cdot\mathbf{n}^{-}\) gives \[\sum_{\kappa\in\mathcal{I}^{h}}\langle q,\mathbf{v}\cdot\mathbf{n}\rangle _{\partial\kappa\setminus\partial\Omega}=\langle\{\!\{q\}\!\}_{\ell},\llbracket \mathbf{v}\rrbracket\rangle_{\Gamma_{I}}\] \[\quad\quad\quad+\langle\ell^{-}q^{+},\mathbf{v}^{+}\cdot\mathbf{n}^{+} \rangle_{\Gamma_{I}}+\langle\ell^{+}q^{-},\mathbf{v}^{-}\cdot\mathbf{n}^{-}\rangle_{ \Gamma_{I}}\] \[\quad\quad\quad+\langle\ell^{+}q^{+},\mathbf{v}^{-}\cdot\mathbf{n}^{+} \rangle_{\Gamma_{I}}+\langle\ell^{-}q^{+},\mathbf{v}^{-}\cdot\mathbf{n}^{+}\rangle_{ \Gamma_{I}}+\langle\ell^{-}q^{+},\mathbf{v}^{-}\cdot\mathbf{n}^{-}\rangle_{\Gamma_{I}}\] \[\quad\quad\quad+\langle\ell^{+}q^{-},\mathbf{v}^{+}\cdot\mathbf{n}^{+} \rangle_{\Gamma_{I}}+\langle\ell^{+}q^{-},\mathbf{v}^{+}\cdot\mathbf{n}^{-}\rangle_{ \Gamma_{I}}+\langle\ell^{-}q^{-},\mathbf{v}^{+}\cdot\mathbf{n}^{-}\rangle_{\Gamma_{I}}.\] If we now collect coefficients of \(q^{+}\mathbf{n}^{+}\) and \(q^{-}\mathbf{n}^{-}\), we get 
\[\sum_{\kappa\in\mathcal{I}^{h}}\langle q,\mathbf{v}\cdot\mathbf{n}\rangle _{\partial\kappa\setminus\partial\Omega}=\langle\{\!\{q\}\!\}_{\ell},\llbracket \mathbf{v}\rrbracket\rangle_{\Gamma_{I}}+\langle\{\!\{\mathbf{v}\}\!\}_{\ell^{\mp}}, \llbracket\mathbf{q}\rrbracket\rangle_{\Gamma_{I}}\] \[\quad\quad\quad+\langle\ell^{-}q^{+},\mathbf{v}^{-}\cdot\mathbf{n}^{+} \rangle_{\Gamma_{I}}+\langle\ell^{-}q^{+},\mathbf{v}^{-}\cdot\mathbf{n}^{-}\rangle_{ \Gamma_{I}}+\langle\ell^{+}q^{-},\mathbf{v}^{+}\cdot\mathbf{n}^{+}\rangle_{\Gamma_{I} }+\langle\ell^{+}q^{-},\mathbf{v}^{+}\cdot\mathbf{n}^{-}\rangle_{\Gamma_{I}},\] \[\quad\quad=\langle\{\!\{q\}\!\}_{\ell},\llbracket\mathbf{v}\rrbracket \rangle_{\Gamma_{I}}+\langle\{\!\{\mathbf{v}\}\!\}_{\ell^{\mp}},\llbracket\mathbf{q} \rrbracket\rangle_{\Gamma_{I}}.\] Noting that \(\langle\mathbf{z},\mathbf{n}\times\mathbf{v}\rangle_{\partial\kappa\setminus\partial\Omega}=- \langle\mathbf{n}\times\mathbf{z},\mathbf{v}\rangle_{\partial\kappa\setminus\partial\Omega}\) equations (84) and (85) follow analogously. Employing the identities in equations (83) to (85), equation (81) may be rewritten in the following equivalent manner \[(\nabla_{h} \times\mathbf{f},\psi)_{\Omega}\] \[=(\nabla_{h}(\nabla_{h}\times\phi_{h}),G^{\top}\nabla_{h}(\nabla_{h }\times\psi))_{\Omega}-\langle\mathbf{f}\times\mathbf{n},\psi\rangle_{\partial\Omega_{ \phi,N}}-\langle\mathbf{g}_{N},\nabla\times\psi\rangle_{\partial\Omega_{N}}\] \[\quad+\langle\{\!\!\{\!\widehat{\phi}_{h}-\phi_{h}\}\!\!\}_{ \ell^{\mp}},[\![\nabla_{h}\cdot(G^{\top}\nabla_{h}(\nabla_{h}\times\psi))]\!]_{ \times}\rangle_{\Gamma_{I}}\] \[\quad-\langle[\widehat{\phi}_{h}-\phi_{h}]\!\!\}_{\times},\{\!\! \{\!\nabla_{h}\cdot(G^{\top}\nabla_{h}(\nabla_{h}\times\psi))\}\!\!\}_{\ell} \rangle_{\Gamma_{I}}\] \[\quad+\langle\widehat{\phi}_{h}-\phi_{h},\mathbf{n}\times\nabla_{h} \cdot(G^{\top}\nabla_{h}(\nabla_{h}\times\psi))\rangle_{\partial\Omega_{\phi,D}}\] \[\quad+\langle[\widehat{F}_{3}-\nabla_{h}\times\phi_{h}]\!\!\}_{ \phi},\{\!\!\{\!\widehat{G}^{\top}\nabla_{h}(\nabla_{h}\times\psi)\}\!\!\}_{ \ell^{\mp}}\rangle_{\Gamma_{I}}\] \[\quad+\langle\{\!\!\{\!\widehat{F}_{3}-\nabla_{h}\times\phi_{h}\} \!\!\}_{\ell},[\![G^{\top}\nabla_{h}(\nabla_{h}\times\psi)]\!]\!\rangle_{ \Gamma_{I}}\] \[\quad+\langle(\widehat{F}_{3}-\nabla_{h}\times\phi_{h})\otimes\bm {n},G^{\top}\nabla_{h}(\nabla_{h}\times\psi)\rangle_{\partial\Omega_{D}}\] \[\quad-\langle\{\!\!\{\!\widehat{F}_{2}\}\!\}_{\ell^{\mp}},[\![ \nabla_{h}\times\psi]\!\!]_{\otimes}\rangle_{\Gamma_{I}}-\langle[\widehat{F}_{ 2}]\!\!],\{\!\{\!\!\{\!\!\{\!\!\{\!\!\{\!\{\!\{\!\{\!\!\{\!\!\{\!\! ### \(C^{0}\)-IPG formulation In order to reduce equation (91) to the \(C^{0}\)-IPG formulation we restrict the space in which the FE solution is sought to \(V_{0}^{h,p}\). Note that given the \(C^{0}\) continuity of a function \(\psi_{h}\in V_{0}^{h,p}\) we have \(([\![\psi_{h}]\!]_{\otimes})_{ij}=0\), \([\![\psi_{h}]\!]_{\times}=0\) and \(\left.\psi_{h}\right|_{\partial\Omega_{D}}=0\). 
Furthermore, we notice that for isotropic viscosity we have \(2\mu\varepsilon(\mathbf{u})=G\nabla\mathbf{u}=G^{\top}\nabla\mathbf{u}\), where \[G=\mu\begin{pmatrix}\begin{pmatrix}2&0\\ 0&0\end{pmatrix}&\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\\ \begin{pmatrix}0&1\\ 1&0\end{pmatrix}&\begin{pmatrix}0&0\\ 0&2\end{pmatrix}\end{pmatrix}, \tag{92}\] and integrating the left-hand side by parts we have \[(\nabla_{h}\times\mathbf{f},\psi)=(\mathbf{f},\nabla_{h}\times\psi)-\langle\mathbf{f}\times\mathbf{n},\psi\rangle_{\partial\Omega_{\phi,N}}. \tag{93}\] Eliminating these terms from the facet oriented formulation in equation (91), setting \(w^{\pm}=\ell^{\mp}\) (i.e., \(\{\!\{\cdot\}\!\}=\{\!\{\cdot\}\!\}_{\ell^{\mp}}\)), substituting for \(\mathbf{u}_{h}=\nabla_{h}\times\phi_{h}\) and \(\mathbf{v}_{h}=\nabla_{h}\times\psi_{h}\) and rearranging for bilinear and linear components we arrive at the formulation in equations (16), (17) and (19).

## Acronyms

- **DG**: discontinuous Galerkin
- **FE**: finite element
- **HDG**: hybrid discontinuous Galerkin
- **IPG**: interior penalty Galerkin
- **PDE**: partial differential equation
- **RIPG**: robust interior penalty Galerkin
- **SIPG**: symmetric interior penalty Galerkin
- **TH**: Taylor-Hood
- **UFL**: Unified Form Language
2302.14838
EvoPrompting: Language Models for Code-Level Neural Architecture Search
Given the recent impressive accomplishments of language models (LMs) for code generation, we explore the use of LMs as adaptive mutation and crossover operators for an evolutionary neural architecture search (NAS) algorithm. While NAS still proves too difficult a task for LMs to succeed at solely through prompting, we find that the combination of evolutionary prompt engineering with soft prompt-tuning, a method we term EvoPrompting, consistently finds diverse and high performing models. We first demonstrate that EvoPrompting is effective on the computationally efficient MNIST-1D dataset, where EvoPrompting produces convolutional architecture variants that outperform both those designed by human experts and naive few-shot prompting in terms of accuracy and model size. We then apply our method to searching for graph neural networks on the CLRS Algorithmic Reasoning Benchmark, where EvoPrompting is able to design novel architectures that outperform current state-of-the-art models on 21 out of 30 algorithmic reasoning tasks while maintaining similar model size. EvoPrompting is successful at designing accurate and efficient neural network architectures across a variety of machine learning tasks, while also being general enough for easy adaptation to other tasks beyond neural network design.
Angelica Chen, David M. Dohan, David R. So
2023-02-28T18:37:25Z
http://arxiv.org/abs/2302.14838v3
# EvoPrompting: Language Models for Code-Level Neural Architecture Search ###### Abstract Given the recent impressive accomplishments of language models (LMs) for code generation, we explore the use of LMs as general adaptive mutation and crossover operators for an evolutionary neural architecture search (NAS) algorithm. While NAS still proves too difficult a task for LMs to succeed at solely through prompting, we find that the combination of evolutionary prompt engineering with soft prompt tuning, a method we term EvoPrompting, consistently finds diverse and high performing models. We first demonstrate that EvoPrompting is effective on the computationally efficient MNIST-1D dataset, where EvoPrompting produces convolutional architecture variants that outperform both those designed by human experts and naive few-shot prompting in terms of accuracy and model size. We then apply our method to searching for graph neural networks on the CLRS Algorithmic Reasoning Benchmark, where EvoPrompting is able to design _novel_ architectures that outperform current state-of-the-art models on 21 out of 30 algorithmic reasoning tasks while maintaining similar model size. EvoPrompting is successful at designing accurate and efficient neural network architectures across a variety of machine learning tasks, while also being general enough for easy adaptation to other tasks beyond neural network design. ## 1 Introduction Scaling of Transformers (Vaswani et al., 2017) has produced language models (LM) with impressive performance. Beyond achieving state-of-the-art results on conventional natural language processing tasks, these LMs demonstrate breakthrough technical capabilities, such as learning how to code (Chen et al., 2021), doing math (Noorbakhsh et al., 2021), and solving reasoning problems (Wei et al., 2022). Yet, despite these strides, several works have noted LMs' current limitations in solving complex problems and creating novel solutions (Qian et al., 2022; Dakhel et al., 2022). In this work, we improve upon a base LM's ability to propose novel and diverse solutions to complex reasoning problems by iteratively evolving in-context prompts and prompt-tuning the LM. We call this technique EvoPrompting and demonstrate its success on the narrow but difficult task of deep learning architecture design. Our key finding is that, while LMs largely fail at designing novel and effective neural architectures via naive few-shot prompting, EvoPrompting enables LMs to create novel and effective deep neural architectures, particularly when combined with prompt-tuning methods. EvoPrompting is based on the recently popularized practice of in-context prompting. Prompting is the technique of conditioning a LM's decoded output on a custom prefix known as a _prompt_, which can include natural language task instructions or a few input-output examples. The prompt is used only at inference time and requires no gradient updates (Brown et al., 2020). In past work, prompting has been demonstrated to elicit impressive performance on a wide variety of tasks without requiring task-specific fine-tuning (Sanh et al., 2021; Wei et al., 2022; Kojima et al., 2022). Here, we leverage LM prompting for the task of designing improved deep learning architectures. Figure 1: EvoPrompting, which combines evolutionary search with soft-prompt tuning, discovers smaller and better performing architectures on MNIST1D than alternative search methods. 
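As a concrete illustration of the prompting mechanism described above, the sketch below assembles a few-shot prompt prefix from a task instruction and in-context input–output examples; the task wording and formatting are illustrative placeholders of ours, not the exact prompt format used in this work.

```python
def build_few_shot_prompt(instruction, examples, query):
    """Build a prompt prefix that conditions the LM's decoding; no weights are updated."""
    parts = [instruction.strip(), ""]
    for example_input, example_output in examples:
        parts += [f"Input:\n{example_input}", f"Output:\n{example_output}", ""]
    parts += [f"Input:\n{query}", "Output:"]
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    instruction="Propose an improved neural network architecture in code.",
    examples=[("<parent program 1>", "<child program 1>"),
              ("<parent program 2>", "<child program 2>")],
    query="<parent program to improve>",
)
print(prompt)
```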
To engineer adequately powerful prompts, we draw inspiration from existing ideas in the field of neural architecture search. There, evolution has long been used to search over discrete spaces to efficiently discover improved deep learning architectures (Yao, 1999; Real et al., 2017). However, evolutionary approaches typically require careful manual design of a discrete search space (_e.g._ a small set of known convolutional neural network components, as in Real et al. (2017) or TensorFlow primitives, as in So et al. (2021)). As a result, the performance of the evolutionary algorithm is then sensitive to and possibly limited by the design of the search space. In EvoPrompting the LM's vocabulary replaces the search space, which both increases the flexibility of the search and reduces reliance on manual design. The LM is also an _adaptive_ mutation/crossover operator, in the sense that it can be improved round over round via prompt-tuning. Furthermore, EvoPrompting also improves on naive few-shot prompting by using an evolutionary search approach to iteratively improve the in-context examples for few-shot prompting. To demonstrate the effectiveness of this method, we first do extensive testing and analyses on the relatively low-compute problem of MNIST-1D (Greydanus, 2020). The key finding of these experiments is that EvoPrompting is capable of producing conventional convolutional architectures superior to published manually designed models. In Section 4.2 we then apply our method to the more challenging task of designing graph neural networks using problems from the CLRS Algorithmic Reasoning Benchmark (Velickovic et al., 2022). There, EvoPrompting generates novel architectures and modules that outperform state-of-the-art models on 21 out of 30 algorithmic tasks (Appx. A.5). The contributions of this work are summarized as follows: 1. We propose EvoPrompting, a method that utilizes evolutionary search to create and curate data to improve LM in-context prompting examples. Although this work focuses on the specific task of neural architecture design to develop this method, EvoPrompting is generally applicable to LM tasks that rely on in-context learning or prompt-tuning. 2. A study applying LMs to code-level neural architecture design. Our experiments demonstrate that applying few-shot prompting alone to neural architecture design is unsuccessful, but few-shot prompting with EvoPrompting enables LMs to create architectures that outperform those designed by human experts. 3. Novel graph neural network architectures that were discovered using EvoPrompting. These architectures outperform the current state-of-the-art architecture, Triplet-GMPNN (Ibarz et al., 2022), on 21 out of 30 CLRS Algorithmic Reasoning Benchmark tasks (Appx. A.5). Figure 2: An overview of EvoPrompting. After _initializing_ the search with a handful of manually designed program seeds, the meta-learning loop begins. First, our code-pretrained LM uses the seeds as in-context prompt examples to _generate_ candidate architectures. Those candidate architectures are then _trained_ on the task training data and _evaluated_ on the task validation set. Next, the most fit members of the population are _selected_ as in-context examples for the next meta-learning loop and all evaluated individuals are used as training data for prompt-tuning the LM. From there, the meta-learning loop begins again. 
## 2 Related Work LMs for code generationScaling Transformers (Vaswani et al., 2017) is currently the most popular route for reliably creating state-of-the-art natural language systems (Brown et al., 2020; Du et al., 2021; BigScience Workshop et al., 2022; Zhang et al., 2022; Thoppilan et al., 2022; Chowdhery et al., 2022). Many works have observed that these LM systems are capable of performing technical tasks such as writing code (Chen et al., 2021), doing math (Noorbakhsh et al., 2021), and solving reasoning problems (Wei et al., 2022). Our work is most closely related to efforts that have applied LMs to coding tasks (Chen et al., 2021; Odena et al., 2021; Xu et al., 2022; Wang et al., 2021; Ahmad et al., 2021; Feng et al., 2020), since our technique proposes architectures in code. PromptingBrown et al. (2020) critically demonstrated that LMs can be prompted with in-context examples to steer LM decoding towards solving specific problems, potentially unseen in the training data. Numerous works have utilized this prompting to further unlock latent LM abilities (Sanh et al., 2021; Wei et al., 2022; Kojima et al., 2022). In this work, we employ prompting to guide our LMs to perform neural architecture design. Because this prompting is critical to LM performance, numerous works have analyzed prompting and proposed different prompt engineering strategies (Min et al., 2022; Liu et al., 2021). Examples include works using retrieval systems to augment prompts (Rubin et al., 2021), determining optimal few-shot permutations (Lu et al., 2021; Zhao et al., 2021), employing LMs to create natural language prompts (Zhou et al., 2022), and including prompt templating in weight training to improve in-context learning (Wei et al., 2021; Ouyang et al., 2022; Sanh et al., 2021). From the perspective of Dohan et al. (2022), prompts are parameters and may be tuned using a variety of probabilistic inference techniques. Brooks et al. (2022) proposes using these prompts to implement both the rollout policy and world model of a policy iteration algorithm, where the in-context examples are selected to be the most relevant towards the current inference. Our EvoPrompting method extends these efforts by proposing evolutionary search as a means to better design prompts for in-context learning. Evolutionary AlgorithmsThe way in which we use evolution to iteratively improve our in-context example architectures is closely related to evolutionary neural architecture search (NAS) (Real et al., 2017, 2018; Elsken et al., 2018; So et al., 2019; Liu et al., 2020). In evolutionary NAS, architecture design is posed as a search problem - architectures are represented as discrete DNAs and evolved based on fitness metrics that assess architecture performance, resulting in a final population of high quality architectures. Our method follows a similar approach, but with our LM replacing two key components. Firstly, the search space is defined over arbitrary strings, represented using the LM's SentencePiece tokens (Kudo and Richardson, 2018). This stands in stark contrast to conventional NAS, which relies on hand-crafted search spaces, that can strongly bias and constrain the searches (Li and Talwalkar, 2019; Sciuto et al., 2019; Bender et al., 2020). Even in cases where much more open ended search spaces are used, these spaces still require human expertise to be designed and implemented (Real et al., 2020; So et al., 2021). 
In this work, any syntactically valid piece of code is covered in our search space; the only limitations are those imposed by the coding language and libraries (ex., Python and JAX). Secondly, the LM replaces the mutation and crossover functions that are commonly hand-designed (Koza, 1993). Not only does this reduce human bias, but it also allows the mutation and crossover functions to improve over the course of the search via prompt-tuning, as we demonstrate in Section 4. A work close to ours is Lehman et al. (2022), in which an LM is first fine-tuned to produce Python code diffs given one of three fixed messages that describe what should be changed, and then used as the mutation operator in an evolutionary algorithm coupled with the MAP-Elites (Multi-dimensional Archive of Phenotypic-Elites) algorithm designed as a quality-diversity (QD) algorithm. Their work is validated on the Sodarace domain, a virtual game where an agent must navigate a robot through various race tracks. Our work differs in that we use an LM as a crossover operator, without specifying the class of changes to make, which may offer greater flexibility. Furthermore, we evaluate our approach on the real-world task of NAS, rely on mixed temperature sampling of the LM for diversity instead of using a QD algorithm, and also use prompt-tuning in our algorithm. We choose not to use a QD algorithm such as MAP-Elites since this approach requires the design and discretization of a descriptor space, which is complex and difficult to hand-design for the space of all possible neural networks. A concurrent work, Meyerson et al. (2023), also uses an LM as a crossover operator to produce variations of text-based genotypes in the domains of symbolic regression, text sentiment, images, and Sodaracer programs. Like Lehman et al. (2022), they use MAP-Elites to trade off quality with diversity and demonstrate that their overall algorithm reliably produces a diverse range of outputs. Their study varies from ours in a number of ways - we additionally optimize for state-of-the-art task performance (rather than only diversity of outputs), we condition on target performance in our prompts, we do not use MAP-Elites, we use prompt-tuning to iteratively improve the LM's crossover abilities, and we apply our algorithm to the real-world task of NAS instead. Sequence models improving machine learningThis work is not the first time that deep sequence models have been used to improve machine learning workflows. For example, Chen et al. (2022) applies Transformers to hyperparameter optimization and Zoph & Le (2016) uses recurrent neural networks to perform architecture search via reinforcement learning. Our work differs from these approaches in that our model's action space is not specifically crafted for our target task. Instead, we use a pre-trained LM with a conventional SentencePiece output vocabulary to generate Python code (Kudo & Richardson, 2018). ## 3 EvoPrompting Method ### Architecture search problem formulation Let our target task be denoted by \(\mathcal{T}\) and \(\mathcal{D}\) be a dataset consisting of input-output pairs \((x,y)\in\mathcal{D}\) for task \(\mathcal{T}\). Define the probability distribution \(\pi_{\theta}:\mathcal{V}\rightarrow\{0,1\}\) over vocabulary \(\mathcal{V}\) as a language/code model parameterized by \(\theta\), from which we can sample code segments \(c\in\mathcal{V}^{*}\) (for \(\mathcal{V}^{*}\) the Kleene closure of \(\mathcal{V}\), _i.e._ the set of all finite concatenations of symbols in \(\mathcal{V}\)). 
We also have an evaluation function \(\textsc{Eval}_{\mathcal{T}}(c,\mathcal{D}):\mathcal{V}^{*}\times\mathcal{D} \rightarrow\mathbb{R}\) that trains the model architecture given by code \(c\) on \(\mathcal{D}\) and outputs some real-valued fitness \(s\in\mathbb{R}\), which can be a function of model accuracy and other model characteristics. Our ultimate goal is to identify some set of code samples \(c\sim\mathcal{V}^{*}\) that define neural network architectures that, when trained on \(\mathcal{D}\), maximize the reward \(\textsc{Eval}_{\mathcal{T}}(c,\mathcal{D})\). ### LMs for evolutionary crossover and mutation The goal of our algorithm is to generate a set \(C\) consisting of \(k\) neural network architectures that maximize the reward \(\textsc{Eval}_{\mathcal{T}}(c,\mathcal{D})\) for arbitrary pairs of \((\mathcal{D},\mathcal{T})\): \[\arg\max_{C=\begin{subarray}{c}|c\sim\pi_{\theta}\\ |C|=k\end{subarray}}\mathbb{E}_{c\in C}\mathbb{E}_{(x,y)\in\mathcal{D}}\left[ \textsc{Eval}_{\mathcal{T}}(c,\mathcal{D})\right] \tag{1}\] Since this optimization problem is generally intractable, we turn to a black-box evolutionary approach for iteratively generating, scoring, and selecting the best neural network architectures. Indeed, evolution has been demonstrated to perform particularly well in this domain because of how sparse high quality solutions tend to be (Real et al., 2017, 2018). Although evolution has been used for architecture search many times before (Real et al., 2017, 2018; Elsken et al., 2018; So et al., 2019), we improve upon this approach by using an LM for crossover and mutation operations. Using an LM in this manner has multiple appealing properties. While past evolutionary approaches for neural architecture search have required careful design and specification of a discrete search space (_e.g._ the space of high level modules (Real et al., 2018; So et al., 2019), TensorFlow statements (So et al., 2021), or basic mathematical operations (Real et al., 2020)), our algorithm's search space includes any neural network architecture that can be represented in Python. This allows for greater flexibility and diversity of the output architectures, and reduces the amount of manual design and human bias involved in the algorithm. Furthermore, modern pre-trained LMs are typically trained on massive datasets containing a significant number of source code files. This pre-training process encodes useful knowledge about code structure and functionality that is not otherwise available in evolutionary algorithms. Lastly, LMs can also be used as _self-adaptive crossover operators_, in which the crossover operator is incrementally trained round after round to generate higher reward crossovers. 
``` 1:Input: LM \(\pi_{\theta_{0}}\), dataset \(\mathcal{D}\), task \(\mathcal{T}\), \(T\) number of rounds, \(m\) number of few-shot prompts per round, \(n\) number of samples to generate per prompt, \(k\) number of in-context examples per prompt, \(p\) number of survivors to select per generation, \(\alpha\) the upper threshold for the test error 2:\(G\leftarrow[]\) 3:\(P\leftarrow\textsc{InitializePopulation}(p)\) 4:\(t\gets 0\) 5:while\(t<T\)do 6:\(C\leftarrow\textsc{CrossMut}(\pi_{\theta_{t}},P,m,k,n)\) 7:\(C_{\textsc{Evaled}}\leftarrow\textsc{FilterAndEval}(C,\mathcal{T},\mathcal{D},\alpha)\) 8:\(G\gets G+C_{\textsc{Evaled}}\) 9:if\(t<T-1\)then 10:\(P\leftarrow\textsc{GetTop}(G,p)\) 11:\(\theta_{t+1}\leftarrow\textsc{Train}(\theta_{t},C_{\textsc{Evaled}}\setminus P)\) 12:endif 13:\(t\gets t+1\) 14:endwhile 15:Return \(\textsc{GetTop}(G,p)\) ``` **Algorithm 1**Complete meta-learning evolutionary algorithm using \(p_{\theta}\) as a crossover and mutation operator. ### EvoPrompting meta-learning algorithm Our complete algorithm is described in Algorithm 1. At the core of our algorithm is a scoring function, which describes the general "fitness" of a model on the task at hand. Since higher accuracy can often be achieved simply by increasing the number of parameters in a model, we use the negative product of the validation error and the model size as the fitness (see step 6 in Algorithm 3). More complicated objective functions have previously been used for dual objective neural architecture search (Bender et al., 2020), but we find this simple product works best in our case and requires minimal tuning. Generally the higher the fitness, the better (with some caveats, noted in our description of fitness-based selection below). The end-to-end meta-learning algorithm has several stages, which we describe below: InitializationWe start by setting our global historical population \(G\) to the empty list and initializing our current population \(P\) with a few seed architectures that are known to be well-designed (step 3 in Algorithm 1), which _warm-starts_ the search (So et al., 2019). These seed models are evaluated using the same Eval\({}_{\mathcal{T}}(c,\mathcal{D})\) function that is used to evaluate new candidate models (see below). Crossing over and mutating the parent modelsTo mutate and apply crossover to the parents \(P\) selected in the last step, we use both the source code and the evaluation metrics of each model in \(P\) to create few-shot prompts. In the last line of the prompt, we create a target set of metrics to condition \(\pi_{\theta}\)'s generations on that indicate the desired validation accuracy and model size of the proposed architecture. We set the target model size as \(90\%\) of the minimum model size of the parent models, rounded to the nearest 100 parameters, and the target validation accuracy as \(102\%\) of the maximum validation accuracy of the parent models, rounded to the nearest tenth of a percent. We create \(m\) such prompts per round, each with \(k\) in-context examples selected uniformly at random from \(P\). An example of a prompt might look like the following: ``` 1"""Metrics: 2{'num_params':'4800','val_accuracy':'0.865'} 3""" 4classModel(nn.Module): 5#nn.compact 6def_call_(self,x): 7x=nn.Dense(features=10)(x) 8returnx 9 10"""Metrics: 11{'num_params':'4300','val_accuracy':'0.880'} 12""" 13classModel(nn.Module): ``` Listing 1: An example of a few-shot prompt. In practice we use 2-shot prompts but we omit the second in-context example here for brevity. 
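As a concrete sketch of how such prompts and their target metrics can be assembled from the parent models (following the 90%-of-minimum-size and 102%-of-maximum-accuracy heuristics described above), consider the following; `make_prompt`, `metrics_header`, and the dictionary layout are illustrative assumptions rather than the authors' implementation.

```
import random

def metrics_header(num_params, val_accuracy):
    # Renders the metadata comment shown in Listing 1, e.g.
    # """Metrics:\n{'num_params': '4800', 'val_accuracy': '0.865'}\n"""
    metrics = {"num_params": str(num_params), "val_accuracy": str(val_accuracy)}
    return f'"""Metrics:\n{metrics}\n"""'

def make_prompt(parents, k=2):
    # `parents` is a list of dicts with keys 'code', 'num_params', 'val_accuracy'.
    examples = random.sample(parents, k)        # k in-context examples drawn uniformly from P
    blocks = [metrics_header(p["num_params"], p["val_accuracy"]) + "\n" + p["code"]
              for p in examples]
    # Target metrics that condition the LM's generation: 90% of the smallest parent
    # size (rounded to the nearest 100 parameters) and 102% of the best parent
    # accuracy (rounded to the nearest tenth of a percent).
    target_params = int(round(0.9 * min(p["num_params"] for p in parents), -2))
    target_acc = round(1.02 * max(p["val_accuracy"] for p in parents), 3)
    header = metrics_header(target_params, target_acc) + "\nclass Model(nn.Module):"
    return "\n\n".join(blocks + [header])
```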
Finally, we use \(\pi_{\theta}\) to generate \(n\) samples per prompt, yielding a total of \(n\times m\) child samples per round of evolution. We denote this portion of the algorithm as CrossMut\((\pi_{\theta_{t}},P,m,k,n)\) (Algorithm 2 and step 6 of Algorithm 1). Filtering and scoring child samplesTo score and filter child samples \(c\) generated by \(\pi_{\theta}\), we use the evaluation function Eval\({}_{\mathcal{T}}(c,\mathcal{D})\), which trains the model encoded by \(c\) on the dataset \(\mathcal{D}\) and returns the lowest validation error encountered during training. All child models are trained for the same number of steps, with the same optimizer hyperparameters. Since our fitness function can potentially be gamed by generating arbitrarily small models, we also add a validation error threshold \(\alpha\), which is the upper limit of the validation error that a model can incur without being removed from \(G\), the global population. We refer to this function as FilterAndEval\((C,\mathcal{T},\mathcal{D},\alpha)\) (Algorithm 3 and step 7 of Algorithm 1). Lastly, we add the remaining trainable models and their associated fitness scores into \(G\) (step 8 of Algorithm 1). Fitness-based selectionAfter evaluating all child models in the current round, we apply fitness-based selection to identify top candidate models for crossover (step 10 of Algorithm 1). We denote this as GetTop(\(G,p\)), which refers simply to selecting the \(p\) models with the highest fitness scores from \(G\). Once these models have been selected, they are permanently removed from the population and cannot be used again as parents for crossover. Training \(\pi_{\theta_{t}}\)Lastly, all child models generated in the current round that were not previously selected for crossover (_i.e._\(C_{\text{Evaled}}\setminus P\)) are used to prompt-tune \(\pi_{\theta}\) for the next round (step 11 of Algorithm 1). ## 4 Experiments and Results We evaluate our meta-learning algorithm on two datasets - MNIST-1D (Greydanus, 2020) and the CLRS algorithmic reasoning benchmark (Velickovic et al., 2022). While the former benchmark is lightweight and permits us to do a more thorough analysis of our algorithm, the latter is a newer benchmark with more headroom for discovering novel architectures with better performance. In all of our experiments, our \(\pi_{\theta_{0}}\) (_i.e._ the crossover operator) is a 62B parameter PALM model (Chowdhery et al., 2022) pre-trained on 1.3T tokens of conversational, web, and code documents. It is additionally fine-tuned on a corpus of 64B tokens from near-deduplicated, permissively-licensed Python source code files from Github. We always sample from \(\pi_{\theta_{0}}\) with mixed temperature sampling, in which the sampling temperature is selected uniformly from \([0.2,0.6,0.8,1.0]\). Between each round, the model is prompt-tuned (Lester et al., 2021) for 5 epochs with a soft prompt length of 16, batch size of 16, and learning rate of 0.1 (as described in Section 3.3 and Step 11 of Algorithm 1). Unless stated otherwise, we run 10 rounds of evolution with 10 prompts per round and 16 samples generated per prompt, yielding a total of 160 models generated per round and 1600 models generated during the entire search. Duplicate models and un-trainable models are not scored, but do count toward the 1600. All other EvoPrompting hyperparameters are listed in Table A.1. 
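As a compact summary of the scoring, filtering, and selection steps of Algorithm 1 described in Section 3.3, a minimal sketch follows; the helper `train_and_eval` stands in for the per-candidate training loop, and all names are illustrative rather than the authors' code.

```
def fitness(val_error, num_params):
    # Fitness from Section 3.3: negative product of validation error and model size,
    # so smaller and more accurate models score higher.
    return -(val_error * num_params)

def filter_and_eval(children, train_and_eval, alpha):
    # FilterAndEval: train every candidate and drop those whose validation error
    # exceeds the threshold alpha (guards against degenerate, arbitrarily small models).
    evaluated = []
    for code in children:
        result = train_and_eval(code)          # returns (val_error, num_params) or None
        if result is None:                     # un-trainable / duplicate candidates are skipped
            continue
        val_error, num_params = result
        if val_error <= alpha:
            evaluated.append({"code": code, "val_error": val_error,
                              "num_params": num_params,
                              "fitness": fitness(val_error, num_params)})
    return evaluated

def get_top(population, p):
    # GetTop: the p highest-fitness individuals become parents for the next round.
    return sorted(population, key=lambda ind: ind["fitness"], reverse=True)[:p]
```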
### MNIST-1D DatasetWe apply our method first to MNIST-1D (Greydanus, 2020), a one-dimensional, scaled-down version of the MNIST dataset containing examples that are 20 times smaller than the original MNIST dataset. Each example is only 40-dimensional, with 4000 examples in the training dataset and 1000 in test. Since there is no validation dataset, we randomly set aside 500 examples from the training dataset to use as the validation dataset. Despite being more lightweight, MNIST-1D distinguishes more between different architecture types (Greydanus, 2020) than its larger counterpart MNIST (LeCun et al., 1998). Meta-learning set-upThroughout the model search we use the AdamW optimizer (Loshchilov and Hutter, 2019) to train each child model on a single NVIDIA Tesla P100 GPU for 8000 steps, with learning rate 0.01 and batch size 128. We score child models according to the best validation accuracy achieved during training. We also seed the search with 4 seed models - the 3 hand-designed neural baselines from the original MNIST-1D paper (Greydanus, 2020) (GRU, CNN, and MLP) and a fourth, larger CNN model of our own design. All four are implemented with Flax (Heek et al., 2020). We refer the reader to Appendix A.2 for the source code of these seed models. BaselinesWe compare EvoPrompting with the following baselines: * Naive few-shot prompting: This baseline simply generates code samples \(c\sim\pi_{\theta_{0}}(\cdot|p)\), where \(p\) is a 2-shot prompt constructed using in-context examples randomly selected from the seed models (Listing 1). This is essentially an ablation of steps 7-12 in Algorithm 1 with \(T=1\). We increase the number of samples generated per prompt for the naive prompting baseline such that the total number of samples generated by \(\pi_{\theta}\) matches that of the other baselines. * EvoPrompting (- prompt-tuning): We run the entire algorithm as is, but without prompt-tuning between each round. This is an ablation of step 11 from Algorithm 1. * EvoPrompting (random parents): Instead of selecting the most fit models from the last round as parents for the next round, we select parents randomly. This is an ablation of Step 10 in Algorithm 1, which is the GetTop(\(G,p\)) step. EvoPrompting finds smaller and more accurate modelsFigure 1 shows a comparison of the test error and model size of the top 20 models discovered by EvoPrompting compared with those of our seed models and three baselines. The points approximate a Pareto frontier, below which each algorithm cannot improve on one dimension without hurting the other. EvoPrompting possesses the Pareto frontier closest to the origin, indicating that it finds more optimal models in terms of accuracy and size. In fact, many models in EvoPrompting's top 20 discovered models are orders of magnitude smaller than those of the other baselines, while still having lower test error. We also note that - on this task in particular - EvoPrompting excels especially at optimizing convolutional architectures. Many of the top 20 models are narrower and deeper convolutional architectures, with smaller strides, less padding, and no dense layers. These models consistently perform better than the shallower, denser, and wider convolutional architectures seen in earlier rounds of the model search. Another important aspect of a meta-learning algorithm is the relationship between the number of individuals evaluated and the maximum fitness observed so far, _i.e._ the sample efficiency. 
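For orientation, a seed model in the spirit of those described above (a small Flax CNN over the 40-dimensional MNIST-1D inputs) might look as follows; this is an illustrative stand-in, not the actual seed code from Appendix A.2.

```
import flax.linen as nn

class Model(nn.Module):
    # Illustrative CNN seed in the spirit of the hand-designed baselines:
    # 1D convolutions over the 40-dimensional inputs, followed by a linear readout.
    @nn.compact
    def __call__(self, x):                  # x: [batch, 40]
        x = x[..., None]                    # add a channel axis -> [batch, 40, 1]
        x = nn.relu(nn.Conv(features=16, kernel_size=(3,), strides=(2,))(x))
        x = nn.relu(nn.Conv(features=16, kernel_size=(3,), strides=(2,))(x))
        x = x.reshape((x.shape[0], -1))     # flatten
        return nn.Dense(features=10)(x)     # 10 output classes
```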
Neural architecture search can be an expensive process, with the most open-ended searches requiring the evaluation of trillions of individuals (Real et al., 2020). Thus, it is crucial to identify fit candidates using as few samples as possible. As such, we analyze the rate at which the fitness of the best-performing child model improves as a function of the number of child samples generated thus far, as shown in Figure 3. The random parents baseline plateaus the quickest, reaching a maximum fitness by the time approximately 200 individuals have been generated. Furthermore, the maximum fitness it reaches is significantly worse than that of the other experiments. On the other hand, EvoPrompting without prompt-tuning and normal EvoPrompting do not plateau until much later on. EvoPrompting's plateau is the highest, meaning that the individuals it discovers are on average fitter than those found by any of the other experiments. It is also evident from both Figures 1 and 3 that performance suffers when any individual component is removed. Interestingly, Figure 1 indicates that prompting with randomly selected parents combined with prompt-tuning is no more effective than naive prompting alone. This highlights the importance of selecting helpful in-context examples, particularly in a task for which we assume that less training signal exists in the pre-training data. However, selecting more fit models as in-context examples without prompt-tuning also does not perform nearly as well as our full method. Trajectory over meta-learning roundsWe also explored the trajectory of our meta-learning algorithm round over round, as shown in Figure 4. In general, we observe that EvoPrompting starts out further away from the origin (in round 0) and ends up closest to the origin in round 10, which signifies that it discovers - on average - the smallest and most accurate models in the last round. However, the search does not always yield improvements on both axes between consecutive rounds. In rounds 0-2 and 6-10, EvoPrompting improves test error while trading off model size. On the other hand, both dimensions are simultaneously improved upon in rounds 3-5. ### CLRS Although the MNIST-1D task offers an efficient and practical setting for evaluating a meta-learning algorithm, CNN architectures already perform fairly well on this task and neural image classification architectures have been extensively studied as a whole. There also exists the possibility that our LM has seen many convolutional architectures in its pre-training data. Instead, we turn to a different learning task and class of neural network architectures in order to assess whether our meta-learning framework generalizes to other tasks, datasets, and neural architectures. DatasetThe CLRS algorithmic reasoning benchmark (Velickovic et al., 2022) evaluates the ability of neural networks to learn algorithmic reasoning across a set of 30 classical algorithms covered in the _Introduction to Algorithms_ textbook by Cormen, Leiserson, Rivest and Stein (Cormen et al., 2009). Figure 4: The average model size and test error of the child models produced in each round of the model search. Data points closer to the origin represent rounds that yielded more “fit” models. Figure 3: Number of child models generated versus maximum fitness in sample, as estimated using 100 bootstrap samples of size 20 for each point along the x-axis. 
This benchmark is useful not only as a difficult logical reasoning task for neural networks, but also as a measure of a neural network's _algorithmic alignment_(Xu et al., 2020). In brief, algorithmic alignment refers to a model's ability to reason like an algorithm (_i.e._ using the computation graph for a task), rather than relying upon memorization or other less sample efficient learning strategies. Although a model can approximate an algorithm by pattern-matching against similar inputs or relying on other shortcuts, it cannot generalize to arbitrarily long inputs or edge cases without learning the computation graph underlying the algorithm. Accordingly, the CLRS benchmark represents the algorithms' inputs and outputs as graphs, and the steps of the algorithm as a _trajectory_ of operations over the input graph. This problem setup can be straightforwardly processed by graph neural networks, which is explored in Ibarz et al. (2022). They find that a Triplet-GMPNN model (a message-passing neural network (Gilmer et al., 2017) with gating and triplet edge processing) exhibits the best performance when trained and evaluated across all 30 algorithms at once. Meta-learning set-upSimilar to our MNIST-1D set-up, we use the AdamW optimizer to train each child model on a single NVIDIA Tesla P100 GPU. However, since most of the explored child models were much larger than the MNIST-1D models, we only trained each child model for 2000 steps. Anecdotally, we observed that the performance of different models often diverged by 2000 steps, which provided sufficient signal for the model search process. We otherwise followed the hyperparameters for single-task training in Ibarz et al. (2022) and evaluated models using validation accuracy. Unlike our MNIST-1D set-up, we only search over the triplet representations of a Triplet-GMPNN model (see Ibarz et al. (2022) for more details), rather than the entire graph processor. We also seed the search with nine different seed models - each a variant of a Triplet-GMPNN model with a different triplet representation. Each seed triplet representation incorporates a minor tweak of a single component of the original triplet representation designed by Ibarz et al. (2022). These include a fully-connected output layer, a sum aggregation, fully-connected node/edge/graph representations, a simple linear triplet representation, and a bilinear representation (Mnih and Hinton, 2007). All nine are implemented with Haiku (Hennigan et al., 2020), an object-oriented neural network library for Jax (see Appendix A.4 for the source code of the seed models.) Generalizing beyond image classification modelsWe search using EvoPrompting on 3 individual algorithms in the CLRS benchmark - the articulation points, Graham scan, and Kruskal's minimum spanning tree algorithms. We select these algorithms because our preliminary analyses with hand-designed architectures showed that they had the most headroom for improvement, although we found that the discovered architectures transfer well to other CLRS benchmark tasks as well (Appx. A.5). Our search results are shown in Figure 5. EvoPrompting continues to find models that are more "fit" than our other two baselines, though we observed that the results also show more variation than our results for MNIST-1D did. Analyzing newly discovered modelsOur search across triplet representations yielded several new designs that we sought to evaluate across all algorithms in the CLRS benchmark. 
Although these new models were discovered in model searches over single algorithms, they oftentimes generalized to other algorithms that were unseen during the model search. Figure 6 shows the trajectory of validation accuracy during training and Table 4 provides OOD accuracies for these models on a few select algorithms. (We defer the reader to Appendix A.3 for the full source code of each newly discovered model and Table A.5 for the full list of OOD accuracies for every algorithm in the CLRS benchmark.) Figure 5: Number of child models generated versus maximum fitness of top model seen so far (as estimated using 100 bootstrap samples of size 20 for each point along the x-axis) when searching over neural network models for three CLRS tasks. As mentioned in Section 4.2, these algorithms were selected because our preliminary analyses indicated that they had the most headroom for architectural improvements. We note that the model search suggested several simple but effective changes. For example, instead of taking the maximum of the triplet representation, the QuadNodeMinMax model uses quadruplet node representations instead of triplets, and it subtracts the minimum of the quad representation from the max instead. ConcatRep represents the node, edge, and graph representations as a concatenation of a projection feedforward layer, and MaxMean takes the maximum of the triplet representations prior to taking the mean and passing it through the output dense layer. Div2Mean scales each of the node representations by \(1/2\) and uses a mean aggregation of the triplet representations instead of the max aggregation. TanhExpandTriplets applies additional dimension expansion to the triplet representations and applies a hyperbolic tangent function after the max aggregation. See Appx. A.3 for the full code of each discovered model. Of the 5 newly discovered models that we chose to analyze, ConcatRep is the only one that increases model size. However, as shown in Table 4.2, ConcatRep frequently yielded improvements in OOD accuracy that far exceeded the percent increase in model size. For instance, on the heapsort algorithm ConcatRep increased OOD accuracy by 125.19% while only increasing model size by 6.68% over the baseline. The other four newly discovered models shown in Table 4.2 simultaneously improved OOD accuracy while decreasing model size on the articulation points, BFS, DFS, insertion sort, quicksort, and task scheduling algorithms. On the rest of the CLRS algorithms (Table A.5), our newly discovered models typically achieved OOD accuracy comparable to or better than the baseline, while maintaining similar model size. ## 5 Conclusion We have shown that embedding a pre-trained LM in an evolutionary algorithm significantly improves the LM's performance on the task of neural architecture design. Our approach has demonstrated success at not only optimizing convolutional architectures for the MNIST-1D task, but also at developing new kinds of GNNs for the CLRS algorithmic benchmark. This demonstrates: 1) using evolutionary techniques can vastly improve the few-shot/in-context capabilities of pre-trained LMs, and 2) EvoPrompting can discover novel, competitive, and even state-of-the-art architectures that optimize for both accuracy and model size. Furthermore, EvoPrompting is general enough to be easily adapted to search for solutions to other kinds of reasoning tasks beyond NAS. However, this study has its limitations. 
Firstly, we have not compared to typical NAS methods, as these require manually designed search spaces - a confounder that prevents fair comparison on these tasks if we designed these spaces ourselves. Secondly, our study has been orders of magnitudes smaller than previous works in terms of compute. Future work could scale up our approach to compare against more competitive large-scale architectures, such as Transformers. ## 6 Acknowledgements We thank Maarten Bosma, Kefan Xiao, Yifeng Lu, Quoc Le, Ed Chi, Borja Ibarz, Petar Velickovic, Chen Liang, Charles Sutton, and the Google Brain AutoML team for providing valuable discussions and feedback that influenced the direction of this project. We also thank the Google Student Researcher program for providing the resources and opportunities necessary for this project to take place. \begin{table} \begin{tabular}{l l c c c c c c} \hline \hline \multirow{2}{*}{CLRS task} & \multirow{2}{*}{Best performing model} & \multicolumn{3}{c}{Model Size \(\downarrow\)} & \multicolumn{3}{c}{OOD accuracy \(\uparrow\)} \\ & & \multicolumn{1}{c}{} & \multicolumn{1}{c}{Baseline} & \multicolumn{1}{c}{\% Change} & \multicolumn{1}{c}{Ours} & Baseline & \% Change \\ \hline Articulation Points & QuadNodeMinMax & 497969 & 531913 & -6.38\% & **93.46 \(\pm\) 1.77\%** & 88.32\(\pm\) 2.01\% & 5.82\% \\ BFS & MaxMean & 522931 & 523963 & -0.20\% & **99.99 \(\pm\) 0.01\%** & 99.73\(\pm\) 0.04\% & 0.26\% \\ Bubble Sort & ConcatRep & 568533 & 524477 & 8.40\% & **88.87 \(\pm\) 2.77\%** & 67.68\(\pm\) 5.50\% & 31.31\% \\ DFS & Div2Mean & 660158 & 661190 & -0.16\% & **68.14 \(\pm\) 1.38\%** & 47.79\(\pm\) 4.19\% & 42.58\% \\ Floyd Warshall & ConcatRep & 669145 & 625089 & 7.05\% & **61.43 \(\pm\) 0.79\%** & 48.52\(\pm\) 1.04\% & 26.61\% \\ Heapsort & ConcatRep & 703710 & 659654 & 6.68\% & **69.90 \(\pm\) 4.17\%** & 31.04\(\pm\) 5.82\% & 125.19\% \\ Insertion Sort & Div2Mean & 523445 & 524477 & -0.20\% & **89.47 \(\pm\) 2.57\%** & 78.14\(\pm\) 4.64\% & 14.50\% \\ Quicksort & Div2Mean & 524727 & 525759 & -0.20\% & **85.23 \(\pm\) 4.26\%** & 64.64\(\pm\) 5.12\% & 31.85\% \\ Task Scheduling & TanhExpandTriplets & 262333 & 262333 & 0.00\% & **88.23 \(\pm\) 0.44\%** & 87.25\(\pm\) 0.35\% & 1.12\% \\ \hline \hline \end{tabular} \end{table} Table 1: A comparison of OOD accuracy and model size (in number of parameters) of models newly discovered by EvoPrompting on select CLRS tasks where EvoPrompting has discovered more accurate architectures without large increases in model size, compared with the baseline model (the Triplet-GMPNN from Ibarz et al. (2022)). OOD accuracy numbers for the baseline model are from Ibarz et al. (2022). For the full table of results on all CLRS tasks, including accuracies of our own implementation of the Triplet-GMPNN, see Appendix A.5.
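As a rough schematic of the triplet-aggregation tweaks summarized in Table 1 (here Div2Mean versus a max-aggregating baseline), the sketch below uses hypothetical shapes and omits the learned per-input projections of the actual Triplet-GMPNN; see Appendix A.3 of the paper for the real implementations.

```
import jax.numpy as jnp

def triplet_messages(node, edge, graph, out_proj, variant="baseline"):
    # node: [N, H], edge: [N, N, H], graph: [H]; out_proj: [H, H] (all hypothetical).
    if variant == "div2mean":
        node = node / 2.0                                   # Div2Mean: halve node representations
    t = (node[:, None, None, :] + node[None, :, None, :] + node[None, None, :, :]
         + edge[:, :, None, :] + graph)                     # schematic triplet tensor [N, N, N, H]
    if variant == "div2mean":
        agg = jnp.mean(t, axis=2)                           # mean over the third node index
    else:
        agg = jnp.max(t, axis=2)                            # baseline: max aggregation
    return agg @ out_proj                                   # edge-wise messages [N, N, H]
```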
2309.16385
Radiative Corrections and the Renormalization Group for the Two-Nucleon Interaction in Effective Field Theory
We use a combination of effective field theory and the renormalization group to determine the impact of radiative corrections on the nucleon-nucleon potential and the binding energy of the deuteron. In order to do so, we present a modified version of pionless effective field theory inspired by earlier work in nonrelativistic quantum electrodynamics. The renormalization group improvement of the deuteron binding energy leads to a shift on the order of a few percent and is consistent with the experimental value. This work serves as a starting point for a dedicated study of radiative corrections in few-body systems relevant for precision tests of the Standard Model in an effective field theory framework.
Thomas R. Richardson, Immo C. Reis
2023-09-28T12:32:22Z
http://arxiv.org/abs/2309.16385v2
# Radiative corrections for few-nucleon systems ###### Abstract We use a combination of effective field theory and the renormalization group to determine the impact of radiative corrections on the nucleon-nucleon potential. In order to do so, we present a modified version of pionless effective field theory inspired by earlier work in nonrelativistic quantum electrodynamics. The renormalization group analysis of corrections in the deuteron indicate that radiative corrections generate \(1-2\%\) of the binding energy. This work serves as an important starting point for the study of radiative corrections in few-body systems relevant for precision tests of the Standard Model. _Introduction_--Modern experiments that rely on few-nucleon systems such as \(\beta\)-decay [1; 2], \(\mu\)-capture [3; 4], and muonic atom spectroscopy [5; 6; 7; 8; 9; 10] are reaching subpercent-level precision. Thus, these experiments can provide stringent tests for the Standard Model in low energy systems and possibly shed light on new physics. However, a correct interpretation of the experimental results requires a thorough theoretical understanding and delineation of the different effects involved. In particular, these experiments are sensitive to radiative corrections from electrodynamics. In the context of muonic atom spectroscopy, a subset of these effects has been the subject of significant theoretical interest [11; 12; 13]. It is customary to include radiative corrections through finite nuclear size effects and the exchange of two or more photons between the nucleus and the bound muon. The nuclear wavefunctions and currents, however, only include electromagnetic effects implicitly by fitting the parameters of the nuclear Hamiltonian and currents to data. Because of this, there is no way to distill how much of an observable comes from quantum chromodynamics (QCD) as opposed to electroweak interactions. In the case of \(\beta\)-decays, this topic has received renewed interest in recent years with respect to single-neutron \(\beta\)-decay [14; 15; 16; 17; 18; 19; 20; 21]. Interestingly, Ref. [15] finds a percent level shift in the nucleon axial coupling \(g_{A}\) due to radiative corrections that shifts the lattice QCD determination of \(g_{A}\) closer to the more precise experimental value. This represents a significant step towards disentangling the myriad of effects involved in neutron \(\beta\) decay in terms of Standard Model parameters. The goal of this work is to begin bridging the gap in few-nucleon systems with effective field theory (EFT) techniques. We use a combination of pionless effective field theory (\(\text{EFT}_{\not{\pi}}\)) [22; 23; 24; 25; 26; 27; 28; 29; 30] and the velocity renormalization group (vRG) [31] developed for nonrelativistic QED (NRQED) [32]. This theory is valid for momenta \(p\ll m_{\pi}\), the pion mass, which is in the regime relevant for many of these experiments. Certain aspects of this work can also be applied in chiral EFT [33; 34; 35; 36; 30], which has a larger radius of convergence. On the other hand, the entire framework can immediately be applied in an EFT for halo nuclei [37; 38; 39; 30] with trivial modifications. In this work, we calculate the leading \(O(\alpha)\) corrections, where \(\alpha=e^{2}/4\pi\) is the fine structure constant, to the neutron-proton potential. We derive a general form of the counterterms required for renormalization. 
The running couplings that follow from the vRG equations can in principle be embedded in _ab initio_ calculations using few- or many-body methods. To illustrate the impact of the running induced by the radiative corrections, we use renormalization group improved perturbation theory to calculate the deuteron binding energy and compare the result to the fixed order calculation. In order to generate numerical results, the vRG equations require a boundary condition to fix the final value of low energy coefficients (LECs). Ideally, the LECs in a nuclear EFT in the absence of electroweak effects would be determined by lattice QCD rather than data. However, available few-nucleon lattice calculations have greater than physical \(m_{\pi}\) and the uncertainties are quite large. In the meantime, we make use of the scattering parameters of the phenomenological Argonne \(v18\) (AV18) potential without electromagnetic interactions found in Table 8 of Ref. [40] (also see Ref. [41]). Here, we find that radiative corrections drive a percent level shift in the deuteron binding momentum (this corresponds to a few keV in the binding energy). This observation is consistent with the AV18 potential, but it recasts the main result in terms of a modern EFT with the full machinery of the renormalization group. _Reorganizing EFT_--Now, we recast \(\text{EFT}_{\not{\pi}}\) in the language of velocity NRQED (vNRQED) [31]. In \(\text{EFT}_{\not{\pi}}\) it is typical to count powers of the momentum \(p\), but in NRQED powers of velocity \(v=p/M_{N}\), where \(M_{N}\) is the nucleon mass, are counted. The relevant energy and momentum scales are then expressed as hard (\(m_{\pi}/M_{N},m_{\pi}/M_{N}\)), soft (\(M_{N}v,M_{N}v\)), ultrasoft (\(M_{N}v^{2},M_{N}v^{2}\)), and potential (\(M_{N}v^{2},M_{N}v\)). Power counting issues are avoided by splitting the photon into multiple modes describing the soft and ultrasoft regions and multipole expanding the ultrasoft modes [42; 43; 44; 45; 46; 47]. The potential photons can be integrated out because they are far off-shell; their effects are encoded in the coefficients of four-nucleon operators. The four-momentum of the nucleon is decomposed as \[P=(0,\mathbf{p})+(k_{0},\mathbf{k})\,, \tag{1}\] where \(\mathbf{p}\sim M_{N}v\) is the soft component of the momentum and \(k\sim M_{N}v^{2}\) is the residual four-momentum on the ultrasoft scale. The on-shell condition becomes \(k_{0}=\mathbf{p}^{2}/2M_{N}\). The nucleon field is now written as \(N_{\mathbf{p}}(x)\) where \(\mathbf{p}\) is a soft label and \(x\) is the Fourier conjugate of the residual momentum \(k\). The photon field is also split into a soft field \(A_{p}(k)\) with soft label four-momentum \(p\) and a residual four-momentum \(k\) and an ultrasoft field \(A(k)\). Conservation of energy excludes interactions of the type \(A_{q}N_{\mathbf{p}}^{\dagger}N_{\mathbf{p}}\), i.e., only vertices with two soft photon lines are allowed. The kinetic term of the photon field is split into \[\mathcal{L}\supset -\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\sum_{p}\left|p^{\mu}A_{p}^{\nu} -p^{\nu}A_{p}^{\mu}\right|^{2}. \tag{2}\] Reparameterization invariance implies that derivatives acting on the nucleon fields appear in the combination \(i\mathbf{p}+\mathbf{D}\), where \(\mathbf{p}\) acts on the soft label and \(\mathbf{D}\) is a covariant derivative acting on the residual piece of the nucleon field. 
In the kinetic term for the nucleon, the term \(\left(\mathbf{p}-i\mathbf{D}\right)^{2}\) should be expanded, which is equivalent to the multipole expansion, and only the \(\mathbf{p}^{2}\) should be kept in the leading order propagator. Therefore, the nucleon propagator will be \[S(k_{0},\mathbf{p})=\frac{i}{k_{0}-\frac{\mathbf{p}^{2}}{2M_{N}}+i\epsilon}\,. \tag{3}\] Terms containing factors of \(\mathbf{p}\cdot\nabla\) or \(\nabla^{2}\) are treated as perturbations. While EFT\({}_{\not{\!\!p}}\) is usually formulated in an isospin basis, we find it more convenient to study the ultrasoft renormalization of the potential in terms of physical neutron and proton fields \(n\) and \(p\), respectively. The LECs can of course be translated into the isospin basis after the renormalization has been carried out. In this basis, the proton-neutron potential is written as \[V_{pn}=\sum_{v=-1}\sum_{\mathbf{p}^{\prime},\mathbf{p}}V_{abcd}^{(v)}( \mathbf{p}^{\prime},\mathbf{p})p_{\mathbf{p}^{\prime},a}^{\dagger}p_{\mathbf{ p},b}n_{-\mathbf{p}^{\prime},c}^{\dagger}n_{-\mathbf{p},d}\,, \tag{4}\] where \(v\) tracks the order in the velocity expansion of each coefficient. The leading order (LO), next-to-leading order (NLO), and next-to-next-to-leading order (N\({}^{2}\)LO) potential coefficients in the S-wave are given by \[V_{abcd}^{(-1)} =C_{0,pn}^{(S_{1})}P_{ai,bj}^{(1)}+C_{0,pn}^{(S_{0})}P_{ai,bj}^{(0)}\,, \tag{5}\] \[V_{abcd}^{(0)} =\frac{1}{2}\left(\mathbf{p}^{\prime 2}+\mathbf{p}^{2}\right) \left[C_{2,pn}^{(S_{1})}P_{ai,bj}^{(1)}+C_{2,pn}^{(S_{0})}P_{ai,bj}^{(0)}\right]\,,\] (6) \[V_{abcd}^{(1)} =\frac{1}{4}\left(\mathbf{p}^{\prime 2}+\mathbf{p}^{2}\right)^{2} \left[C_{4,pn}^{(S_{1})}P_{ai,bj}^{(1)}+C_{4,pn}^{(S_{0})}P_{ai,bj}^{(0)}\right]\,. \tag{7}\] Note that our definition of \(C_{4}\) is a linear combination of \(C_{4}+\tilde{C}_{4}\) that appears in the literature (see for example Refs. [26; 27; 28]). The \(V^{(0)}\) potential should also be supplemented with a correction to the Coulomb potential that arises from a potential photon coupled to the proton charge and the neutron charge radius; however, this term is also suppressed by a factor of \(\alpha\). The neutron-neutron potentials have an identical structure with respect to the purely strong interactions. The part of the potential that arises from potential photon exchange is \(O(\alpha v^{2})\). The strong part of the proton-proton potential is also identical to the proton-neutron potential. However, we have to add the Coulomb potential to the leading order term. \[V_{abcd}^{(-1,pp)}\supset\sum_{\mathbf{p}^{\prime},\mathbf{p}}\frac{4\pi \alpha}{(\mathbf{p}^{\prime}-\mathbf{p})^{2}}\delta_{ab}\delta_{cd}. \tag{8}\] All together, the Lagrangian we will work with is \[\mathcal{L}= \sum_{\mathbf{p}}N_{\mathbf{p}}^{\dagger}\left(iD_{0}-\frac{ \left(\mathbf{p}-i\mathbf{D}\right)^{2}}{2M_{N}}\right)N_{\mathbf{p}}-\frac{1 }{4}F_{\mu\nu}F^{\mu\nu}\] \[+\sum_{\mathbf{p}}\left|p^{\mu}A_{p}^{\nu}-p^{\nu}A_{p}^{\mu} \right|^{2}-\sum_{\mathbf{p}^{\prime},\mathbf{p}}V(\mathbf{p}^{\prime}, \mathbf{p})\] \[-\frac{4\pi\alpha}{2M_{N}}\sum_{q,q^{\prime},\mathbf{p},\mathbf{ p}^{\prime}}\mathbf{A}_{q^{\prime}}\cdot\mathbf{A}_{q}N_{\mathbf{p}^{\prime}}^{ \dagger}QN_{\mathbf{p}}\] \[+\frac{e}{2M_{N}}\epsilon^{ijk}\left(\nabla^{j}A^{k}\right)\sum_{ \mathbf{p}}N_{\mathbf{p}}^{\dagger}\sigma^{i}\left[\kappa_{0}+\kappa_{1}\tau^ {3}\right]N_{\mathbf{p}}. \tag{9}\] Counting powers of velocity in diagrams is fairly straightforward. 
Nucleon and soft photon propagators count as \(1/v^{2}\) while ultrasoft photon propagators count as \(1/v^{4}\). The purely \(N\!N\) potentials follow the standard power counting of EFT\({}_{\not{\!\!p}}\), where \(Q\sim M_{N}v\). Finally, a soft loop has an integration measure that scales as \(v^{4}\), a potential loop scales as \(v^{5}\), and an ultrasoft loop scales as \(v^{8}\). In order to implement the vRG, we determine the \(O(\alpha/v)\) counterterms and obtain the soft and ultrasoft anomalous dimensions from [31; 48] \[\mu_{U}\frac{dV}{d\mu_{U}} =\gamma_{U}\,, \tag{10}\] \[\mu_{S}\frac{dV}{d\mu_{S}} =\gamma_{S}\,, \tag{11}\] where \(\mu_{S}\) is the scale introduced in dimensional regularization for the potentials and soft interactions and \(\mu_{U}\) is the scale introduced for the ultrasoft interactions. Through these scales we introduce the subtraction velocity \(\nu\) as \(\mu_{S}=M_{N}\nu\) and \(\mu_{U}=M_{N}\nu^{2}\) so that the vRG equation is \[\nu\frac{dV}{d\nu}=\gamma_{S}+2\gamma_{U}\,. \tag{12}\] n NRQED, this procedure is fairly easy because the fine structure constant \(\alpha\) does not run and the LO Coulomb potential is not renormalized [49]. Moreover, the \(\alpha\) and \(v\) expansions are identical since the average velocity in a Coulomb bound state is \(O(\alpha)\). As we will see below, because the \(\alpha\) and \(v\) expansions are not strictly linked in the nuclear EFT and because \(\alpha\) runs, there is a much richer structure that arises from the vRG. In the remainder of this work, we will focus mainly on the neutron-proton sector at \(O(\alpha/v)\). The neutron-neutron potential will be renormalized at higher orders in the \(v\) expansion. Renormalizing the proton-proton potential is much more involved. The Coulomb interaction will generate a nonzero soft anomalous dimension for \(C_{0}\)[50] leading to a faster running. Thus, we expect the vRG to lead to interesting results in this channel. _Renormalization_--The renormalization procedure in this theory is reminiscent of the role of radiation pions in EFT [51]. However, there are several important differences. First, we can treat both ultraviolet and infrared divergences in dimensional regularization, which simplifies the loop integrals. Second, the neutron has no coupling to \(A_{0}\) photons at the order we are working. With this set-up, the basic topologies that renormalize the potential are shown in Fig. 1. In Feynman gauge, the dominant contribution, which is \(O(\alpha/v)\), comes from an \(A_{0}\) photon coupled to the proton on both the incoming and outgoing lines with insertions of the \(C_{0}\) potential. Inside the ultrasoft loop, an arbitrary number of \(N\!N\) bubbles with only \(C_{0}\) vertices will contribute at the same order; therefore, the internal bubble diagrams must be summed to all orders. This infinite sum of diagrams often makes explicit renormalization of the series intractable. The argument in the case of radiation pions is that the bubble sum should be performed before the ultrasoft integration [51]. However, it should really be understood that the _finite_ parts of the bubbles are being summed, i.e., all divergences are canceled by the appropriate counterterms and the remainder is resummed. In this case, we can actually perform this renormalization to all orders in \(C_{0}\). In the bubble series, each graph is divergent. However, each graph with an odd number of bubbles is ultraviolet finite and the divergence is purely infrared. 
Each graph with an even number of \(N\!N\) bubbles has both ultraviolet and infrared divergences which must be separated. Specifically, a graph with \(l=2j\) bubbles, where \(j\) is an integer, requires a counterterm that renormalizes the \(2j\)-derivative potential. For example, the diagram with \(0\)\(N\!N\) bubbles renormalizes the \(V^{(-1)}\) potential while the diagram with \(2\)\(N\!N\) bubbles renormalizes the \(V^{(0)}\) potential. For arbitrary \(j\), the appropriate counterterm in modified minimal subtraction is \[\delta C_{2j}=\frac{\alpha C_{0}}{2\pi}\left(\frac{iM_{N}C_{0}}{4\pi}\right)^{ 2j}\frac{1}{j+1}\frac{1}{\epsilon}\,. \tag{13}\] For the LO potential, we find \(\gamma_{S,0}=0\) while \[\gamma_{U,0}=\frac{1}{2\pi}\alpha(M_{N}\nu^{2})C_{0}\,, \tag{14}\] which leads to the vRG equation \[\nu\frac{dC_{0}}{d\nu}=\frac{1}{\pi}\alpha(M_{N}\nu^{2})C_{0}. \tag{15}\] For \(j\geq 1\), we find \[\gamma_{S,2j} \supset\frac{\alpha}{\pi}\frac{2j}{j+1}C_{0}\left(\frac{iM_{N}C_ {0}}{4\pi}\right)^{2j}\,, \tag{16}\] \[\gamma_{U,2j} \supset\frac{\alpha}{2\pi}\frac{3+4j}{j+1}C_{0}\left(\frac{iM_{N} C_{0}}{4\pi}\right)^{2j}\,. \tag{17}\] There is also a contribution to the ultrasoft anomalous dimension of the \(2j\)-derivative operator from an insertion of the operator itself into the one-loop diagram, i.e., the first diagram on the right hand side of Fig. 1. This contribution is identical to that for \(C_{0}\) though only with \(C_{2j}\) appearing instead. Dressing the potential vertex with additional \(C_{0}\) interactions leads to diagrams of the same order in \(v\), which will also generate contributions to the soft anomalous dimension of higher-derivative operators. These contributions should still be suppressed relative to the anomalous dimensions presented here. Retaining only the leading contribution to the anomalous dimension leads to the vRG equation \[\nu\frac{dC_{2j}}{d\nu}=\frac{\alpha}{\pi}\frac{3+6j}{j+1}C_{0}\left(\frac{iM_ {N}C_{0}}{4\pi}\right)^{2j}. \tag{18}\] The solution for \(C_{0}\) is \[C_{0}(\nu)=C_{0}\left(\frac{m_{\pi}}{M_{N}}\right)\left(\frac{\alpha(M_{N}\nu ^{2})}{\alpha(m_{\pi}^{2}/M_{N})}\right)^{3/4}\,. \tag{19}\] Figure 1: \(O(\alpha/v)\) diagrams that contribute to the anomalous dimension of the potential. For \(C_{2}\) we find \[C_{2}(\nu) =C_{2}\left(\frac{m_{\pi}}{M_{N}}\right)\] \[-\frac{3}{2}\left(\frac{M_{N}}{4\pi}\right)^{2}C_{0}^{3}\left( \frac{m_{\pi}}{M_{N}}\right)\left[\left(\frac{\alpha(M_{N}\nu^{2})}{\alpha(m_{ \pi}^{2}/M_{N})}\right)^{9/4}-1\right]\,. \tag{20}\] For \(C_{4}\) we find \[C_{4}(\nu) =C_{4}\left(\frac{m_{\pi}}{M_{N}}\right)\] \[+\left(\frac{M_{N}}{4\pi}\right)^{4}C_{0}^{5}\left(\frac{m_{\pi}} {M_{N}}\right)\left[\left(\frac{\alpha(M_{N}\nu^{2})}{\alpha(m_{\pi}^{2}/M_{N })}\right)^{15/4}-1\right]\,. \tag{21}\] In Fig. 2, we show the running of the potential LECs normalized as \[\dot{C}_{2j}(\nu)=\frac{C_{2j}(\nu)}{C_{2j}(m_{\pi}/M_{N})}\,, \tag{22}\] where the normalization condition is discussed below in Eqs. (27) through Eq. (32). The zero-derivative potential runs very slowly while \(\dot{C}_{2}\) differs by several percent at from its value at the hard scale when \(\nu<0.6\). 
The running of \(C_{4}\) is significantly faster; it changes by nearly 50% when \(\nu\sim 0.6\) _Impact in the deuteron--_The two-point correlation function for the deuteron is given by [52] \[G(\bar{E})=\frac{\Sigma(\bar{E})}{1+iC_{0}\Sigma(\bar{E})}\,, \tag{23}\] where \(\Sigma\) is the self-energy of the deuteron and consists of irreducible diagrams in the sense that they do not fall apart when cut at a \(C_{0}\) vertex. The self-energy is expanded as \[\Sigma(\bar{E})=\sum_{j=1,k=0}^{\infty}\Sigma_{j,k}(\bar{E})\,, \tag{24}\] where \(j\) tracks the order in the velocity expansion and \(k\) tracks the order in the \(\alpha\) expansion, and \(\bar{E}\) is the center-of-mass energy. Corrections to the deuteron binding energy are calculated by expanding the two-point function as \[G(\bar{E})=\frac{\sum_{j=1,k=0}\Sigma_{j,k}}{1+iC_{0}\Sigma_{1,0}(\bar{E})} \left[1-\frac{iC_{0}\sum_{j=2,k=0}\Sigma_{j,k}}{1+iC_{0}\Sigma_{1,0}(\bar{E}) }+\cdots\right]\,. \tag{25}\] The perturbative corrections to the binding momentum, \(\gamma=\sqrt{M_{N}B}\) where \(B\) is the deuteron binding energy, are then given by the term proportional to \(i\left(4\pi/M_{N}\right)\left(1+iC_{0}\Sigma_{1,0}(\bar{E})\right)^{-2}\) (see for instance Ref. [53]). At N\({}^{2}\)LO, the binding momentum is \[\gamma=\frac{4\pi}{M_{N}C_{0}}+\left(\frac{4\pi}{M_{N}}\right)^{ 3}\frac{C_{2}}{C_{0}^{4}}+\left(\frac{4\pi}{M_{N}}\right)^{5}\left(\frac{C_{4 }}{C_{0}^{6}}+\frac{2C_{2}}{C_{0}^{7}}\right)\,. \tag{26}\] We calculate this shift using both fixed-order and renormalization group improved perturbation theory. We use as the boundary value (i.e. at \(\nu=m_{\pi}/M_{N}\)) of the vRG equations the scattering length and effective range of the AV18 potential [40] without the electromagnetic interaction. Electromagnetic corrections to the shape parameter \(P\) are also expected to be small, so we use the Nijmegen value [54]. In the deuteron channel, these are \[a_{np} =5.402\ \text{fm}\,, \tag{27}\] \[r_{np} =1.752\ \text{fm}\,,\] (28) \[P_{np} =0.040\ \text{fm}^{-3}\,. \tag{29}\] The LECs at \(\nu=m_{\pi}/M_{N}\) are given in terms of these parameters according to \[C_{0}(m_{\pi}/M_{N}) =\frac{4\pi a_{np}}{M_{N}}\,, \tag{30}\] \[C_{2}(m_{\pi}/M_{N}) =\frac{4\pi}{M_{N}}\frac{a_{np}^{2}r_{np}}{2}\,,\] (31) \[C_{4}(m_{\pi}/M_{N}) =\frac{4\pi}{M_{N}}a_{np}^{3}\left(\frac{1}{4}r_{np}^{2}+\frac{P _{np}}{a_{np}}\right)\,. \tag{32}\] The result for the deuteron binding momentum at NLO and N\({}^{2}\)LO is shown in Fig. 3. When the subtraction velocity is in the range \(\nu\in[0.04,0.07]\) (corresponding to momenta roughly in the range \([37,65]\) MeV), there is a shift in the binding energy of about \(1.8\%-3.3\%\) at NLO from radiative corrections. At N\({}^{2}\)LO, the corrections shift the binding energy by about \(1.3\%-2.3\%\). Figure 2: The running of the potential coefficients. The blue line is the running of \(\dot{C}_{0}\), the orange line is the running of \(\dot{C}_{2}\), and the green line is the running of \(\dot{C}_{4}\). Clearly, the corrections at N\({}^{2}\)LO are slightly smaller, but they are still at the few-percent level. Moreover, the corrections at N\({}^{2}\)LO cause the predicted binding energy to intersect the experimental value \(B=2.224575\) MeV around \(\nu\approx 0.06245\). _Summary_--In this work, we have performed the first analysis of explicit radiative corrections in the \(N\!N\) system. Using EFT techniques helps to organize the role of different strong and electromagnetic effects in a systematic expansion. 
Additionally, we performed the first direct application of the vRG in a nuclear EFT. This allows us to sum logarithms generated by renormalization into the potential coefficients. We then provided evidence that the vRG generates a percent-level shift in the binding energy of the deuteron. It is possible that similar corrections will play an important role in other light nuclei. This prediction will be more robust when reliable \(N\!N\) observables can be calculated in lattice QCD at the physical pion mass in order to match the couplings of this EFT. The ultrasoft renormalization of the leading-order potential in chiral EFT can be analyzed with similar techniques. First, the one-pion exchange potential is written as a four-fermion operator where the LEC is determined by the axial coupling \(g_{A}\) and the pion decay constant \(F_{\pi}\) at the breakdown scale of chiral EFT in the absence of electroweak effects. Then the tree-level potential is dressed with an ultrasoft photon that leads to an anomalous dimension similar to Eq. (14), except that \(C_{0}\) is replaced by \((g_{A}/F_{\pi})^{2}\) up to a factor of 2. Also, the contact potential proportional to \(C_{0}\) will acquire a nonzero soft anomalous dimension driven by pion exchange. Renormalizing the potential at higher orders will be significantly more difficult. The running couplings obtained in this work can also be incorporated into other EFT\({}_{\not{\pi}}\) calculations or in _ab initio_ methods for nuclear physics that make use of EFT\({}_{\not{\pi}}\) potentials derived with dimensional regularization. In this way, this renormalization group study can impact a variety of theoretical work relevant for ongoing experiments including \(\beta\)-decay, \(\mu\)-capture, and muonic atom spectroscopy. _Acknowledgements_.-- We would like to thank Sonia Bacca, Wouter Dekens, and Aneesh Manohar for interesting discussions. This work was supported in part by the Deutsche Forschungsgemeinschaft (DFG) through the Cluster of Excellence "Precision Physics, Fundamental Interactions, and Structure of Matter" (PRISMA\({}^{+}\) EXC 2118/1) funded by the DFG within the German Excellence Strategy (Project ID 39083149).
2309.11165
Assessment of Pre-Trained Models Across Languages and Grammars
We present an approach for assessing how multilingual large language models (LLMs) learn syntax in terms of multi-formalism syntactic structures. We aim to recover constituent and dependency structures by casting parsing as sequence labeling. To do so, we select a few LLMs and study them on 13 diverse UD treebanks for dependency parsing and 10 treebanks for constituent parsing. Our results show that: (i) the framework is consistent across encodings, (ii) pre-trained word vectors do not favor constituency representations of syntax over dependencies, (iii) sub-word tokenization is needed to represent syntax, in contrast to character-based models, and (iv) occurrence of a language in the pretraining data is more important than the amount of task data when recovering syntax from the word vectors.
Alberto Muñoz-Ortiz, David Vilares, Carlos Gómez-Rodríguez
2023-09-20T09:23:36Z
http://arxiv.org/abs/2309.11165v1
# Assessment of Pre-Trained Models Across Languages and Grammars ###### Abstract We present an approach for assessing how multilingual large language models (LLMs) learn syntax in terms of multi-formalism syntactic structures. We aim to recover constituent and dependency structures by casting parsing as sequence labeling. To do so, we select a few LLMs and study them on 13 diverse UD treebanks for dependency parsing and 10 treebanks for constituent parsing. Our results show that: (i) the framework is consistent across encodings, (ii) pre-trained word vectors do not favor constituency representations of syntax over dependencies, (iii) sub-word tokenization is needed to represent syntax, in contrast to character-based models, and (iv) occurrence of a language in the pretraining data is more important than the amount of task data when recovering syntax from the word vectors. ## 1 Introduction Large Language Models (LLMs) are the backbone for most NLP architectures. Their performance has not yet reached a plateau, and factors such as scale, language objective, token segmentation or amount of pre-training time - among many others - play a role in their capabilities. To shed light on what is being learned, work on interpretability explains what these models encode in their representational space. Authors have explored whether these models exhibit stereotypical biases (Nadeem et al., 2021), encode facts (Poerner et al., 2020) or capture structural knowledge in multi-modal environments (Milewski et al., 2022). Whether LLMs encode syntaxin their latent space has also been studied. In this respect, different _probing frameworks_(Kulmizev and Nivre, 2022; Belinkov, 2022) have been introduced to measure the syntactic capability of models, although authors such as Maudslay and Cotterell (2021) point out that we need to take this concept with caution, since they might not be completely isolating syntax. Still, interpretability work on parsing focuses on either multilingual and mono-paradigm setups, or English and multi-paradigm setups. But we are not aware of _multi-dimensional_ work. This relates to the problem of square one bias in NLP research (Ruder et al., 2022), that states that most work expands the current knowledge along just one dimension (e.g., a single language, or a single task). Related to our work, Kulmizev et al. (2020) study if LLMs showed preferences across two annotation _styles_: deep syntactic and surface-syntactic universal dependencies, but both schemes were dependency-based. Vilares et al. (2020) did study two different syntactic formalisms, dependencies and constituents, and used a sequence-labeling-like recovery framework, relying on the pretraining architectures to associate output vectors with syntactic labels. We will build on top of this framework. Yet, they only studied English, and their analysis focused on static vectors and early LLMs; apart from other limitations that we discuss later. ContributionWe move from square one bias in syntax assessment, and propose the first multi-paradigm, multilingual, recovery framework for dependency and constituent structures learned by LLMs. We select representative LLMs that vary in scale, language pretraining objectives, and token representation formats. 
We then study their capability to retrieve syntax information from the pretrained representations on a diverse set of constituent and dependency treebanks, that vary in factors such as language family or size, as well as the presence or absence of their languages among the pretraining data of the LLMs. The code is available at [https://github.com/amunozo/multilingual-assessment](https://github.com/amunozo/multilingual-assessment). ## 2 Related work There is a long-standing effort in the NLP community to model syntax, either as a final goal or as a way to model compositionality. Yet, the ways in which this has been pursued have evolved with time. Modeling syntax in the pre-neural times.Learning grammars through corpus-based approaches (Marcus et al., 1993; Collins, 1996; Charniak, 1997; Petrov and Klein, 2007) has been the dominating approach in the last decades. However, early models required extensive feature engineering to obtain competitive parsers. This suggested that support vector machines (SVMs) had severe limitations understanding language structure, and needed the help of parsing algorithms (Nivre, 2008; Martins et al., 2010), language-dependent features (Ballesteros and Nivre, 2012), or tree-kernels (Lin et al., 2014; Zhang and Li, 2009) to model syntax properly. Modeling syntax in neural times.With the rise of word vectors (Mikolov et al., 2013), LSTMs (Hochreiter and Schmidhuber, 1997) and Transformers (Vaswani et al., 2017), modeling structure has become less relevant to obtain a good performance, both for parsing and downstream tasks. For instance, while the classic parser by Zhang and Nivre (2011) used a rich set of features (including third-order, distance, and valency features, among others) to be competitive, the parser by Chen and Manning (2014) only needed 18 word and PoS tag features (and 6 dependency features) to obtain strong results, which was possible thanks to their reliance on pre-trained word vectors and neural networks. The need for feature engineering was reduced further with bidirectional LSTMs, e.g., Kiperwasser and Goldberg (2016) showed that four vectors corresponding to elements in the buffer and the stack sufficed to obtain state-of-the-art performance, while Shi et al. (2017) showed that competitive accuracies were possible with only two features. Modeling syntax in the era of language models.In the context of these (almost) end-to-end parsers performing very competitively without the need of explicitly modeling syntactic linguistic features, recent efforts have been dedicated to interpret to what extent syntax is encoded in the representational space of neural networks, and in particular of LLMs. Tenney et al. (2019) and Liu et al. (2019) proposed probing frameworks for partial parsing, in the sense that they tried to demonstrate that certain syntactic information, such as dependency types, was encoded in pre-trained models. Vilares et al. (2020) defined a probing framework for full dependency and constituent parsing. They cast dependency and constituent parsing as sequence labeling and associated output vectors with syntactic labels by freezing their models. Hewitt and Manning (2019) proposed a structural probing framework and identified that pre-trained models encoded a linear transformation that indicates the distance between words in a dependency tree. The framework was later upgraded to extract directed and labeled trees, while using fewer parameters (Muller-Eberstein et al., 2022). 
Hewitt and Liang (2019) pointed out that we need to be careful with probing frameworks, since the probe might be learning the linguistic task itself, instead of demonstrating the presence of the target linguistic property. For that, they recommend to use control experiments, and relied on control tasks, i.e., learning a random task with the same dimensional output space. Maudslay and Cotterell (2021) showed that semantic cues in the data might guide the probe and therefore they might not isolate syntax, although their experiments still outperformed the baselines. Muller-Eberstein et al. (2022) found the most suitable pre-trained LLMs to plug into a dependency parser for a given treebank. Particularly, they proposed to rank frozen encoder representations by determining the percentage of trees that are recoverable from them, and based on that ranking choose which LLM to plug. Focused on morphology, Stanczak et al. (2022) showed that subsets of neurons model morphosyntax across a variety of languages in multilingual LLMs. ## 3 Multilingual probing frameworks Let \(w\) = \([w_{1},w_{2},...,w_{n}]\) be an input sentence. We are interested in linear probing frameworks that can associate a sequence of word vectors \(\vec{w}\) = \([\vec{w}_{1},\vec{w}_{2},...,\vec{w}_{n}]\) to a given linguistic property \([p_{1},p_{2},...,p_{n}]\). For some properties, the mapping can be quite direct, such as for instance the case of part-of-speech (PoS) tagging (by putting a linear layer on top of \(w\) and outputting the PoS tag category), or lexical semantics (e.g. computing word vector similarity). We want an analogous mapping, but for multiple syntactic formalisms. In this case, the association is not trivial since syntactic parsing is a tree-based structured prediction problem. Also, we are interested in multilingual pre-trained models, which have gained interest in recent years. Then, the goal is to associate their word vectors to an estimate of to what extent characteristics of a given formalism are encoded in their representational space, and whether this can differ across dimensions such as tested models, formalisms, and treebanks. Linear probing framework for parsingWe take the study by Vilares et al. (2020) as our starting point. However, we first identify some weaknesses in their work: (i) it is limited to English, (ii) they do not give specific estimates of the amount of trees recoverable with respect to control experiments, and (iii) they only test one type of tree linearization. For the latter, the main motivation, in particular for the case of dependency parsing, was that the chosen linearization had performed the best in previous work (Strzyz et al., 2019) when training from scratch a transducer without pre-training. However, later work suggests that that is debatable: for instance, Munoz-Ortiz et al. (2021) show that different tree linearizations might be better suited to different languages, and Vacareanu et al. (2020)'s results indicate that other encodings worked better when pre-trained language models are used. To recover dependency and constituent structures, we will represent the trees using existing encodings for parsing as sequence labeling (Gomez-Rodriguez and Vilares, 2018; Strzyz et al., 2019). Under this configuration, the interaction between learning a model and searching for linguistic properties is now direct. We can use probing architectures that rely entirely on the pretrained representations, and simply add a linear layer on top to map continuous vectors to discrete labels. 
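To make the setup concrete, here is a minimal sketch (our own illustration, not the authors' released code) of such a probe: a pretrained multilingual encoder whose parameters may be frozen, with a single linear layer mapping each word's first-subtoken vector to a sequence-labeling parsing label. The class name, `first_subtoken_index` argument, and usage line are hypothetical.

```python
# Minimal sketch of the frozen-weights (frz) linear probe; illustrative only.
import torch
import torch.nn as nn
from transformers import AutoModel

class LinearProbe(nn.Module):
    def __init__(self, encoder_name: str, num_labels: int, freeze: bool = True):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        # rnd control: build the encoder from its config instead (AutoModel.from_config),
        # so the weights are random rather than pretrained.
        if freeze:  # frz setup; set freeze=False for the ftd upper bound
            for p in self.encoder.parameters():
                p.requires_grad = False
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask, first_subtoken_index):
        # hidden: (batch, num_subtokens, hidden_size)
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        # keep one vector per word: its first subtoken
        batch_idx = torch.arange(hidden.size(0)).unsqueeze(-1)
        word_vectors = hidden[batch_idx, first_subtoken_index]
        return self.classifier(word_vectors)   # (batch, num_words, num_labels)

# e.g. probe = LinearProbe("bert-base-multilingual-cased", num_labels=250)
```

Under the frz and rnd setups only `self.classifier` receives gradient updates; the ftd upper bound simply keeps the encoder trainable.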
We can expect that the capabilities of the output layer are not enough to learn the syntactic tasks at hand by themselves, so it must rely on the quality of the pretrained representations. Yet, we also will include control baselines that we will discuss later. Research questionsWe want to answer two questions: (i) how much syntax is recoverable from different LLMs? and (ii) how is it affected by aspects such as the models, the type of formalism, and the pretraining and assessment data? In what follows, we describe the sequence labeling encodings, both for dependency and constituent paradigms (SS3.1), and the specifics of the probing setup used for our experiments (SS3.2). ### Sequence labeling encodings of syntax Parsing as sequence labeling can be defined as learning a function \(f_{n}:V^{n}\to L^{n}\) to map a sequence of words into a sequence of linearized labels that can be decoded to fully recover a constituent or dependency tree. Here we are not interested in the parsers _per se_, but in whether the sequence-labeling encodings defined for them provide a simple, lossless representation of dependency and constituent trees that is useful for probing. In what follows, we briefly describe these representations. #### 3.1.1 Dependency parsing Dependencies between tokens can be encoded using labels of the form \((x_{i},l_{i})\), where \(x_{i}\) is a subset of the arcs related to the token \(w_{i}\), and \(l_{i}\) denotes the dependency relation (Strzyz et al., 2019). There are different ways of encoding \({x_{i}}\)1. We compare three families of linearizations (due to brevity, we refer to the references below for the details): Footnote 1: To ensure that the labels produce a valid tree, we apply the postprocessing described in the paper of each encoding. Head-selection(Spoustova and Spousta, 2010; Li et al., 2018; Strzyz et al., 2019). \(x_{i}\) encodes the dependency arc pointing directly to \(w_{i}\). This can be done using an absolute index or a relative offset computing the difference between \(w_{i}\)'s index and its head. We use (r\({}^{\text{h}}\)) encoding where the head of \(w_{i}\) is the \(x_{i}\)th word to the right, if \(x_{i}>0\), and the \(x_{i}\)th word to the left if \(x_{i}<0\).2 Footnote 2: There are other head-selection encodings where the offset depends on some word property, e.g., PoS tags like in (Vilares et al., 2020), but using these encodings can blur the probing, since we need to access such external information. **Bracketing-based**(Yli-Jyra and Gomez-Rodriguez, 2017; Strzyz et al., 2020). \(x_{i}\) encodes the arcs using strings of brackets to represent a subset of the incoming and outgoing arcs of \(w_{i}\) and its direct neighbors. We use a 2-planar bracketing Figure 1: Example of a dependency tree linearization. Dependency types are omitted. For 2p\({}^{\text{b}}\), the dot indicates no bracket in the first and/or second plane. encoding (2p\({}^{\text{b}}\)) that uses two independent planes of brackets to encode non-projective trees. Transition-based(Gomez-Rodriguez et al., 2020). \(x_{i}\) encodes a sub-sequence of the transitions that are generated by a left-to-right transition-based parser. Given a transition list \(t=t_{1},...,t_{m}\) with \(n\) read transitions, \(t\) is split into \(n\) sub-sequences such that the \(i\)th sub-sequence is assigned to \(w_{i}\). We use a mapping from the arc-hybrid algorithm (ah\({}^{\text{tb}}\)) (Kuhlmann et al., 2011). These mappings are implicit and often perform worse than more direct encodings, but they are learnable. 
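As a small worked example of the head-selection family (the toy sentence and helper functions below are ours, not from the paper), the relative-offset encoding \(r^{\text{h}}\) and its inverse take only a few lines; dependency relation labels \(l_{i}\) would simply be attached to each offset.

```python
# Minimal sketch of the relative head-selection encoding r^h (illustrative only).
# heads[i] holds the 1-based index of the head of word i+1; 0 marks the root.
def rh_encode(heads):
    # offset > 0: the head is the offset-th word to the right; offset < 0: to the left.
    # Real systems use a dedicated label for the root instead of a plain offset.
    return [h - i for i, h in enumerate(heads, start=1)]

def rh_decode(offsets):
    return [i + o for i, o in enumerate(offsets, start=1)]

# Toy 4-word sentence with heads (2, 0, 4, 2): word 2 is the root, word 3 attaches to word 4.
heads = [2, 0, 4, 2]
offsets = rh_encode(heads)          # [1, -2, 1, -2]
assert rh_decode(offsets) == heads
```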
These encodings produce labels with different information. Following Figure 1, for \(w_{2}\) (painting), the \(2p^{b}\) encoding states that the previous word \(w_{1}\) has one incoming arc from the right ("\(<\)" symbol, but it does not say from where, as that information is encoded in other labels) and that \(w_{2}\) has one outgoing arc to the left (") symbol, but it does not specify where). For the transition-based encoding, the mapping is less straightforward across words, but still connected to them. For instance, for \(w_{1}\) ('This') the label indicates that the \(w_{1}\) has no connection to \(w_{0}\), that it is a dependent of \(w_{1}\), and that it has no children. The motivation to compare encodings is to test: (i) the consistency of the framework, i.e., if trends across LLMs remain, and (ii) to see what information is easier to recover when the LLM weights are frozen. #### 3.1.2 Constituent parsing We here use the encoding approach by Gomez-Rodriguez and Vilares (2018), which encodes common levels in the tree between pairs of tokens.3 The labels are of the form \((n_{i},c_{i},u_{i})\). The element \(n_{i}\) encodes the number of tree levels that are common between \(w_{i}\) and \(w_{i+1}\), computed as the difference with respect to \(n_{i-1}\). The element \(c_{i}\) encodes the lowest non-terminal symbol that is shared between those two words. \(u_{i}\) encodes the leaf unary branch located at \(w_{i}\), if it exists. An example is shown in Figure 2. Footnote 3: To our knowledge, when we did the experiments, this encoding (together with variants) was the only available family of sequence-labeling encodings for constituency parsing. Contemporaryaus to the end of this work, another family of encodings – based on the tetra-tagging (Kitaev and Klein, 2020) - has been proposed and implemented as a pure tagging approach (Amini and Cotterell, 2022). ### Probing architecture We use a 1-layered feed-forward network on top of the LLMs to predict the labels. We propose three setups (training hyperparameters are detailed in Appendix A): Frozen weights (frz)The LLM weights are frozen and only the weights of the linear output layer are updated during fine-tuning. Random weights (rnd)Only the weights of the linear classifier layer are updated, but the weights of the encoders are randomized. We aim to prevent misleading conclusions in the hypothetical case that the linear layer can learn the mapping itself, i.e., we use this setup as a lower bound baseline. It is also a control experiment, as the difference between the results of this setup and the frz setup would be the measure we are looking for to estimate the amount of syntax information encoded in the representational space of pre-trained LLMs. Fine-tuned weights (ftd)A fine-tuned LLM where all weights are updated, i.e., this setup is used as an upper bound baseline. ### Multilingual Language Models The method here proposed is model-agnostic. Our aim is not to obtain the highest results or to use the largest LLM. We select a few LLMs that are representative and runnable with our resources: mBERT(Devlin et al., 2019) It uses WordPiece tokenization. While subword tokenizers are effective with representative splits, they yield suboptimal subtokens for low-resource languages (Agerri et al., 2020; Virtanen et al., 2019), as wrong subtokens will not encode meaningful information. mBERT is pretrained on 104 languages from the dump of the largest Wikipedias. Figure 2: Example of a constituent tree linearization. 
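As a concrete illustration of the subword bookkeeping this entails (our own snippet; the English toy sentence and variable names are hypothetical), mBERT's WordPiece tokenizer exposes the word-to-subtoken alignment from which the probe picks each word's first subtoken:

```python
# Illustrative only: WordPiece segmentation and the first-subtoken alignment
# used to assign one parsing label per input word.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
words = ["This", "is", "a", "painting"]
enc = tok(words, is_split_into_words=True)

print(tok.convert_ids_to_tokens(enc["input_ids"]))   # includes [CLS]/[SEP] and any '##' pieces

first_subtoken = {}
for pos, wid in enumerate(enc.word_ids()):            # word index of each subtoken (None for specials)
    if wid is not None and wid not in first_subtoken:
        first_subtoken[wid] = pos
print(first_subtoken)                                  # e.g. {0: 1, 1: 2, 2: 3, 3: 4}
```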
xlm-roberta(Conneau et al., 2020) A multilingual LLM trained as RoBERTa Liu et al. (2019). It has the same architecture as BERT, but only pretrained on the masked word prediction task and uses a byte-level BPE for tokenization. It has been pretrained on 2.5TB of filtered CommonCrawl data that contains text in 100 languages (XLM-100), and for longer time than mBERT. canine (-c and -s)Clark et al. (2022) It uses char-based tokenization, which is believed to perform better in languages that are challenging for subword tokenization, such as those with vowel harmony. It eliminates the issue of unknown tokens. It is pre-trained on masked language modeling and next sentence prediction on the same data as mBERT: canine-c is pretrained using a char-level loss, while canine-s includes a previous subword tokenization to predict masked subword tokens. In all models, labels are first broken down into subtokens before being processed by the LLMs to assign them to the \(n\) input tokens. The classifier layer then assigns a label to each subtoken (i.e. subword for mBERT and xlm-roberta and character for canine). Then, we select the label assigned to the first sub-element, which is a common approach. ## 4 Methodology and Experiments Data for dependency parsingFor the assessment of dependency structures, we selected 13 Universal Dependencies (UD 2.9; Nivre et al., 2020) treebanks from different language families and with different amounts of annotated data. Although mBERT, xlm-roberta, and canine have been pre-trained on different (multilingual) crawled datasets, we select treebanks whose languages are either present in all our LLMs' pretraining data or in none of them (although presence proportions might vary in the case of xlm-roberta). For more details, see Table 1. Data sizes have been obtained from Wu and Dredze (2020) for Wiki-100 and Conneau et al. (2020) for XLM-100. Data for constituent parsingWe assess constituent structures on the PTB Marcus et al. (1993), the CTB Xue et al. (2005), and 8 constituent treebanks from the SPMRL shared task Seddah et al. (2014)4, whose languages are shown in Table 2. Footnote 4: We do not have the license for the Arabic treebank. Language disparityWe use (mostly) different languages for each paradigm. For constituent treebanks, we only have access to rich-resource languages, so we prioritize diversity for dependencies. Comparing languages across syntax paradigms is not particularly useful, due to varying metrics, annotation complexity, and treebank comparisons. Instead, we compare error reductions against control models to estimate the recoverability of specific syntactic formalisms by an LLM (see SS5). MetricsFor dependency parsing, we use Labeled Attachment Score (LAS). For constituent parsing, we use the labeled bracketing F1-score. ## 5 Results We present the assessment for dependency structures in SS5.1, and for constituent structures in SS5.2. ### Dependency parsing results We break down the results comparing frozen vs: (i) random, and (ii) fine-tuned weights. clearly surpasses the rnd baseline, i.e., the control experiment. The results suggest that under the frozen setups, mbert is better than xlm-roberta at recovering dependencies, although pre-trained xlm-roberta models are usually better at downstream tasks Liu et al. (2019). The ranking of the LLMs is stable across treebanks. The LAS scores across encodings are in a similar range, and the average LAS across different encodings is very similar too (bottom row in Table 3). 
On the other hand, the results for canine do not surpass the lower bound baseline in most cases. This is unlikely to be because of a bad fitting, since the random weights baselines perform almost the same across pre-trained models, encodings and treebanks. Also, while canine-s outperforms the random baseline for the highest-resourced languages, canine-c underperforms it for all languages except for Chinese. For a clearer picture, Figure 3 shows the relative LAS error reductions \(\epsilon_{LAS}(\mathsf{rnd},\mathsf{frz})\) for the 2-planar encoding, sorted by the size of the training set used for the probe. Next, we focus on \(2\mathrm{p}^{\mathsf{b}}\), as previous work has demonstrated its robustness across various configurations (Muñoz-Ortiz et al., 2021; Strzyz et al., 2019, 2020).5 For larger treebanks, whose languages are supported by the LLMs, the error reductions between the frz and rnd setups are large, showing that the LLMs encode dependency structures in their representational space to some extent. For languages that are not supported by the LLMs, the error reductions are clearly smaller. This happens for low-resource treebanks, in which only mBERT is able to obtain improvements over the rnd baseline, but also for high-resource ones, such as Ancient Greek (the largest tested treebank), suggesting that the treebank size is not a key factor for the probes (we discuss this in detail in §5.3). Footnote 5: The trends for the other encodings are similar and they can be seen in Appendix B. Frozen (frz) vs fine-tuned (ftd) setup. Table 4 shows the scores for the fine-tuned models. In this case, xlm-roberta sequence labeling parsers obtain a larger average error reduction, while mbert obtains slightly better results for the ftd setup. The results show that even if dependency structures can be recovered under the frz setup, fine-tuning the whole architecture gives significant improvements. Also, the performance across the board for the fine-tuned models is very competitive for all treebanks supported by the LLMs. Note that even if such results lag below the state of the art (not the target of our work), we rely exclusively on multilingual pretraining vectors, without any powerful parser decoder, such as Kitaev and Klein (2018) for constituent parsing, or Dozat et al. (2017) for dependencies. Figure 3: \(\epsilon_{LAS}(\mathsf{rnd},\mathsf{frz})\) on the dependency treebank test sets for the \(2\mathrm{p}^{\mathsf{b}}\) encoding.
Table 3: LAS for the test sets of the dependency treebanks under the rnd and frz setups, for each LLM (mBERT, xlm-roberta, canine-c, canine-s) and each encoding (\(2\mathrm{p}^{\mathsf{b}}\), \(\mathrm{ah}^{\mathsf{tb}}\), \(\mathrm{r}^{\mathsf{h}}\)). Encoding comparison. Results from Table 3 show that the three encodings are able to recover a similar amount of syntax. It is worth noting that, although \(r^{h}\) performs better for the \(rnd\) setup, this does not translate into a better recovery from \(frz\) representations. It also seems that \(2p^{b}\) recovers more syntactic information in higher-resourced setups (i.e., Bulgarian), while \(r^{h}\) and \(ah^{tb}\) perform better in lower-resourced configurations (i.e., Skolt Sami, Ligurian). Dependency displacements. Figure 4 shows the performance across arcs of different length and direction for the \(frz\) models with the \(2p^{b}\) encoding over 4 languages: the one with most left arcs (Turkish), the one with most right arcs6 (Vietnamese), and two balanced ones (Basque and Welsh). The multilingual LLMs capture the particularities of languages (for the case of the \(Welsh_{CCG}\) treebank, even if it is balanced in terms of the number of left/right arcs, left arcs have an average length of \(1.6_{\pm 1.8}\) units while right arcs average \(3.9_{\pm 4.9}\) units). Also, the LLMs keep the trends across displacements, i.e., no LLM notably changes its expected performance with respect to the others for a specific subset of dependencies. Footnote 6: Guajajara is excluded due to dataset size limitations. ### 5.2 Constituent parsing results We break down the results comparing frozen _vs_: (i) random, and (ii) fine-tuned weights. **Frozen (\(frz\)) _vs_ random weights (\(rnd\)) setups** Table 5 shows the bracketing F1 score across treebanks and the encodings for the two setups. The trend from dependency parsing remains: mBERT outperforms \(xlm\)-roberta for all languages, while canine-s outperforms canine-c.
In this case, canine-s improves over the random baseline for all treebanks, while canine-c only outperforms the random baseline for 3 out of 10 treebanks, which suggests the difficulties these character-level language models have in modeling syntax, even if they perform well on other downstream tasks. The exceptions are Korean, German and Chinese. Chinese was also an exception in the case of dependency parsing, so an explanation might be that its writing system encodes more information per character than other languages: Chinese characters represent a whole morpheme, being more similar to a subword token, while Korean Hangul encodes a syllable per character, instead of a single sound as in the alphabets of the other languages tested. Figure 5 shows the error reductions across the board, sorted by the size of the training data used for the probing. In this case, all tested languages are supported by the LLMs, but there are large differences in the size of the training data (e.g., Swedish with \(5\,000\) sentences _vs_ German with \(40\,472\) sentences). However, we do not see an increase in error reduction when the size of training data grows. Frozen (frz) vs fine-tuned (ftd) setups. Table 6 compares the frozen and fine-tuned setups, and the behaviors are similar to those obtained in the case of dependency parsing, except for what looks like some empirical outlier, e.g., the fine-tuned mBERT for Hebrew. Hebrew also obtains the lowest error reductions for all LLMs. Span lengths. Plotting the F1-score for each span length is the rough alternative to dependency displacements in the context of constituent parsing. In Figure 6 we again show specific examples for some of the studied languages: the most left-branching language (Korean), two balanced ones (Basque and Hungarian), and the most right-branching one (English). Similarly to the case of dependency parsers, the trends across models persist across different span lengths. They show that, regarding LLMs, mbert obtains the highest F-score for longer spans, while xlm-roberta shows great differences between shorter and longer spans. The canine models perform worse for all lengths. ### Discussion We now discuss the main insights and potential limitations of the proposed assessment framework. Pretraining data _versus_ Assessment data. An interesting question that arises from multilingual recovery is whether the probe is able to recover the trees due to the size of the training data used for the assessment, although in theory it should be hard for an initially clueless classifier (the random baseline) to learn this by itself. The experiments show evidence that the size of the training data is not a primary factor in multilingual, multi-formalism linear probing as sequence labeling. For constituent parsing, we observed that larger treebanks did not come with an increment in the error reductions between the frozen and the random setups, and that the control experiment can thus be used to give an estimate of the amount of structure recoverable from pretrained representations. Similarly, in the context of dependency parsing, we encountered an analogous situation.
Despite the existence of treebanks for languages unsupported by the LLMs, spanning both large (Ancient Greek) and small treebanks (Skolt Sami or Bhojpuri), we observe that treebank size does not significantly impact the reduction in errors between the frozen and random setups. Either with big or small data, the error reduction between the random and the frozen models is clearly lower than for the treebanks where the language is supported by the LLMs. Among rich-resource treebanks, the size of the data does not have a great influence on the error reductions between the random and frozen weights setups, suggesting that dataset size does not influence the estimates of the dependency structure that are recoverable from the representations. Table 6: F-score for the test sets of the constituent treebanks, LLMs analyzed for the ftd _vs_ the frz setup. Figure 5: \(\epsilon_{F1}(\mathsf{rnd},\mathsf{frz})\) on the constituent test sets. Table 5: F-score for the test sets of the constituent treebanks. LLMs analyzed for the frz and rnd setups. Figure 6: Average F1 score for different span lengths and LLMs.
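The comparisons above all hinge on the relative error reduction \(\epsilon(\mathsf{rnd},\mathsf{frz})\). The small helper below (our own sketch, assuming the standard definition of relative error reduction over LAS or bracketing F1 scores) makes the quantity explicit; the scores in the usage line are made up for illustration.

```python
# Relative error reduction between the random-weights control and the frozen probe,
# assuming err = 100 - score and eps = (err_rnd - err_frz) / err_rnd.
def error_reduction(score_rnd: float, score_frz: float) -> float:
    err_rnd, err_frz = 100.0 - score_rnd, 100.0 - score_frz
    return 100.0 * (err_rnd - err_frz) / err_rnd

# Hypothetical scores: a frozen probe at 60.0 vs a random control at 20.0
# removes half of the control's errors.
print(error_reduction(20.0, 60.0))  # 50.0
```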
Language model differences. The results on the tested LLMs suggest that subword tokenization is necessary to represent syntax, in contrast with token-free models, even if these can later perform well on downstream tasks that require compositionality. Particularly, not only do subword-based models outperform char-based ones, but also canine-s, which is trained using a subword loss even though it is a char-level model, performs significantly better than canine-c. It is noteworthy that xlm-roberta generally outperforms mBERT in most downstream tasks, including parsing, as previous studies showed (Conneau et al., 2020), and in our fine-tuned results it performs on par with mBERT on dependency parsing (Table 4) and outperforms it in constituency parsing (Table 6). Yet, for the frozen weights setup, mBERT's representations recovered syntactic structure slightly but consistently better. This suggests that the improvements in how xlm-roberta was trained with respect to mBERT, e.g., training for a longer time or on more data, are not key factors in encoding syntax better. Additionally, based on our experiments, it appears that mBERT demonstrates a certain level of proficiency in recovering syntax information for the smallest treebanks, particularly for languages not included in the pretraining data (such as Ligurian and Kiche). This suggests a capacity to extend its syntactic knowledge to previously unseen languages, albeit to a limited extent, unlike the other models. Syntactic formalism. Previous studies (e.g., Vilares et al. (2020)) hypothesized that pre-trained word vectors might fit constituent-based tasks better than dependency-based ones, since the masked language objective links better with the former formalism, i.e., when a model is learning to unblur a masked token, the constituent structure is to some extent implicit (e.g., an adjective is missing between the determiner and the noun, forming a noun phrase), while dependencies are less obvious. We could not find clear evidence of this. Although some of the frz models are unable to surpass the rnd baseline in the case of dependencies (while this is not the case for constituents), these instances are languages that are not present in the pretraining data, except for the canine models. ## 6 Conclusion We proposed a sequence-labeling framework to recover multi-formalism syntactic structures from multilingual LLMs. By mapping syntactic trees to labels we associated output word vectors to labels that encode a portion of the tree, while using a single assessment framework for both constituent and dependency structures. We compared three popular multilingual language models. The results show that subword LLMs can recover a percentage of these structures. We evaluated the outcomes by calculating the reduction in errors compared to control models, aiming to gauge the extent to which an LLM can recover specific syntactic structures. The assessment appears reliable and unaffected by variables like the training set's size employed for probing, highlighting that pretraining data is an important factor for recoverability. Last, we found no clear evidence that contextualized vectors encode constituent structures better than dependencies (nor the opposite). ### Limitations Physical resources. We did not consider larger language models, as we do not have access to the necessary computational resources to run them, which limits the scope of our study.
We only had access to 2 GeForce RTX 3090, having a total GPU memory of 48 GB, insufficient for fine-tuning many LLMs over different treebanks and formalisms, as in this work. Language diversityThe constituent treebanks used are all from languages that are relatively rich-resource and are present on the pretraining data of the LLMs. To the best of our knowledge there are no available constituent treebanks from lower-resource languages that are also absent in multilingual LLMs. In consequence, we could not test the effect of absence of pretraining data in order to see if the trends obtained in dependency treebanks prevail here. In addition, for dependency parsing, even a large multilingual resource like Universal Dependencies only has data for about 100 languages, a tiny fraction of the 7 000 existing human languages. InterpretationAs mentioned in the introduction, we have to be careful when dealing with probing frameworks. Although we developed solid experiments, and also included control experiments, syntax knowledge is hard to isolate, measure and interpret, so we have tried to be careful with our conclusions. ## Acknowledgments We acknowledge the European Research Council (ERC), which has funded this research under the Horizon Europe research and innovation programme (SALSA, grant agreement No 101100615), ERDF/MICINN-AEI (SCANNER-UDC, PID2020-113230RB-C21), Xunta de Galicia (ED431C 2020/11), grant FPI 2021 (PID2020-113230RB-C21) funded by MCIN/AEI/10.13039/501100011033, and Centro de Investigacion de Galicia "CITIC", funded by the Xunta de Galicia through the collaboration agreement between the Conselleria de Cultura, Educacion, Formacion Profesional e Universidades and the Galician universities for the reinforcement of the research centres of the Galician University System (CIGUS).
2309.00068
The critical weighted inequalities of the spherical maximal function
Weighted inequality on the Hardy-Littlewood maximal function is completely understood while it is not well understood for the spherical maximal function. For the power weight $|x|^{\alpha}$, it is known that the spherical maximal operator on $\mathbb{R}^d$ is bounded on $L^p(|x|^{\alpha})$ only if $1-d\leq \alpha<(d-1)(p-1)-d$ and under this condition, it is known to be bounded except $\alpha=1-d$. In this paper, we prove the case of the critical order, $\alpha=1-d$.
Juyoung Lee
2023-08-31T18:14:55Z
http://arxiv.org/abs/2309.00068v1
# The critical weighted inequalities of the spherical maximal function ###### Abstract. Weighted inequality on the Hardy-Littlewood maximal function is completely understood while it is not well understood for the spherical maximal function. For the power weight \(|x|^{\alpha}\), it is known that the spherical maximal operator on \(\mathbb{R}^{d}\) is bounded on \(L^{p}(|x|^{\alpha})\) only if \(1-d\leq\alpha<(d-1)(p-1)-d\) and under this condition, it is known to be bounded except \(\alpha=1-d\). In this paper, we prove the case of the critical order, \(\alpha=1-d\). Key words and phrases:Spheircal maximal function, weighted inequality 2020 Mathematics Subject Classification: Primary 42B25, Secondary 35S30 ## 1. Introduction Let \(d\geq 2\) and \(d\sigma\) be the normalized Lebesgue measure on \(\mathbb{S}^{d-1}\). Then, for \(t>0\), we define the spherical average of \(f\) by \[A_{t}f(x)=\int_{\mathbb{S}^{d-1}}f(x-ty)d\sigma(y).\] It is very clear that \(A_{t}\) is a bounded operator on \(L^{p}\) for any \(1\leq p\leq\infty\). When we take the supremum on \(t>0\), we get the following (global) spherical maximal function: \[Mf(x)=\sup_{t>0}|A_{t}f(x)|.\] Stein [16] proved that \(M\) is bounded on \(L^{p}\) if and only if \(p>d/(d-1)\) when \(d\geq 3\). Later, Bourgain [2] obtained that \(M\) is bounded on \(L^{p}\) if and only if \(p>2\) when \(d=2\). A few years later, Mockenhaupt, Seeger, and Sogge [11] proved the same result using the local smoothing estimate. So far, many researches have been devoted to studying the spherical maximal function in various settings. For example, we can define the local spherical maximal function as follows: \[M_{c}f(x)=\sup_{1<t<2}|A_{t}f(x)|.\] We have so called \(L^{p}\)-improving phenomenon for \(M_{c}\), which means that \(M_{c}\) is bounded from \(L^{p}\) to \(L^{q}\) for some \(p<q\). Schlag, Sogge [15] obtained the sharp range of \(p,q\) except for some boundary points, and S. Lee [10] obtained the result on the boundary lines except the case \((p,q)=(5/2,5)\). In this paper, we are interested in the weighted inequalities of \(M\). This was first considered by Duoandikoetxea, Vega [5]. They proved that \(M\) is bounded on \(L^{p}(|x|^{\alpha})\) if \(p>d/(d-1)\) and \(1-d<\alpha<(d-1)(p-1)-1\). This result is sharp except the critical case, \(\alpha=1-d\). For the critical weight \(|x|^{1-d}\), it is known that \(M\) is bounded on \(L^{p}_{rad}(|x|^{1-d})\) when \(1<p\leq\infty\) for \(d\geq 3\), and when \(2<p\leq\infty\) for \(d=2\) (see [4]). Here, \(L^{p}_{rad}(|x|^{1-d})\) is the subspace of \(L^{p}(|x|^{1-d})\) consists of radial functions. Recently, Nowak, Roncal, Szarek [12] obtained sharp conditions for the spherical mean Radon transform on radial functions to be bounded on weighted spaces. Meanwhile, Lacey [8] proved that sparse bounds on the spherical maximal function implies some weighted inequalities. In [8], the author tried to characterize the class of weight which makes the spherical maximal function bounded, while it is mentioned that \(A_{p}\)-weight is not a correct tool to characterize such weight (see also [3]). The bilinear weighted inequalities of the spherical maximal function is also considered by Roncal, Shrivastava, Shuin [13]. The following is the main theorem which says that \(M\) is bound on the critical weighted space \(L^{p}(|x|^{1-d})\). **Theorem 1.1**.: 1. _Let_ \(d=2\) _Then,_ \(M\) _is bounded on_ \(L^{p}(|x|^{-1})\) _if and only if_ \(p>2\)_._ 2. _Let_ \(d\geq 3\)_. 
Then,_ \(M\) _is bounded on_ \(L^{p}(|x|^{1-d})\) _when_ \(p\geq 2\)_._ We believe that when \(d\geq 3\), \(M\) is also bounded on \(L^{p}(|x|^{1-d})\) when \(d/(d-1)<p<2\). To prove Theorem 1.1, we first consider \(M_{c}\) and decompose it using the Littlewood-Paley decomposition. To recover the estimate for \(M\), we use a modification of the standard argument (see, for example, [2], [14]). For the purpose, we define the Littlewood-Paley decomposition. Let \(\chi\) be a smooth function such that \(\operatorname{supp}\chi\subset(1-10^{-2},2+10^{-2})\) and \(1=\sum_{j\in\mathbb{Z}}\chi(s/2^{j})\) for all \(s\neq 0\). We define the projection \(\mathcal{P}_{j}\) by \(\widehat{\mathcal{P}_{j}f}(\xi)=\widehat{f}(\xi)\chi_{j}(|\xi|)\) where \(\chi_{j}(x)=\chi(s/2^{j})\). Then, we define \[M_{j}f(x)=M_{c}\mathcal{P}_{j}f(x).\] Also, we denote \(\tilde{\chi}:\mathbb{R}^{d}\to\mathbb{R}\) by a smooth function which is supported on \(\{x:1/100<|x|<100\}\) and equals to \(1\) when \(1/10<|x|<10\), and then \(\tilde{\chi}_{j}(x)=\tilde{\chi}(x/2^{j})\). We obtain sharp estimates of \(M_{j}\) as follows. **Proposition 1.2**.: _Let \(d\geq 2\), \(j,k\geq 0\), and \(d/(d-1)<p<\infty\). Then, for \(\delta=\delta(d,p)>0\), we have_ \[\|\chi_{-k}(|\cdot|)M_{j}f\|_{L^{p}(|x|^{1-d})}^{p}\] \[\lesssim 2^{-\delta|j-k|}\|\tilde{\chi}\mathcal{P}_{j}f\|_{L^{p}(| x|^{1-d})}^{p}+\sum_{m\in\mathbb{Z}}2^{-Nj-(d-1)|m|-\delta k}\|\chi_{m} \mathcal{P}_{j}f\|_{L^{p}(|x|^{1-d})}^{p} \tag{1.1}\] _for any \(N>0\)._ _Remark 1_.: In the above proposition, the main term is obviously the first term on the right hand side of (1.1), and the other terms come from the Schwartz tail. Throughout the paper, we will encounter similar issues several times. Since those terms have sufficiently nice decay, they are always harmless. Nevertheless, we rigorously handle all those situations. The reader who believes that the Schwartz tail is harmless may ignore these parts, precisely, every term containing \(2^{-Nj}\). We briefly explain the novelty of this paper. Consider a suitable bounded function \(F_{j}\) which is supported in an annulus \(\{x:||x|-1|<2^{-j}\}\) and the Fourier transform of \(F_{j}\) is supported on \(\{\xi:|\xi|\sim 2^{j}\}\). This example says that \(\|M_{j}F_{j}\|_{L^{p}(|x|^{1-d})}\gtrsim 2^{-j/p}\approx\|F_{j}\|_{L^{p}(|x|^{1-d})}\) since \(|M_{j}F_{j}(x)|\gtrsim 1\) when \(|x|\lesssim 2^{-j}\). To prove that the spherical maximal function is bounded, it is essential that \(\|M_{j}f\|_{L^{p}}\lesssim 2^{-cj}\|f\|_{L^{p}}\) for a positive number \(c>0\). However, on the weighted space \(L^{p}(|x|^{1-d})\), we cannot expect this exponential decay as we can see at the above. To overcome this difficulty, we decompose \(x\) dyadically. Proposition 1.2 tells us that the main contribution of \(M_{j}\) on \(L^{p}(|x|^{1-d})\) occurs when \(|x|\sim 2^{-j}\). This gives us the orthogonality and allows us to prove our main theorem. The proof of Proposition 1.2 has two parts, \(j>k\) and \(j\leq k\). In the former case, we use the local smoothing estimate of the wave operator. However, since we are considering the critical weight, \(\epsilon\)-loss on the order of smoothing is not allowed. When \(d\geq 4\), we have the local smoothing estimates without the \(\epsilon\)-loss (see [7]) while we do not have such estimates when \(d=2,3\). However, when we consider \(L^{p}\)-\(L^{q}\) estimate, we have the critical local smoothing estimate. 
Thus, we use the \(L^{p}\)-\(L^{q}\) local smoothing estimates and Holder's inequality to recover the \(L^{p}\)-estimate. Since \(j>k\), the singularity of \(|x|^{1-d}\) is harmless. In the latter case \(j\leq k\), the blow-up of \(|x|^{1-d}\) is no more allowable, so we handle this case using the spherical harmonic expansion. The key idea is that we handled the \(t\)-integration of the wave operator \(\mathcal{W}_{t}\) using the Plancherel theorem, as a Fourier transform of \(|\xi|\), the modulus of the frequency variable. ### Notations 1. We denote \(A\lesssim B\) when there is an implicit constant \(C>0\) which is independent of \(A\) and \(B\). In the context, \(C\) may depend on \(p,d,N\). Sometimes, we explicitly denote the implicit constant \(C_{a,b}\) which depends on \(a,b\). 2. We abuse several notations for the \(L^{p}\)-spaces. \(L^{p}(w)\) denotes the \(L^{p}\)-space on a suitable domain with the weight \(w\). The domain will depend on the context, but it will be clear. Also, we denote \(L^{p}_{x,t}(\mathbb{R}^{d}\times I)\) by the \(L^{p}\)-space with variable \((x,t)\in\mathbb{R}^{d}\times I\). Finally, we also denote \(L^{p}_{x}(w;D)\) by the \(L^{p}\)-space with the variable \(x\), the domain \(D\) and the weight \(w\). ### Acknowledgement The author would like to thank Kalachand Shuin for a careful reading of the first version of the draft and finding some typos. ## 2. Preliminaries ### Basic properties We first review some basic properties which are widely used in various literature. For a rectangle \(R\) in \(\mathbb{R}^{d}\), we denote \(R^{*}\) by the dual rectangle of \(\mathbb{R}^{d}\) centered at the origin. Then, we can find a smooth function \(\varphi_{R}\) such that \(|\varphi_{R}(x)|\sim 1\) on \(R\), rapidly decreasing outside of \(R\), and \(\widehat{\varphi_{R}}\) is supported on \(R^{*}\). For example, when \(R\) is a cube centered at the origin with sidelength \(r\), then \(\varphi_{R}\) satisfies \[|\varphi_{R}(x)|\lesssim(1+\frac{|x|}{r})^{-N}.\] Then, we have the local orthogonality on \(R\). **Lemma 2.1**.: _For a rectangle \(R\) and a family of functions \(\{h_{i}\}\), we assume that \(\{\operatorname{supp}\widehat{h_{i}}+R^{*}\}\) is a family of finitely overlapping annulus of the form \(\{\xi:|\xi|\sim 2^{i}\}\). Then we have_ \[\int_{R}\sum_{i}|h_{i}(x)|^{p}dx\lesssim\int|\sum_{i}h_{i}(x)\varphi_{R}(x)|^{ p}dx\] _when \(p\geq 2\)._ This lemma is straightforward from \(|\varphi_{R}(x)|\sim 1\) and the Littlewood-Paley theory. The following lemma tells us that we may consider the averaging operator to control the maximal function. **Lemma 2.2** ([10], see also [9]).: _Let \(1\leq p\leq\infty\) and \(F:\mathbb{R}^{d}\times[1,2]\to\mathbb{R}\) be a smooth function. Then, for any \(\lambda>0\) and \(x\in\mathbb{R}^{d}\), we have_ \[\sup_{t\in[1,2]}|F(x,t)|\lesssim\lambda^{\frac{1}{p}}\|F(x,\cdot)\|_{L^{p}([1, 2])}+\lambda^{-\frac{1}{p^{\prime}}}\|\partial_{t}F(x,\cdot)\|_{L^{p}([1,2])}.\] Next, we introduce the asymptotic expansion of \(\widehat{d\sigma}\) using the asymptotic formula of the Bessel function (see, for example, [17]): \[\widehat{d\sigma}(\xi)=\sum_{\pm,\,0\leq k\leq N}C_{j}^{\pm}|\xi|^{-\frac{d-1 }{2}-k}e^{\pm i|\xi|}+E_{N}(|\xi|),\quad|\xi|\geq 1, \tag{2.1}\] where \(E_{N}\) is a smooth function satisfying \(|(d/dr)^{l}E_{N}(r)|\leq Cr^{-l-\frac{N+1}{4}}\), \(0\leq l\leq(N+1)/4\) for \(r\geq 1\) and a constant \(C>0\). 
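As a quick numerical illustration of the decay in (2.1) (our own sketch, not part of the argument): up to dimensional constants, the radial profile of \(\widehat{d\sigma}\) is \(r^{-(d-2)/2}J_{(d-2)/2}(r)\), so multiplying it by \(r^{(d-1)/2}\) should remain bounded, matching the leading term \(|\xi|^{-\frac{d-1}{2}}e^{\pm i|\xi|}\).

```python
# Sanity check of the |xi|^{-(d-1)/2} decay in (2.1); up to constants, the Fourier
# transform of surface measure on S^{d-1} has radial profile r^{-(d-2)/2} J_{(d-2)/2}(r).
import numpy as np
from scipy.special import jv

d = 3
nu = (d - 2) / 2
r = np.linspace(10.0, 2000.0, 200_000)

profile = r ** (-nu) * jv(nu, r)                 # radial profile, up to constants
rescaled = np.abs(profile) * r ** ((d - 1) / 2)  # should stay O(1) if the decay rate is right

print(rescaled.max())  # bounded (about 0.8 for d = 3), consistent with the main term of (2.1)
```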
By the Fourier inversion, we have \[A_{t}\mathcal{P}_{j}f(x)=\int e^{ix\cdot\xi}\chi_{j}(|\xi|)\widehat{f}(\xi) \widehat{d\sigma}(t\xi)d\xi.\] By fixing sufficiently large \(N>0\), the contribution from \(E_{N}\) is very small. Also, from (2.1), it suffices to handle the contribution from \(|\xi|^{-(d-1)/2}e^{i|\xi|}\) since the others are similarly, and easily handled. ### Local smoothing estimates for the wave operator The local smoothing estimate is one of the common tool in the study of maximal function. The wave operator defined by \[\mathcal{W}_{t}f(x)=\int e^{i(x\cdot\xi+t|\xi|)}\widehat{f}(\xi)d\xi\] is deeply related to the spherical maximal function. As we have seen in Section 2.1, the main contribution of \(A_{t}\mathcal{P}_{j}\) comes from \(\mathcal{W}_{t}[|D|^{-1/2}\mathcal{P}_{j}f]\). We denote \[\mathbb{A}_{\lambda}=\{\eta\in\mathbb{R}^{2}:2^{-1}\lambda\leq|\eta|\leq 2\lambda\}\] for \(\lambda>0\). The sharp \(L^{p}\)-estimates of \(\mathcal{W}\) is obtained by Guth, Wang, and Zhang [6]. By interpolation with the trivial \(L^{1}\)-\(L^{\infty}\) estimate, we get the following sharp \(L^{p}\)-\(L^{q}\) estimates with \(\epsilon\)-loss. **Proposition 2.3** ([6], see also [15], [9]).: _Let \(2\leq p\leq q\), \(1/p+3/q\leq 1\), and \(\lambda\geq 1\). Then, the estimate_ \[\big{\|}\mathcal{W}_{t}g\big{\|}_{L^{q}_{x,t}(\mathbb{R}^{2}\times[1,2])}\leq C _{\epsilon,p,q}\lambda^{(\frac{1}{2}+\frac{1}{p}-\frac{3}{q})+\epsilon}\|g\|_{ L^{p}} \tag{2.2}\] _holds for a constant \(C_{\epsilon,p,q}>0\) for any \(\epsilon>0\) whenever \(\operatorname{supp}\widehat{g}\subset\mathbb{A}_{\lambda}\)._ However, our problem concerns the weight of the critical order, so the \(\epsilon\)-loss is not allowable. when \(d=2\), S. Lee [10] obtained the critical \(L^{p}\)-\(L^{q}\) local smoothing estimate for some \(p<q\) using Wolff's sharp bilinear cone restriction estimate (see [19]). Using the same method, we can obtain the critical \(L^{p}\)-\(L^{q}\) local smoothing estimate for some \(p<q\) when \(d\geq 3\) (without \(\epsilon\)-loss). However, for our purpose, when \(d\geq 3\), estimates from the Strichartz estimate which is also mentioned in [10] is enough (see (1.10) in [10]). **Proposition 2.4** ([10]).: _Let \(2\leq p\leq q\), \(\frac{1}{p}+\frac{d+1}{(d-1)q}\leq 1\), and \(\lambda\geq 1\). Also, we assume \(q>14/3\) when \(d=2\) and \(q\geq\frac{2(d+1)}{d-1}\) when \(d\geq 3\). Then, for a constant \(C_{p,q,d}>0\), we have the following estimates for any \(g\) such that \(\operatorname{supp}\widehat{g}\subset\mathbb{A}_{\lambda}\):_ \[\|\mathcal{W}_{t}g\|_{L^{q}_{x,t}(\mathbb{R}^{d}\times[1,2])}\leq C_{p,q,d} \lambda^{\frac{d-1}{2}+\frac{1}{p}-\frac{d+1}{q}}\|g\|_{L^{p}}. \tag{2.3}\] When \(d\geq 4\), Heo, Nazarov, Seeger [7] obtained the critical local smoothing estimates of \(\mathcal{W}\) for \(p>2(d-1)/(d-3)\). **Proposition 2.5**.: _Let \(d\geq 4\) and \(p>2(d-1)/(d-3)\). Then, we have the following estimates_ \[\|\mathcal{W}_{t}g\|_{L^{p}_{x,t}(\mathbb{R}^{d}\times[1,2])}\leq C_{p,d} \lambda^{\frac{d-1}{2}-\frac{d}{p}}\|g\|_{L^{p}}\] _for some constant \(C_{p,d}>0\) whenever \(\operatorname{supp}\widehat{g}\subset\mathbb{A}_{\lambda}\)._ We finally remark that allowing the \(\epsilon\)-loss, the sharp local smoothing estimates beyond the bilinear method is obtained by Beltran and Saari [1]. ### Spherical harmonics In this section we review some basic properties of spherical harmonics. We refer Chapter 4 of [18] for details. 
Let \(\mathcal{H}_{k}\) be the space of spherical harmonics of degree \(k\). This means that any element of \(\mathcal{H}_{k}\) is a restriction of a degree \(k\) harmonic polynomial on \(\mathbb{S}^{d-1}\). Then, it is well known that the collection of all finite linear combinations of elements of \(\bigcup_{k=0}^{\infty}\mathcal{H}_{k}\) is dense in \(L^{2}(\mathbb{S}^{d-1})\). We fix an orthonormal basis of \(\mathcal{H}_{k}\) by \[\mathcal{B}_{k}=\{Y_{i}^{k}:1\leq i\leq d_{k}\}\] where \(d_{k}\) is the dimension of \(\mathcal{H}_{k}\). Then, \(\bigcup_{k=0}^{\infty}\mathcal{B}_{k}\) forms an orthonormal basis of \(L^{2}(\mathbb{S}^{d-1})\). Also, for any function \(f\in L^{2}(\mathbb{R}^{d})\), we may find the unique expansion \[f(x)=\sum_{k=0}^{\infty}\sum_{i=1}^{d_{k}}a_{i}^{k}(r)Y_{i}^{k}(\theta) \tag{2.4}\] where \(x=r\theta\) for \(r>0\), \(\theta\in\mathbb{S}^{d-1}\). The following is an interesting property of spherical harmonics which is crucial in our argument. **Lemma 2.6** (Corollary 2.9, Lemma 2.18, [18]).: _Suppose \(f(x)=f_{0}(r)Y(\theta)\) where \(x=r\theta\), \(r>0\), \(\theta\in\mathbb{S}^{d-1}\), and \(Y\in\mathcal{H}_{k}\) for \(k\geq 0\). Then, we have_ \[\widehat{f}(\xi)=\left\{\int_{0}^{\infty}f_{0}(r)\phi_{k}(r\rho)r^{d-1}dr \right\}Y(\omega)\] _for a function \(\phi_{k}\) where \(\xi=\rho\omega\) for \(\rho>0\) and \(\omega\in\mathbb{S}^{d-1}\). Also, for a constant \(C(d)>0\), we have \(|\phi_{k}(s)|\leq C(d)\) for any \(k\geq 0\), \(s\in\mathbb{R}\)._ Indeed, Lemma 2.6 is not stated as above in [18], but it is contained in the proof. Also, we note that Lemma 2.6 implies \[\int_{\mathbb{S}^{d-1}}Y(\theta)e^{-ir\rho\theta\cdot\omega}d\theta=\phi_{k}(r \rho)Y(\omega) \tag{2.5}\] for any \(Y\in\mathcal{H}_{k}\). We may find the expansion for \(\widehat{f}\) as follows: \[\widehat{f}(\xi)=\sum_{k=0}^{\infty}\sum_{i=1}^{d_{k}}b_{i}^{k}(\rho)Y_{i}^{k }(\omega) \tag{2.6}\] where \(\xi=\rho\omega\) for \(\rho>0\), \(\omega\in\mathbb{S}^{d-1}\). Then, by the orthogonality, we see that \[\|f\|_{L^{2}}=(\sum_{k=0}^{\infty}\sum_{i=1}^{d_{k}}\int_{0}^{\infty}|b_{i}^{ k}(\rho)|^{2}\rho^{d-1}d\rho)^{\frac{1}{2}}\] holds. ## 3. Proof of the main theorem In this section, we prove Theorem 1.1 assuming Proposition 1.2. For \(n\in\mathbb{Z}\), we define \(\mathcal{P}_{<n}\) by \((\mathcal{P}_{<n}f)^{\wedge}(\xi)=\sum_{j<n}\chi_{j}(|\xi|)\widehat{f}(\xi)\). Also, we denote \(\chi_{<n}(s)=\sum_{j<n}\chi_{j}(s)\). Before the proof of the main theorem, we introduce a technical lemma handling Schwartz tails. **Lemma 3.1**.: _Suppose \(p>d/(d-1)\). For \(k,j\geq 0\) and any \(N>0\),_ \[\int_{|x|\sim 2^{k}}|M_{j}f(x)|^{p}|x|^{1-d}dx\] \[\lesssim 2^{-\beta j}\int_{|x|\sim 2^{k}}|\mathcal{P}_{j}f(x)|^{p} |x|^{1-d}dx+\sum_{m\in\mathbb{Z}}2^{-N(j+k)-(d-1)|m|}\int_{|x|\sim 2^{m}}| \mathcal{P}_{j}f(x)|^{p}|x|^{1-d}dx \tag{3.1}\] _holds for some \(\beta=\beta(d,p)>0\)._ Heuristically, since \(M_{c}\) is a local maximal operator, the contribution of \(M_{j}f\) on the annulus \(\mathbb{A}_{2^{k}}\) comes from the values of \(\mathcal{P}_{j}f\) on an \(O(1)\)-neighborhood of the same annulus. However, the restriction of \(\mathcal{P}_{j}f\) on the annulus is no more localized on the Fourier side. Therefore, we are obliged to handle the Schwartz tail which is the second term on the right hand side of (3.1). 
Proof of Lemma 3.1.: As above, we obtain that \[\int_{|x|\sim 2^{k}}|M_{c}[(1-\tilde{\chi}_{k})\mathcal{P}_{j}f](x)|^{p}|x|^{1-d}dx=0.\] Also, from [2] and [16] (see also [14]), we have \[\int|M_{j}g(x)|^{p}dx\lesssim 2^{-\alpha j}\int|\mathcal{P}_{j}g(x)|^{p}dx\lesssim 2^{-\alpha j}\int|g(x)|^{p}dx \tag{3.2}\] for some \(\alpha>0\) depending on \(d,p\) when \(p>d/(d-1)\). We denote \(\mathcal{P}_{<j-10}=\mathcal{Q}_{j}^{1}\), and \(\sum_{j-10\leq j^{\prime}\leq j+10}\mathcal{P}_{j^{\prime}}=\mathcal{Q}_{j}^{2}\). Then, by the triangle inequality, we have \[\int_{|x|\sim 2^{k}}|M_{j}f(x)|^{p}|x|^{1-d}dx\] \[\lesssim 2^{k(1-d)}(\sum_{i=1,2}\|M_{c}[\mathcal{Q}_{j}^{i}\tilde{\chi}_{k}\cdot\mathcal{P}_{j}f]\|_{L^{p}}+\sum_{j^{\prime}>j+10}\|M_{c}[\mathcal{P}_{j^{\prime}}\tilde{\chi}_{k}\cdot\mathcal{P}_{j}f]\|_{L^{p}})^{p}. \tag{3.3}\] Note that \((\mathcal{Q}_{j}^{1}\tilde{\chi}_{k}\cdot\mathcal{P}_{j}f)^{\wedge}(\xi)\neq 0\) only if \(|\xi|\sim 2^{j}\) while for \(j^{\prime}>j+10\), \((\mathcal{P}_{j^{\prime}}\tilde{\chi}_{k}\cdot\mathcal{P}_{j}f)^{\wedge}(\xi)\neq 0\) only if \(|\xi|\sim 2^{j^{\prime}}\). Thus, by (3.2) and the boundedness of \(M_{c}\), we obtain that the left hand side of (3.3) is bounded by the sum of the following three quantities. \[2^{k(1-d)-\alpha^{\prime}j}\int|\mathcal{Q}_{j}^{1}\tilde{\chi}_{k}\cdot\mathcal{P}_{j}f(x)|^{p}dx, \tag{3.4}\] \[2^{k(1-d)}\int|\mathcal{Q}_{j}^{2}\tilde{\chi}_{k}\cdot\mathcal{P}_{j}f(x)|^{p}dx, \tag{3.5}\] \[2^{k(1-d)-\alpha^{\prime}j^{\prime}}\sum_{j^{\prime}>j+10}\int|\mathcal{P}_{j^{\prime}}\tilde{\chi}_{k}\cdot\mathcal{P}_{j}f(x)|^{p}dx. \tag{3.6}\] Here, \(\alpha^{\prime}>0\) is a suitable number which is smaller than \(\alpha\). From the spirit of the uncertainty principle, \(\widehat{\tilde{\chi}}_{k}\) is essentially supported in a ball of radius \(2^{-k}\) centered at the origin. Thus, we may expect that (3.4) will be the main term and the others will be error terms. We first see (3.4). By definition, we have \[\mathcal{Q}^{1}_{j}\tilde{\chi}_{k}(x) =\int\widehat{\tilde{\chi}_{k}}(\xi)\chi_{<j-10}(\xi)e^{ix\cdot\xi}d\xi\] \[=\int\tilde{\chi}_{k}(y)\big{[}\int e^{i(x-y)\cdot\xi}\chi_{<j-10}(\xi)d\xi\big{]}dy.\] This implies that when \(|x|\sim 2^{m}\) for \(m\in\mathbb{Z}\), \[|\mathcal{Q}^{1}_{j}\tilde{\chi}_{k}(x)|\lesssim\begin{cases}2^{-N(j+\max\{m,k\})},&|m-k|\geq 10,\\ 1,&|m-k|<10\end{cases} \tag{3.7}\] holds for any \(N>0\). Thus, (3.4) is bounded by the right hand side of (3.1) as desired. Next, we see (3.5). As before, we obtain that when \(|x|\sim 2^{m}\) for \(m\in\mathbb{Z}\), \[|\mathcal{Q}^{2}_{j}\tilde{\chi}_{k}(x)|\lesssim\begin{cases}2^{-N(j+\max\{m,k\})},&|m-k|\geq 10,\\ 2^{-N(j+k)},&|m-k|<10\end{cases} \tag{3.8}\] holds for any \(N>0\). Again, this implies (3.5) is bounded by the right hand side of (3.1), as desired. Finally, we see (3.6). The proof is almost the same as before. By the same argument, we obtain that when \(|x|\sim 2^{m}\) for \(m\in\mathbb{Z}\), \[|\mathcal{P}_{j^{\prime}}\tilde{\chi}_{k}(x)|\lesssim\begin{cases}2^{-N(j^{\prime}+\max\{m,k\})},&|m-k|\geq 10,\\ 2^{-N(j^{\prime}+k)},&|m-k|<10\end{cases} \tag{3.9}\] holds for any \(N>0\). Since \(j^{\prime}>j+10\), by taking summation in \(j^{\prime}\), we obtain that (3.6) is bounded by the right hand side of (3.1), which completes the proof. 
Proof of Theorem 1.1.: By a scaling argument, it suffices to consider \[\tilde{M}f(x)=\sup_{0<t<1}|A_{t}f(x)|.\] For each \(n\geq 0\), we can easily check that \[\sup_{t\sim 2^{-n}}|A_{t}\mathcal{P}_{<n}f(x)|\lesssim M_{HL}f(x)\] holds independently of \(n\), where \(M_{HL}\) is the Hardy-Littlewood maximal operator on \(\mathbb{R}^{d}\). Therefore, we have \[\int|\tilde{M}f(x)|^{p}|x|^{1-d}dx\] \[\lesssim\int\big{(}\sup_{n\geq 0}\sup_{t\sim 2^{-n}}|A_{t}\mathcal{P}_{<n}f(x)|^{p}+\sup_{n\geq 0}\sup_{t\sim 2^{-n}}|\sum_{j\geq n}A_{t}\mathcal{P}_{j}f(x)|^{p}\big{)}|x|^{1-d}dx\] \[\lesssim\int|M_{HL}f(x)|^{p}|x|^{1-d}dx+\sum_{k\in\mathbb{Z},n\geq 0}\int_{|x|\sim 2^{-k}}\sup_{t\sim 2^{-n}}|\sum_{j\geq n}A_{t}\mathcal{P}_{j}f(x)|^{p}|x|^{1-d}dx. \tag{3.10}\] It is well known that \[\int|M_{HL}f(x)|^{p}|x|^{1-d}dx\lesssim\int|f(x)|^{p}|x|^{1-d}dx\] since \(|x|^{1-d}\) is an \(A_{p}\)-weight. Thus, we only consider the second part of the right hand side of (3.10). For simplicity, we denote \(f_{n}(x)=f(x/2^{n})\) for a function \(f\). Notice that \([\mathcal{P}_{j}f]_{n}=\mathcal{P}_{j-n}f_{n}\). Then, we have \[\int_{|x|\sim 2^{-k}}\sup_{t\sim 2^{-n}}|\sum_{j\geq n}A_{t}\mathcal{P}_{j}f(x)|^{p}|x|^{1-d}dx=2^{-n}\int_{|x|\sim 2^{-k+n}}|\sum_{j\geq n}M_{j-n}f_{n}(x)|^{p}|x|^{1-d}dx.\] Now there are two cases, \(k<n\) and \(k\geq n\). We first handle the case \(k<n\). In this case, we have \[\sum_{n\geq 0}\sum_{k<n}2^{-n}\int_{|x|\sim 2^{-k+n}}|\sum_{j\geq n}M_{j-n}f_{n}(x)|^{p}|x|^{1-d}dx\] \[=\sum_{n\geq 0}2^{-n}\sum_{k>0}\int_{|x|\sim 2^{k}}|\sum_{j\geq 0}M_{j}f_{n}(x)|^{p}|x|^{1-d}dx\] \[\leq\sum_{n\geq 0}2^{-n}\sum_{k>0}\big{(}\sum_{j\geq 0}(\int_{|x|\sim 2^{k}}|M_{j}f_{n}(x)|^{p}|x|^{1-d}dx)^{\frac{1}{p}}\big{)}^{p}\] by the triangle inequality. By Lemma 3.1, \([\mathcal{P}_{j}f]_{n}=\mathcal{P}_{j-n}f_{n}\), and Hölder's inequality, we have \[\lesssim\sum_{n,k,j\geq 0}2^{-\beta^{\prime}j}\int_{|x|\sim 2^{k-n}}|\mathcal{P}_{j+n}f(x)|^{p}|x|^{1-d}dx\] \[\quad+\sum_{n,k,j\geq 0,\,m\in\mathbb{Z}}2^{-N(j+k)-(d-1)|m|}\int_{|x|\sim 2^{m-n}}|\mathcal{P}_{j+n}f(x)|^{p}|x|^{1-d}dx\] \[=: I_{1}+I_{2} \tag{3.11}\] where \(\beta^{\prime}>0\) is a suitable number smaller than \(\beta\) in Lemma 3.1. By changing summations, we get \[I_{1}=\sum_{j\geq 0,k\in\mathbb{Z}}2^{-\beta^{\prime}j}\int_{|x|\sim 2^{k}}\sum_{n\geq\max\{0,-k\}}|\mathcal{P}_{j+n}f(x)|^{p}|x|^{1-d}dx.\] By covering \(\{x:|x|\sim 2^{k}\}\) by cubes with sidelength \(\sim 2^{k}\) which are finitely overlapping and contained in a bigger annulus \(\{x:|x|\sim 2^{k}\}\), we obtain \[\int_{|x|\sim 2^{k}}\sum_{n\geq\max\{0,-k\}}|\mathcal{P}_{j+n}f(x)|^{p}dx\lesssim\int|\mathcal{P}_{>\max\{0,-k\}}f(x)|^{p}(1+2^{-k}|x|)^{-N}dx\] by Lemma 2.1, for a sufficiently large \(N>0\). We claim that \[\int|\mathcal{P}_{>-k}f(x)|^{p}(1+2^{-k}|x|)^{-N}dx\lesssim\int|f(x)|^{p}(1+2^{-k}|x|)^{-N}dx \tag{3.12}\] for any \(k\in\mathbb{Z}\). Indeed, since \(\mathcal{P}_{<-k}f=f*\psi_{k}\) where \(|\psi_{k}(x)|\lesssim 2^{-dk}(1+2^{-k}|x|)^{-N}\) for any \(N>0\), (3.12) comes from the following simple inequality: \[2^{-dk}\int(1+2^{-k}|x-y|)^{-N}(1+2^{-k}|x|)^{-N}dx\lesssim(1+2^{-k}|y|)^{-N}.\] It suffices to check this for \(k=0\) by scaling, and it is straightforward. 
Therefore, \[I_{1} \lesssim\sum_{j\geq 0}2^{-\beta^{\prime}j}\int(|\mathcal{P}_{>0}f(x)|^{p}+|f(x)|^{p})\sum_{k\in\mathbb{Z}}2^{k(1-d)}(1+2^{-k}|x|)^{-N}dx\] \[\lesssim\int(|\mathcal{P}_{>0}f(x)|^{p}+|f(x)|^{p})|x|^{1-d}dx\] \[\lesssim\int|f(x)|^{p}|x|^{1-d}dx \tag{3.13}\] as desired, since \(|x|^{1-d}\) is an \(A_{p}\)-weight. For later use, we record the following inequality: \[\sum_{n\in\mathbb{Z}}2^{n(d-1)}(1+2^{n}|x|)^{-N}\lesssim|x|^{1-d}. \tag{3.14}\] Similarly, we now control \(I_{2}\). Since \[\sum_{n,k,j,m\geq 0}2^{-N(j+k)-(d-1)|m|}\int_{|x|\sim 2^{m-n}}|\mathcal{P}_{j+n}f(x)|^{p}|x|^{1-d}dx\lesssim I_{1},\] we have that \[I_{2}\lesssim I_{1}+\sum_{n,j\geq 0}2^{-Nj+n(d-1)}\int_{|x|\lesssim 2^{-n}}|\mathcal{P}_{j+n}f(x)|^{p}dx\] holds. As before, we use the local orthogonality, Lemma 2.1 with parameter \(j\). Then, we obtain \[\sum_{j\geq 0}\int_{|x|\lesssim 2^{-n}}|\mathcal{P}_{j+n}f(x)|^{p}dx\lesssim\int|\mathcal{P}_{>n}f(x)|^{p}(1+2^{n}|x|)^{-N}dx\] for any \(N>0\). By (3.14), we finally obtain \[I_{2}\lesssim\int|f(x)|^{p}|x|^{1-d}dx\] as desired, by (3.12). Now we consider the other case, \(k\geq n\). In this case, we have \[\sum_{n\geq 0}\sum_{k\geq n}2^{-n}\int_{|x|\sim 2^{-k+n}}|\sum_{j\geq n}M_{j-n}f_{n}(x)|^{p}|x|^{1-d}dx\] \[=\sum_{n\geq 0}2^{-n}\sum_{k\geq 0}\int_{|x|\sim 2^{-k}}|\sum_{j\geq 0}M_{j}f_{n}(x)|^{p}|x|^{1-d}dx. \tag{3.15}\] By Hölder's inequality, \[|\sum_{j\geq 0}M_{j}f_{n}(x)|^{p} \leq(\sum_{j\geq 0}|M_{j}f_{n}(x)|^{p}2^{\delta_{0}|j-k|})(\sum_{j\geq 0}2^{-\delta_{0}|j-k|/(p-1)})^{p-1}\] \[\lesssim\sum_{j\geq 0}|M_{j}f_{n}(x)|^{p}2^{\delta_{0}|j-k|} \tag{3.16}\] holds for \(\delta_{0}>0\). Putting (3.16) into (3.15), we obtain that (3.15) is bounded by \[\sum_{n\geq 0}2^{-n}\sum_{k,j\geq 0}2^{\delta_{0}|j-k|}\int_{|x|\sim 2^{-k}}|M_{j}f_{n}(x)|^{p}|x|^{1-d}dx.\] By Proposition 1.2, we obtain \[\int_{|x|\sim 2^{-k}}|M_{j}f_{n}(x)|^{p}|x|^{1-d}dx\] \[\lesssim 2^{-\delta|j-k|}\int_{|x|\sim 1}|\mathcal{P}_{j}f_{n}(x)|^{p}|x|^{1-d}dx+\sum_{m\in\mathbb{Z}}2^{-Nj-(d-1)|m|-\delta k}\int_{|x|\sim 2^{m}}|\mathcal{P}_{j}f_{n}(x)|^{p}|x|^{1-d}dx\] \[=:I_{3}+I_{4} \tag{3.17}\] when \(p=2\) for \(d\geq 3\), and when \(p>2\) for \(d=2\). We first see the contribution from \(I_{3}\). We choose \(\delta_{0}=\delta/2\) so that we get exponential decay. From the relation \([\mathcal{P}_{j+n}f]_{n}=\mathcal{P}_{j}f_{n}\), we have \[\sum_{n\geq 0}2^{-n}\sum_{k,j\geq 0}2^{\delta_{0}|j-k|}I_{3} \lesssim\sum_{n,k,j\geq 0}2^{-\delta|j-k|/2}\int_{|x|\sim 2^{-n}}|\mathcal{P}_{j+n}f(x)|^{p}|x|^{1-d}dx\] \[\lesssim\sum_{n,j\geq 0}\int_{|x|\sim 2^{-n}}|\mathcal{P}_{j+n}f(x)|^{p}|x|^{1-d}dx.\] For each fixed \(n\geq 0\), we use the local orthogonality. As above, by covering \(\{x:|x|\sim 2^{-n}\}\) by cubes with sidelength \(\sim 2^{-n}\) which are finitely overlapping and contained in a bigger annulus \(\{x:|x|\sim 2^{-n}\}\), we have \[\sum_{j\geq 0}\int_{|x|\sim 2^{-n}}|\mathcal{P}_{j+n}f(x)|^{p}|x|^{1-d}dx \lesssim 2^{n(d-1)}\int|\mathcal{P}_{>n}f(x)|^{p}(1+2^{n}|x|)^{-N}dx\] \[\lesssim 2^{n(d-1)}\int|f(x)|^{p}(1+2^{n}|x|)^{-N}dx.\] By (3.14), the contribution from \(I_{3}\) is as desired. Now we see the contribution from \(I_{4}\). 
Choosing a sufficiently large \(N\), taking \(\delta_{0}=\delta/2\), and using the relation \([\mathcal{P}_{j+n}f]_{n}=\mathcal{P}_{j}f_{n}\), we get \[\sum_{n\geq 0}2^{-n}\sum_{k,j\geq 0}2^{\delta_{0}|j-k|}I_{4}\] \[\lesssim\sum_{n,k,j\geq 0,\,m\in\mathbb{Z}}2^{-Nj-(d-1)|m|-\delta k/2}\int_{|x|\sim 2^{m-n}}|\mathcal{P}_{j+n}f(x)|^{p}|x|^{1-d}dx.\] Splitting the summation in \(m\in\mathbb{Z}\) by \(m\geq 0\) and \(m<0\), this is bounded by the sum of the following two quantities: \[I_{5}=\sum_{n,j\geq 0}2^{-Nj+n(d-1)}\int_{|x|\lesssim 2^{-n}}|\mathcal{P}_{j+n}f(x)|^{p}dx,\] \[I_{6}=\sum_{n,j,m\geq 0}2^{-Nj-m(d-1)}\int_{|x|\sim 2^{m-n}}|\mathcal{P}_{j+n}f(x)|^{p}|x|^{1-d}dx.\] Indeed, these are already bounded by \(I_{2}\) and thus, we obtain \[\sum_{n\geq 0}2^{-n}\sum_{k,j\geq 0}2^{\delta_{0}|j-k|}I_{4}\lesssim\int|f(x)|^{p}|x|^{1-d}dx\] and this concludes the proof. ## 4. Proof of Proposition 1.2 The proof of Proposition 1.2 is different depending on whether \(j\leq k\) or \(j>k\). In the former case, we use Lemma 2.6, the property of spherical harmonics. In the latter case, we use the argument in [5]. Proof of Proposition 1.2 when \(j\leq k\).: We first claim the following inequality for some \(\delta(d,p)>0\) when \(d/(d-1)<p<\infty\): \[\|\chi_{-k}(|\cdot|)M_{j}f\|_{L^{p}(|x|^{1-d})}\lesssim 2^{-\delta(k-j)}\|\mathcal{P}_{j}f\|_{L^{p}}. \tag{4.1}\] Note that the estimate does not depend on whether \(k\) is bigger than \(j\) or not, and we have the \(L^{p}\)-norm without the weight on the right hand side. We prove (4.1) with \(\delta=0\) when \(p=d/(d-1)\), and \(\delta(d,2)>0\) when \(p=2\). Also, we have a trivial estimate with \(\delta=0\) when \(p=\infty\). Then, (4.1) follows by interpolation. When \(p=d/(d-1)\), the estimate is already proved in [5]. Indeed, inequality (3) in the proof of Theorem 10 in [5] has \(2^{j\epsilon}\)-loss. However, the authors provided sharp estimates without \(\epsilon\)-loss after restricting \(x\) to a small annulus of width \(2^{-k}\). Thus, we only focus on \(p=2\). Let \[\widehat{f}(\xi)=\sum_{k=0}^{\infty}\sum_{i=1}^{d_{k}}b_{i}^{k}(\rho)Y_{i}^{k}(\omega) \tag{4.2}\] for \(\xi=\rho\omega\). By (2.1) and Lemma 2.2, it suffices to prove the following: \[\|\chi_{-k}(|x|)\int e^{i(x\cdot\xi+t|\xi|)}\chi_{j}(|\xi|)\widehat{f}(\xi)d\xi\|_{L^{2}_{x,t}(|x|^{1-d};\mathbb{R}^{d}\times[1,2])}\lesssim 2^{\frac{d-2}{2}j-\delta(k-j)}\|\mathcal{P}_{j}f\|_{L^{2}}. \tag{4.3}\] Putting (4.2) into the square of the left hand side of (4.3), we get \[\int\Big{|}\chi_{-k}(r)\int e^{i(r\rho\theta\cdot\omega+t\rho)}\chi_{j}(\rho)\sum_{k,i}b_{i}^{k}(\rho)Y_{i}^{k}(\omega)\rho^{d-1}d\rho d\omega\Big{|}^{2}drd\theta dt\] \[= \sum_{k,i}\int\Big{|}\chi_{-k}(r)\int e^{it\rho}\chi_{j}(\rho)b_{i}^{k}(\rho)\phi_{k}(-r\rho)\rho^{d-1}d\rho\Big{|}^{2}drdt\] by (2.5) and the orthogonality. For each \(k,i\), Plancherel's theorem implies \[\int\Big{|}\chi_{-k}(r)\int e^{it\rho}\chi_{j}(\rho)b_{i}^{k}(\rho)\phi_{k}(-r\rho)\rho^{d-1}d\rho\Big{|}^{2}drdt\] \[\leq \int\Big{|}\chi_{-k}(r)\chi_{j}(\rho)b_{i}^{k}(\rho)\phi_{k}(-r\rho)\rho^{d-1}\Big{|}^{2}drd\rho\] \[\approx \int|\chi_{j}(\rho)b_{i}^{k}(\rho)|^{2}\rho^{d-1}\Big{\{}\int|\phi_{k}(-r)\chi(2^{k-j}r)|^{2}\rho^{d-2}dr\Big{\}}d\rho.\] By Lemma 2.6, we obtain \[\int|\phi_{k}(-r)\chi(2^{k-j}r)|^{2}\rho^{d-2}dr\lesssim 2^{(d-2)j+j-k}\] when \(\rho\sim 2^{j}\), and the implicit constants are independent of \(k,i\). Thus, we can deduce (4.3) when \(p=2\), which completes the proof of (4.1). 
Now we need to use localization arguments as in the proof of Lemma 3.1 to reach the goal. Since \(M_{c}\) is a local maximal operator, we obtain \[\int_{|x|\sim 2^{-k}}|M_{c}[(1-\tilde{\chi})\mathcal{P}_{j}f](x)|^{p}|x|^{1-d}dx=0. \tag{4.4}\] As before, by the triangle inequality, \[\int_{|x|\sim 2^{-k}}|M_{j}f(x)|^{p}|x|^{1-d}dx\] \[\lesssim(\sum_{i=1,2}\|\chi_{-k}M_{c}[\mathcal{Q}_{j}^{i}\tilde{ \chi}\cdot\mathcal{P}_{j}f]\|_{L^{p}(|x|^{1-d})}+\sum_{j^{\prime}>j+10}\|\chi_{- k}M_{c}[\mathcal{P}_{j^{\prime}}\tilde{\chi}\cdot\mathcal{P}_{j}f]\|_{L^{p}(|x|^{1-d} )})^{p}. \tag{4.5}\] Note that \([\mathcal{Q}_{j}^{i}\tilde{\chi}\cdot\mathcal{P}_{j}f]^{\wedge}\) is supported on \(\{\xi:|\xi|\sim 2^{j}\}\), and \([\mathcal{P}_{j^{\prime}}\tilde{\chi}\cdot\mathcal{P}_{j}f]^{\wedge}\) is supported on \(\{\xi:|\xi|\sim 2^{j^{\prime}}\}\) when \(j^{\prime}>j+10\). Thus, we can apply (4.1). As we have seen already, the main term will come from \(i=1\), and the other terms will be error terms. In the proof of Lemma 3.1, we obtained estimates for \(\mathcal{Q}_{j}^{i}\tilde{\chi}_{k}\) and \(\mathcal{P}_{j^{\prime}}\tilde{\chi}_{k}\), (3.7) and (3.9). We use these estimates for \(k=0\). The calculation is almost the same with that of the proof of Lemma 3.1. Thus, the proposition is concluded if we can handle the term with \(i=2\). When \(i=2\), we have \[\|\chi_{-k}M_{c}[\mathcal{Q}_{j}^{2}\tilde{\chi}\cdot\mathcal{P} _{j}f]\|_{L^{p}(|x|^{1-d})}\] \[\leq\|\chi_{-k}M_{c}\mathcal{P}_{<0}[\mathcal{Q}_{j}^{2}\tilde{ \chi}\cdot\mathcal{P}_{j}f]\|_{L^{p}(|x|^{1-d})}+\sum_{0\leq m\leq j}\|\chi_{- k}M_{c}\mathcal{P}_{m}[\mathcal{Q}_{j}^{2}\tilde{\chi}\cdot\mathcal{P}_{j}f]\|_{L^{p} (|x|^{1-d})}.\] We momentarily denote \(g(x)=\mathcal{Q}_{j}^{2}\tilde{\chi}\cdot\mathcal{P}_{j}f(x)\). The first term on the right hand side is bounded by \[2^{\frac{k(d-1)}{p}}\Big{(}\int_{|x|\sim 2^{-k}}\Big{|}\int\frac{|g(x-z)|}{(1+|z| )^{N}}dz\Big{|}^{p}dx\Big{)}^{\frac{1}{p}}\lesssim 2^{-\frac{k}{p}}\Big{(}\int \frac{|g(x)|^{p}}{(1+|x|)^{N/2}}dx\Big{)}^{\frac{1}{p}}\] for any \(N>0\). Also, the second term is bounded by \[\sum_{0\leq m<j}2^{-\delta(k-m)}\|g\|_{L^{p}}\lesssim 2^{-\delta(k-j)}\|g\|_{L ^{p}}.\] Therefore, by the estimate (3.8), the proof is concluded. Proof of Proposition 1.2 when \(j>k\).: Indeed, it is essentially proved in [5]. But we provide an alternative proof. We first claim the following inequality for some \(\delta(d,p)>0\) when \(d/(d-1)<p<\infty\): \[\|\chi_{-k}(|\cdot|)M_{j}f\|_{L^{p}(|x|^{1-d})}\lesssim 2^{-\delta(j-k)}\| \mathcal{P}_{j}f\|_{L^{p}}. \tag{4.6}\] Again, (4.6) with \(\delta=0\) when \(p=d/(d-1),\infty\) is already known. Thus, we choose a suitable \(d/(d-1)<p<\infty\) and prove (4.6) with \(\delta(d,p)>0\). By (2.1) and Lemma 2.2, we may replace the left hand side of (4.6) by \[2^{j(-\frac{(d-1)}{2}+\frac{1}{p})+\frac{(d-1)k}{p}}\|\chi_{-k}(|x|)\mathcal{WP }_{j}f\|_{L^{p}_{x,t}(\mathbb{R}^{d}\times[1,2])}. \tag{4.7}\] We use the critical local smoothing estimate Proposition 2.5 when \(d\geq 4\). Precisely, we have \[\|\mathcal{WP}_{j}f\|_{L^{p}_{x,t}(\mathbb{R}^{d}\times[1,2])}\lesssim 2^{j( \frac{d-1}{2}-\frac{d}{p})}\|\mathcal{P}_{j}f\|_{L^{p}}\] when \(p>2(d-1)/(d-3)\). This implies (4.7) is bounded by \(2^{-(d-1)(j-k)/p}\|\mathcal{P}_{j}f\|_{L^{p}}\). Therefore, (4.6) for the case \(d\geq 4\) is proved. When \(d=2,3\), it is not known that the critical local smoothing estimate holds for \(p\neq 2,\infty\). 
Instead, we use the critical \(L^{p}\)-\(L^{q}\) local smoothing estimate, Proposition 2.4. Indeed, we have \[\|\mathcal{W}\mathcal{P}_{j}f\|_{L^{q}_{x,t}(\mathbb{R}^{d}\times[1,2])}\lesssim 2^{j(\frac{d-1}{2}+\frac{1}{p}-\frac{d+1}{q})}\|\mathcal{P}_{j}f\|_{L^{p}}\] when \(\frac{1}{p}+\frac{d+1}{(d-1)q}\leq 1\), \(q>14/3\) for \(d=2\) and \(q>4\) for \(d=3\). This implies \[\|\chi_{-k}(|\cdot|)M_{j}f\|_{L^{q}}\lesssim 2^{j(\frac{1}{p}-\frac{d}{q})}\|\mathcal{P}_{j}f\|_{L^{p}}.\] By Hölder's inequality, we have \[\|\chi_{-k}(|\cdot|)M_{j}f\|_{L^{p}(|x|^{1-d})} \lesssim\|\chi_{-k}(|\cdot|)M_{j}f\|_{L^{q}_{x,t}(|x|^{-d+\frac{q}{p}})}\] \[\lesssim 2^{-(j-k)(\frac{d}{q}-\frac{1}{p})}\|\mathcal{P}_{j}f\|_{L^{p}}.\] Therefore, we get (4.6) when \(d/q>1/p\). By Proposition 2.4, we can choose such \(p,q\). Now we use the localization argument to prove (1.1). We recall (4.4) and (4.5). As before, we apply (4.6) to \(\mathcal{Q}^{1}_{j}\tilde{\chi}\cdot\mathcal{P}_{j}f\) and \(\mathcal{P}_{j^{\prime}}\tilde{\chi}\cdot\mathcal{P}_{j}f\) when \(j^{\prime}>j+10\). Using (3.7) and (3.9), the proof is completely the same. When \(i=2\), the proof is simpler than in the case \(j\leq k\). We just use the boundedness of \(M_{c}\) and obtain \[\|\chi_{-k}M_{c}[\mathcal{Q}^{2}_{j}\tilde{\chi}\cdot\mathcal{P}_{j}f]\|_{L^{p}(|x|^{1-d})}\lesssim 2^{\frac{k(d-1)}{p}}\|\mathcal{Q}^{2}_{j}\tilde{\chi}\cdot\mathcal{P}_{j}f\|_{L^{p}(1;B_{100})}\] where \(B_{100}\) is the ball centered at the origin with radius \(100\). The proof is then concluded by (3.8), choosing a sufficiently large \(N>0\), since \(j>k\).
2309.03561
Trinary Decision Trees for handling missing data
This paper introduces the Trinary decision tree, an algorithm designed to improve the handling of missing data in decision tree regressors and classifiers. Unlike other approaches, the Trinary decision tree does not assume that missing values contain any information about the response. Both theoretical calculations on estimator bias and numerical illustrations using real data sets are presented to compare its performance with established algorithms in different missing data scenarios (Missing Completely at Random (MCAR), and Informative Missingness (IM)). Notably, the Trinary tree outperforms its peers in MCAR settings, especially when data is only missing out-of-sample, while lagging behind in IM settings. A hybrid model, the TrinaryMIA tree, which combines the Trinary tree and the Missing In Attributes (MIA) approach, shows robust performance in all types of missingness. Despite the potential drawback of slower training speed, the Trinary tree offers a promising and more accurate method of handling missing data in decision tree algorithms.
Henning Zakrisson
2023-09-07T08:44:25Z
http://arxiv.org/abs/2309.03561v2
# Trinary decision trees for missing value handling ###### Abstract This paper introduces the Trinary decision tree, an algorithm designed to improve the handling of missing data in decision tree regressors and classifiers. Unlike other approaches, the Trinary decision tree does not assume that missing values contain any information about the response. Both theoretical calculations on estimator bias and numerical illustrations using real data sets are presented to compare its performance with established algorithms in different missing data scenarios (Missing Completely at Random (MCAR), and Informative Missingness (IM)). Notably, the Trinary tree outperforms its peers in MCAR settings, especially when data is only missing out-of-sample, while lagging behind in IM settings. A hybrid model, the TrinaryMIA tree, which combines the Trinary tree and the Missing In Attributes (MIA) approach, shows robust performance in all types of missingness. Despite the potential drawback of slower training speed, the Trinary tree offers a promising and more accurate method of handling missing data in decision tree algorithms. **Keywords:** Missing data, Decision trees, Regularization ## 1. Introduction Missing values are prevalent in real data. As noted by e.g. Nijman et al. (2022), this is often not handled or mentioned in machine learning applications in a satisfactory way. Classification and Regression Trees (CART), as defined by Breiman et al. (1984), provide numerous ways to handle missing values in covariates. Since CARTs are the foundation of many increasingly popular machine learning algorithms such as Gradient Boosting Machines (GBMs) (Friedman, 2001), Random Forests (Ho, 1995), and XGBoost (Chen et al., 2015), they are still relevant today. Yet, the proposed methods of handling missing data come with drawbacks. The simplest way to handle missing values when training a tree is to simply ignore them by discarding data points with any missing feature. This of course means risking losing potentially useful information, and is not an option when predicting on data with missing values. The perhaps second-simplest method is using the _majority rule_, where data points with missing values are assigned to the daughter node with the largest amount of data in training. Another method is presented by Twala et al. (2008), the Missing In Attributes (MIA) algorithm. MIA splits data points with missing covariates in the way that minimizes the loss for the training data. This is similar, but not identical, to assigning missing values their own category in a categorical feature. Quinlan (1993) introduces the C4.5 algorithm for decision trees, and with that a weighted probabilistic strategy for missing-value handling, henceforth referred to as Fractional Case (FC). In FC, a data point with a missing value in a split is assigned a weight of membership in both daughter nodes depending on the distribution of the observable data in the node. For out-of-sample data, the weights for all terminal nodes are calculated and the prediction is given as a weighted average. Breiman et al. (1984) proposes using so-called _surrogate splits_ in order to find other covariates on which the data points that lack the relevant observation can be split to form similar splits. This requires that there are no missing values in the surrogate covariate - or that a secondary surrogate variable is found in its place. 
Cons of these methods include losing potentially useful information (discarding data, FC), assuming there is always information in missingness (MIA), requiring missing values in the training data to be able to handle missing values in out-of-sample prediction (MIA, surrogate splits) or losing interpretability (FC). The trinary decision tree for missing value handling (henceforth Trinary tree) introduced in this paper has four important attributes: * It does not assume that missing data points contain any information about the response * It can handle missing values in predictions even if it was trained on a data set with no missing data * It maintains the interpretability of a standard decision tree * It yields unbiased node parameter estimates when data is missing completely at random - which the other algorithms do not necessarily do The first three are apparent from the algorithm, which is presented in Section 2, whereas the fourth one is proven in Section 3. In Section 4, the algorithm is tested against its peers with real data sets. ## 2. The Trinary tree Consider a loss function \(\mathcal{L}((y_{i})_{i\in\mathcal{I}},\delta)\), where \(\mathcal{I}\) is an index set and \(\delta\) is a parameter. For regression problems, \(\delta\) is a real number, and for classification problems, \(\delta\) is a probability vector. In this paper, two loss functions will be considered: the sum of squared errors (SSE) for regression and the point-wise cross-entropy for classification. These are defined as \[\mathcal{L}^{\text{SSE}}((y_{i})_{i\in\mathcal{I}},\delta)=\sum_{i\in\mathcal{I}}(y_{i}-\delta)^{2},\qquad\mathcal{L}^{\text{XE}}((y_{i})_{i\in\mathcal{I}},\delta)=-\sum_{i\in\mathcal{I}}\log(\delta_{y_{i}}) \tag{1}\] respectively. Let \(\delta_{\mathcal{I}}\) denote the minimizer of \(\mathcal{L}((y_{i})_{i\in\mathcal{I}},\delta)\), i.e. \[\delta_{\mathcal{I}}=\operatorname*{arg\,min}_{\delta}\mathcal{L}((y_{i})_{i\in\mathcal{I}},\delta) \tag{2}\] For both SSE and point-wise cross-entropy, \(\delta_{\mathcal{I}}\) has a closed form solution, namely \[\delta_{\mathcal{I}}^{\text{SSE}}=\frac{1}{|\mathcal{I}|}\sum_{i\in\mathcal{I}}y_{i}\qquad\left(\delta_{\mathcal{I}}^{\text{XE}}\right)_{k}=\frac{1}{|\mathcal{I}|}\sum_{i\in\mathcal{I}}\mathbbm{1}_{\{y_{i}=k\}},\quad k=1,\ldots,K \tag{3}\] where \(K\) is the number of classes in the classification problem. A binary decision tree is generally fitted greedily by minimizing the loss function at each so-called _node_ separately, starting with the _root node_ containing all data points. For a data set \((y_{i},x_{i})_{i=1}^{n}\), where \(y_{i}\in\mathcal{Y}\) is the response and \(x_{ij}\in\mathcal{X}_{j}\), \(j=1,\ldots,p\), is a covariate, this means finding a greedy solution to \[\min_{\begin{subarray}{c}j\in\{1,\ldots,p\},\\ \mathcal{X}_{j}^{l},\mathcal{X}_{j}^{r}\in\mathcal{X}_{j}\end{subarray}}\left\{\mathcal{L}((y_{i})_{i\in\mathcal{I}_{jl}},\delta_{\mathcal{I}_{jl}})+\mathcal{L}((y_{i})_{i\in\mathcal{I}_{jr}},\delta_{\mathcal{I}_{jr}})\right\}, \tag{4}\] where \[\mathcal{I}_{jl}=\{i\in\mathcal{I}:x_{ij}\in\mathcal{X}_{j}^{l}\},\qquad\mathcal{I}_{jr}=\{i\in\mathcal{I}:x_{ij}\in\mathcal{X}_{j}^{r}\}, \tag{5}\] such that \(\mathcal{X}_{j}^{l}\cup\mathcal{X}_{j}^{r}=\mathcal{X}_{j}\). In the case where \(\mathcal{X}_{j}\subseteq\mathbb{R}\), \(\mathcal{X}_{j}^{l}\) and \(\mathcal{X}_{j}^{r}\) are constrained to be continuous intervals. After finding the optimal split, the procedure is repeated for the so-called _daughter nodes_, i.e. the nodes that contain the data points from the two sides of the split. 
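For concreteness, the closed-form minimizers in (3) and the greedy binary split search in (4) can be sketched in a few lines of Python. The snippet below is an illustration added for this write-up rather than the paper's implementation; the function names and the restriction to a single numeric covariate with the SSE loss are assumptions of the example.

```python
import numpy as np

def sse_loss(y, delta):
    # Sum of squared errors around a single prediction delta, cf. (1).
    return float(np.sum((y - delta) ** 2))

def sse_delta(y):
    # Closed-form minimizer of the SSE loss: the node mean, cf. (3).
    return float(np.mean(y))

def best_binary_split(x, y, min_samples=1):
    # Greedy search over thresholds of one numeric covariate, cf. (4):
    # choose the threshold minimizing the summed SSE of the two daughter nodes.
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_loss, best_threshold = np.inf, None
    for cut in range(min_samples, len(xs) - min_samples + 1):
        left, right = ys[:cut], ys[cut:]
        loss = sse_loss(left, sse_delta(left)) + sse_loss(right, sse_delta(right))
        if loss < best_loss:
            best_loss, best_threshold = loss, 0.5 * (xs[cut - 1] + xs[cut])
    return best_loss, best_threshold
```

For classification with the cross-entropy loss, `sse_delta` would instead return the vector of class frequencies in the node, as in (3).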
The procedure is generally continued until reaching some stopping criterion, such as a maximum tree depth. The nodes that are not split are called _terminal nodes_. Data points where the chosen splitting covariate is missing can be handled in a number of ways. The _majority_ strategy assigns them to the daughter node with the largest amount of data (see Algorithm 2 in Appendix A). The _MIA_ strategy assigns them to the daughter node that provides the largest reduction in the loss function (see Algorithm 3 in Appendix A). The _FC_ strategy assigns them to both daughter nodes with weights depending on the distribution of the observable data in the node (see Algorithm 4 in Appendix A). In contrast, the _Trinary_ strategy assigns them to a third daughter node, which changes the minimization problem in (4) to \[\min_{\begin{subarray}{c}j\in\{1,\ldots,p\},\\ \mathcal{X}_{j}^{l},\mathcal{X}_{j}^{r}\in\mathcal{X}_{j}\end{subarray}}\left\{\mathcal{L}((y_{i})_{i\in\mathcal{I}_{jl}},\delta_{\mathcal{I}_{jl}})+\mathcal{L}((y_{i})_{i\in\mathcal{I}_{jr}},\delta_{\mathcal{I}_{jr}})+\mathcal{L}((y_{i})_{i\in\mathcal{I}_{jm}},\delta_{\mathcal{I}})\right\}, \tag{6}\] where \(\mathcal{I}_{jm}=\{i\in\mathcal{I}:x_{ij}\text{ missing}\}\). Note that \(\delta_{\mathcal{I}}\) in the third term is the minimizer of the loss function over the entire data set of the node. This means that the third term evaluates the loss of the points that are not assigned to the left or right nodes as if they retained the parameter estimate of the mother node. After finding a split, the procedure is repeated for all three daughter nodes. For the third node, the entire data set is used for further splitting, but omitting the splitting covariate. Thus, the third node will grow further, first splitting on the second-best covariate. The third node is considered to be at the same depth level as the mother node, since the data set has not been split. The point of the third node is to avoid making assumptions about the missing data. By not assigning the missing data to either of the standard daughter nodes, the Trinary tree does not contaminate the \(\delta\) estimates for the standard nodes with data points that do not belong there. Instead, the Trinary tree uses the entire data set to estimate \(\delta\) for the third node, as a way to regularize predictions when there is uncertainty about important covariates. The Trinary tree training algorithm is summarized in Algorithm 1. A visualization of a Trinary tree with depth 1 is shown in Figure 1. 
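Before the formal pseudocode in Algorithm 1, the key difference between (4) and (6) can be made concrete with a small sketch. Again, this is an illustration written for this text (SSE loss, missing values encoded as NaN, and the function name are all assumptions), not the author's code: it scores one candidate trinary split, with the missing-value points evaluated against the mother node's own estimate.

```python
import numpy as np

def trinary_split_loss(x, y, threshold):
    # Loss of one candidate trinary split, cf. (6): observed values are routed
    # left/right, while missing values (NaN) are scored against the mother
    # node's estimate, so they never distort the left or right estimates.
    missing = np.isnan(x)
    left = ~missing & (x <= threshold)
    right = ~missing & (x > threshold)
    delta_mother = np.mean(y)  # delta_I of the mother node
    loss_left = np.sum((y[left] - np.mean(y[left])) ** 2) if left.any() else 0.0
    loss_right = np.sum((y[right] - np.mean(y[right])) ** 2) if right.any() else 0.0
    loss_missing = np.sum((y[missing] - delta_mother) ** 2)
    return loss_left + loss_right + loss_missing
```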
``` Let \((y_{i},x_{i})_{i\in\mathcal{I}}\), where \(y_{i}\in\mathcal{Y}\), \(x_{ij}\in\mathcal{X}_{j}\), \(j=1,\ldots,p\), be the training data \(\mathcal{L}((y_{i})_{i\in\mathcal{I}},\delta)\) be the loss given parameter \(\delta\) \(\delta_{\mathcal{I}}\) be the parameter that minimizes \(\mathcal{L}\) for responses \((y_{i})_{i\in\mathcal{I}}\) \(\mathcal{I}_{jl}=\{i\in\mathcal{I}:x_{ij}\in\mathcal{X}_{j}^{l}\}\), \(\mathcal{I}_{jr}=\{i\in\mathcal{I}:x_{ij}\in\mathcal{X}_{j}^{r}\}\), and \(\mathcal{I}_{jm}=\{i:x_{ij}\text{ missing}\}\) \(d_{\max}\) be the maximum depth \(n\) \(\triangleright\) the minimum number of samples per node Define training function \(\mathcal{T}\left((y_{i},x_{i})_{i\in\mathcal{I}},d\right)\to h(x)\): If \(d=d_{\max}\) or \(|\mathcal{I}|=n\): Output \(h(x)=\delta_{\mathcal{I}}\) Else: \(\triangleright\) **Fit** Find \(j\), \(\mathcal{X}_{j}^{l}\), and \(\mathcal{X}_{j}^{r}\) that solve \[\min_{j,\mathcal{X}_{j}^{l},\mathcal{X}_{j}^{r}}\left\{\mathcal{L}\left((y_{i})_{i\in\mathcal{I}_{jl}},\delta_{\mathcal{I}_{jl}}\right)+\mathcal{L}\left((y_{i})_{i\in\mathcal{I}_{jr}},\delta_{\mathcal{I}_{jr}}\right)+\mathcal{L}\left((y_{i})_{i\in\mathcal{I}_{jm}},\delta_{\mathcal{I}}\right)\right\}\] such that \(|\mathcal{I}_{l}|\geq n\), \(|\mathcal{I}_{r}|\geq n\), \(\mathcal{X}_{j}^{l}\cup\mathcal{X}_{j}^{r}=\mathcal{X}_{j}\) \(\triangleright\) **Grow** \[h_{l}(x) =\mathcal{T}\left((y_{i},x_{i})_{\mathcal{I}_{l}},d+1\right)\] \[h_{r}(x) =\mathcal{T}\left((y_{i},x_{i})_{\mathcal{I}_{r}},d+1\right)\] \[h_{m}(x) =\mathcal{T}\left((y_{i},x_{i})_{\mathcal{I}},d\right)\] Output \[h(x)=\begin{cases}h_{l}(x),&x_{.j}\in\mathcal{X}_{j}^{l}\\ h_{r}(x),&x_{.j}\in\mathcal{X}_{j}^{r}\\ h_{m}(x),&x_{.j}\text{ missing}\end{cases}\] ``` **Algorithm 1** Trinary tree training algorithm **Remark 1**.: _Most implementations of decision trees use different tricks and approximations to reduce the computational complexity when handling high-cardinality categorical covariates. Two common ones are one-hot encoding and target encoding. In this paper, we do not discuss any of these methods, but the implementation in Section 4 does use the ordering trick described in Section 9.2.4 of Hastie et al. (2009) in cases of binary classification or regression. Note that this is not an approximation but rather a faster way to find appropriate splits. For details, see Hastie et al. (2009)._ ## 3. Tree-fitting estimate bias In order to illustrate how the Trinary tree might be preferable to the other methods, let us examine a simple example where the non-trinary methods' estimators are locally biased. Consider the data set \(\mathcal{D}=(X_{i},Y_{i})_{i=1}^{n}\), where \(X_{i}\in\mathcal{X}\) is the covariate and \(Y_{i}\in\mathbb{R}\) is the response. Let the expected value of \(Y\) follow a tree structure, i.e. let \(\mathbb{E}[Y|X=x]=h(x)\) where \[h(x)=\begin{cases}a,&x\in\mathcal{X}^{l}\\ b,&x\in\mathcal{X}^{r}\end{cases}\] where \(\mathcal{X}^{l}\cup\mathcal{X}^{r}=\mathcal{X}\), \(a<b\) and \(\mathbb{P}(X\in\mathcal{X}^{r})=p\), \(0<p<1\). Then, consider a censored dataset \(\widetilde{\mathcal{D}}=(\tilde{X}_{i},Y_{i})_{i=1}^{n}\) where \[\tilde{X}_{i}=\begin{cases}\texttt{nan},&\text{with probability }q\\ X_{i},&\text{with probability }1-q\end{cases}\] where \(\texttt{nan}\) corresponds to a missing value and \(q>0\). Now, consider fitting trees to this data using an SSE loss function and the Majority, MIA and FC algorithms respectively, and examine their estimates of \(a\). 
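The bias calculations that follow can also be checked numerically. The Monte Carlo sketch below is an illustration added here (the constants, the Gaussian noise model and the variable names are arbitrary assumptions, not part of the original paper): it simulates the censored data set \(\widetilde{\mathcal{D}}\) and compares the majority-rule and Trinary estimates of \(a\).

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, p, q, n, reps = 0.0, 1.0, 0.3, 0.4, 200, 2000

maj_estimates, tri_estimates = [], []
for _ in range(reps):
    right = rng.random(n) < p                      # X_i in X^r with probability p
    y = np.where(right, b, a) + rng.normal(0.0, 0.1, n)
    missing = rng.random(n) < q                    # MCAR censoring with probability q
    left_obs, right_obs = ~missing & ~right, ~missing & right
    # Majority rule: missing points join the larger observed daughter node.
    if left_obs.sum() > right_obs.sum():
        maj_estimates.append(y[left_obs | missing].mean())
    else:
        maj_estimates.append(y[left_obs].mean())
    # Trinary: the left-node estimate uses observed left-node points only.
    tri_estimates.append(y[left_obs].mean())

# The majority-rule estimate of a drifts upwards towards b; the Trinary one does not.
print(np.mean(maj_estimates), np.mean(tri_estimates))
```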
For brevity, let \(\mathcal{I}_{l}\) and \(\mathcal{I}_{r}\) be defined as in (5) and introduce index sets \[\mathcal{I}^{o}=\left\{i:\tilde{X}_{i}=X_{i}\right\},\qquad\mathcal{I}^{m}=\left\{i:\tilde{X}_{i}=\texttt{nan}\right\},\] as well as the intersections \[\mathcal{I}^{o}_{l}=\mathcal{I}^{o}\cap\mathcal{I}_{l},\qquad\mathcal{I}^{o}_{r}=\mathcal{I}^{o}\cap\mathcal{I}_{r},\] \[\mathcal{I}^{m}_{l}=\mathcal{I}^{m}\cap\mathcal{I}_{l},\qquad\mathcal{I}^{m}_{r}=\mathcal{I}^{m}\cap\mathcal{I}_{r}.\] For the majority rule algorithm, the estimate of \(a\) is \[\hat{a}_{\text{Maj}}=\begin{cases}\frac{1}{\left|\mathcal{I}_{l}^{o}\right|+\left|\mathcal{I}^{m}\right|}\sum\limits_{i\in\mathcal{I}_{l}^{o}\cup\mathcal{I}^{m}}Y_{i},&\text{if }\left|\mathcal{I}_{l}^{o}\right|>\left|\mathcal{I}_{r}^{o}\right|,\\ \frac{1}{\left|\mathcal{I}_{l}^{o}\right|}\sum\limits_{i\in\mathcal{I}_{l}^{o}}Y_{i},&\text{else},\end{cases}\] since the missing data points are assigned to the left node exactly when the left node holds the majority of the observed data. First note that \[\mathbb{E}\left[\hat{a}_{\text{Maj}}\,\middle|\,\left|\mathcal{I}_{l}^{o}\right|\leq\left|\mathcal{I}_{r}^{o}\right|\right]=a\] together with that \[\mathbb{E}\left[\hat{a}_{\text{Maj}}\,\middle|\,\left|\mathcal{I}_{l}^{o}\right|>\left|\mathcal{I}_{r}^{o}\right|\right] =\mathbb{E}\left[\frac{1}{\left|\mathcal{I}_{l}^{o}\right|+\left|\mathcal{I}_{l}^{m}\right|+\left|\mathcal{I}_{r}^{m}\right|}\left(\sum_{i\in\mathcal{I}_{l}^{o}}Y_{i}+\sum_{i\in\mathcal{I}_{l}^{m}}Y_{i}+\sum_{i\in\mathcal{I}_{r}^{m}}Y_{i}\right)\right]\] \[=\mathbb{E}\left[\frac{1}{\left|\mathcal{I}_{l}^{o}\right|+\left|\mathcal{I}_{l}^{m}\right|+\left|\mathcal{I}_{r}^{m}\right|}\left(\sum_{\mathcal{I}\in\left\{\mathcal{I}_{l}^{o},\mathcal{I}_{l}^{m},\mathcal{I}_{r}^{m}\right\}}\sum_{i\in\mathcal{I}}\mathbb{E}\left[Y_{i}|i\in\mathcal{I}\right]\right)\right]\] \[=\mathbb{E}\left[\frac{1}{\left|\mathcal{I}_{l}^{o}\right|+\left|\mathcal{I}_{l}^{m}\right|+\left|\mathcal{I}_{r}^{m}\right|}\left(a\left|\mathcal{I}_{l}^{o}\right|+a\left|\mathcal{I}_{l}^{m}\right|+b\left|\mathcal{I}_{r}^{m}\right|\right)\right]\] \[\geq\frac{1}{\mathbb{E}\left[\left|\mathcal{I}_{l}^{o}\right|+\left|\mathcal{I}_{l}^{m}\right|+\left|\mathcal{I}_{r}^{m}\right|\right]}\mathbb{E}\left[a\left|\mathcal{I}_{l}^{o}\right|+a\left|\mathcal{I}_{l}^{m}\right|+b\left|\mathcal{I}_{r}^{m}\right|\right]\] \[=a+\frac{pq}{1-p+pq}\left(b-a\right)\] where the concavity of the function \[g(x,y,z)=\frac{x}{x+y+z}\] is used for the inequality. Figure 1. Visualization of a Trinary tree with depth 1 for a covariate with \(p=2\) dimensions. Note that since the third node is considered to be at the same depth level as the root node, an additional split is made. Since the best performing split covariate \(j=1\) is no longer available, the second-best split covariate \(j=2\) is used for the second split. Since that covariate could also be missing, the third node has its own third daughter node. Since there are no further covariates to split on, that node is a terminal node. 
Then, introduce \(\kappa=\mathbb{P}\left(\left|\mathcal{I}_{l}^{o}\right|\leq\left|\mathcal{I}_{r}^{o}\right|\right)\), and note that \[\mathbb{E}\left[\hat{a}_{\text{Maj}}\right] =\kappa\,\mathbb{E}\left[\hat{a}_{\text{Maj}}\,\middle|\,\left|\mathcal{I}_{l}^{o}\right|\leq\left|\mathcal{I}_{r}^{o}\right|\right]+\left(1-\kappa\right)\mathbb{E}\left[\hat{a}_{\text{Maj}}\,\middle|\,\left|\mathcal{I}_{l}^{o}\right|>\left|\mathcal{I}_{r}^{o}\right|\right]\] \[\geq\kappa a+\left(1-\kappa\right)\left(a+\frac{pq}{1-p+pq}\left(b-a\right)\right)\] \[=a+\frac{\left(1-\kappa\right)pq}{1-p+pq}\left(b-a\right)\] \[>a\] The proof for the MIA strategy is identical, with the only change being that \(\kappa\) then means the probability that the loss is lower if the missing values are assigned to the left node. For the Fractional Case strategy, the estimate of the parameter \(a\) has expected value \[\mathbb{E}\left[\hat{a}_{\text{FC}}\right] =\mathbb{E}\left[\frac{1}{\sum\limits_{i=1}^{n}w_{i}^{l}}\sum\limits_{i=1}^{n}w_{i}^{l}Y_{i}\right]\] \[\geq\frac{1}{\mathbb{E}\left[\sum\limits_{i=1}^{n}w_{i}^{l}\right]}\mathbb{E}\left[\sum\limits_{i=1}^{n}w_{i}^{l}Y_{i}\right]\] \[=a+pq\left(b-a\right)\] \[>a,\] where \(w_{i}^{l}\) denotes the weight of membership of data point \(i\) in the left node. It can also be shown that in order for \(\mathbb{E}[\hat{Y}_{\text{FC}}]=\mathbb{E}[Y]\) to hold, it is required that \(\mathbb{E}[\hat{b}_{\text{FC}}]<b\). Finally, for the Trinary tree, see that \[\mathbb{E}\left[\hat{a}_{\text{Tri}}\right]=\mathbb{E}\left[\frac{1}{\left|\mathcal{I}_{l}^{o}\right|}\sum\limits_{i\in\mathcal{I}_{l}^{o}}Y_{i}\right]=a.\] ## 4. Numerical illustration In order to illustrate the benefits of the Trinary tree, data with increasing missingness is created for the data sets in Table 1. All the data sets are available online. The data sets have been chosen in order to provide a wide array of applications, and varying characteristics of the data. The performance of the individual data sets will not be evaluated specifically in this paper, but rather the performance of the algorithms on the data sets as a whole. The data sets have been minimally pre-processed, e.g. by removing any potential missing values. This is done in order to have full control over the missingness. The tree depth is first tuned by performing 10-fold cross validation on the full data set, with a maximum tree depth set to 5. For the classification problems, the folds are stratified so that the relative class frequency is equal in every fold. The data set is then censored, i.e. values are replaced with missing values (with missingness \(q\%\) ranging from \(0\%\) to \(90\%\)), in three different ways. **MCAR**: \(q\%\) of the data is removed from all features in the training and test set, completely at random. **MCARTest**: \(q\%\) of the data is removed from all features in the test set, completely at random. The training set is the full data set. **IM**: \(q\%\) of the data is removed from all features in the training and test set. For numerical features, the largest values are removed first. For categorical ones, they are removed on a category-by-category basis. Thereafter, the four different tree algorithms (Majority, MIA, FC, and Trinary) are trained and evaluated on the 10 folds. Additionally, a fifth tree algorithm, denoted TrinaryMIA, is evaluated. This is an amalgamation of Trinary and MIA that in every node evaluates whether a MIA-style or Trinary-style split reduces the training loss the most and then picks the one that does. 
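The three censoring schemes are straightforward to reproduce. The following sketch (illustrative only, restricted to numeric features, with hypothetical function names; not taken from the paper's implementation) shows how the MCAR, MCARTest and IM censoring of the covariates can be generated.

```python
import numpy as np

def censor_mcar(X, q, rng):
    # MCAR: each entry is removed independently with probability q.
    X = np.array(X, dtype=float)
    X[rng.random(X.shape) < q] = np.nan
    return X

def censor_im(X, q):
    # IM for numeric features: the largest values in each column are removed first.
    X = np.array(X, dtype=float)
    for j in range(X.shape[1]):
        cutoff = np.quantile(X[:, j], 1.0 - q)
        X[X[:, j] > cutoff, j] = np.nan
    return X

rng = np.random.default_rng(1)
X_train, X_test = rng.normal(size=(100, 4)), rng.normal(size=(50, 4))
q = 0.3
mcar = censor_mcar(X_train, q, rng), censor_mcar(X_test, q, rng)   # MCAR: both sets censored
mcartest = X_train, censor_mcar(X_test, q, rng)                    # MCARTest: only the test set censored
im = censor_im(X_train, q), censor_im(X_test, q)                   # IM: value-dependent censoring
```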
First, the total loss is calculated for the entire data set with no missing values, referred to as the _full loss_. Then the total loss is calculated for the data set with increasing missingness, and the excess loss (i.e. the loss divided by the full loss) is calculated. The average excess loss over all data sets is presented in Figure 2. Since the MIA and majority strategies are identical in cases where there is no missing data in the training data, MIA is omitted from the middle figure. The same applies to Trinary and TrinaryMIA trees. As can be seen, the TrinaryMIA tree is the best performer in the MCAR case, followed by the Trinary tree. For higher levels of missingness all algorithms seem to perform almost equally badly. In the MCARTest case, the Trinary tree is the best performer, followed by the FC tree. For the IM case, the TrinaryMIA and MIA trees perform very similarly, and are the best performers. The performance of MIA in this setting is expected, but it seems like the TrinaryMIA tree is able to find the appropriate splits as well. \begin{table} \begin{tabular}{c|l r r} type & Name & size & features \\ \hline \multirow{4}{*}{regression} & AutoMPG & 392 & 8 \\ & Black Friday & 550,068 & 6 \\ & Cement & 1,030 & 9 \\ & Life Expectancy & 138 & 17 \\ \hline \multirow{4}{*}{classification} & Titanic & 712 & 7 \\ & Lymphography & 142 & 19 \\ \cline{1-1} & Boston Housing & 506 & 14 \\ \cline{1-1} & Wheat seeds & 199 & 8 \\ \end{tabular} \end{table} Table 1. Data sets for the numerical illustration Figure 2. Average excess loss per missingness ratio for the tree algorithms in different kinds of missingness on all data sets ## 5. Concluding remarks It is clear both from the bias calculations and the numerical illustration that the Trinary tree has benefits over its peers in MCAR settings. In particular, the performance of the algorithm in the case where data is only missing in the test set is noteworthy. It is however important to remember that assuming that missingness contains no information is an assumption in itself - as seen by the less impressive performance in the IM test. This drawback seems to be easily overcome by the TrinaryMIA tree, which maintains performance in all types of missingness. Surprisingly, the TrinaryMIA tree also outperforms the Trinary tree in the MCAR case, although not by a large margin. The potential of using the Trinary tree algorithm as a weak learner in more powerful machine learning algorithms, such as a GBM, is an interesting topic, since the regularization and missing value-handling would then be inherited by the ensemble model. The main drawback for the Trinary tree, if controlling for informative missingness by using the TrinaryMIA method, is that for large data sets (especially covariate data sets with many features and categorical features with high cardinality) or deep trees, the training speed can suffer. For shallow trees and data sets with a limited number of covariates, the speed is however on par with the other methods. The implementation used in the numerical illustration here is not optimized for speed, and since node training can be parallelized, there is further speed potential if computation time is of importance. It should also be noted that the TrinaryMIA tree is often slightly faster since it, if there is information in missingness in the training data, will grow fewer nodes than a standard Trinary tree. ## Acknowledgements During the preparation of this work the author used ChatGPT and GitHub Copilot in order to improve language and readability. 
After using these tools, the author reviewed and edited the content as needed and takes full responsibility for the content of the publication.
2309.13772
Motion Segmentation from a Moving Monocular Camera
Identifying and segmenting moving objects from a moving monocular camera is difficult when there is unknown camera motion, different types of object motions and complex scene structures. To tackle these challenges, we take advantage of two popular branches of monocular motion segmentation approaches: point trajectory based and optical flow based methods, by synergistically fusing these two highly complementary motion cues at object level. By doing this, we are able to model various complex object motions in different scene structures at once, which has not been achieved by existing methods. We first obtain object-specific point trajectories and optical flow mask for each common object in the video, by leveraging the recent foundational models in object recognition, segmentation and tracking. We then construct two robust affinity matrices representing the pairwise object motion affinities throughout the whole video using epipolar geometry and the motion information provided by optical flow. Finally, co-regularized multi-view spectral clustering is used to fuse the two affinity matrices and obtain the final clustering. Our method shows state-of-the-art performance on the KT3DMoSeg dataset, which contains complex motions and scene structures. Being able to identify moving objects allows us to remove them for map building when using visual SLAM or SFM.
Yuxiang Huang, John Zelek
2023-09-24T22:59:05Z
http://arxiv.org/abs/2309.13772v1
# Motion Segmentation from a Moving Monocular Camera ###### Abstract Identifying and segmenting moving objects from a moving monocular camera is difficult when there is unknown camera motion, different types of object motions and complex scene structures. To tackle these challenges, we take advantage of two popular branches of monocular motion segmentation approaches: point trajectory based and optical flow based methods, by synergistically fusing these two highly complementary motion cues at object level. By doing this, we are able to model various complex object motions in different scene structures at once, which has not been achieved by existing methods. We first obtain object-specific point trajectories and optical flow mask for each common object in the video, by leveraging the recent foundational models in object recognition, segmentation and tracking. We then construct two robust affinity matrices representing the pairwise object motion affinities throughout the whole video using epipolar geometry and the motion information provided by optical flow. Finally, co-regularized multi-view spectral clustering is used to fuse the two affinity matrices and obtain the final clustering. Our method shows state-of-the-art performance on the KT3DMoSeg dataset, which contains complex motions and scene structures. Being able to identify moving objects allows us to remove them for map building when using visual SLAM or SFM. ## I Introduction The objective of motion segmentation is to divide a video frame into regions of common motion. Motion segmentation from a moving camera is particularly important as it has various applications in areas like autonomous navigation, robotics and SLAM. In a dynamic scene, the video camera is moving at an unknown velocity with respect to the environment. Such scenarios pose many challenges to motion segmentation methods such as motion degeneracy, motion parallax and motion on the epipolar plane [1]. Existing monocular motion segmentation methods often fail when facing these challenges [2, 3, 4], or need specific assumptions on the scene structure, object classes or motion types [5, 6, 7]. In order to overcome these limitations and achieve high quality instance motion segmentation results regardless of scene structures and motion types, we draw our inspiration from two branches of well studied motion segmentation approaches: optical flow based methods and point trajectory based methods. These two types of motion cues are not only complementary in nature (long-term vs instantaneous motion), but they can also be used to derive highly complementary geometric and motion models for different motion types and scene structures: point trajectory based methods, when analyzed using epipolar geometry, will fail if the motion is mainly on the epipolar plane, but they are robust to depth variations, perspective effects and motion parallax; on the other hand, optical flow based methods will fail on these challenges, but they are robust to motions on the epipolar plane. We propose to combine these two complementary motion cues at object level to obtain a robust and comprehensive motion representation of the scene. By using the state-of-the-art methods for object recognition, detection, segmentation and tracking, we can easily obtain an objectiveness prior (i.e., an initial grouping) of all motion data. 
This approach enables us to analyze motions of each individual object, which is crucial for both robots and humans to build their situational awareness and scene understanding capabilities [8, 9]. To build our motion segmentation framework, we first leverage state-of-the-art deep learning models [10, 11, 12, 13] to recognize, detect, segment and track any common objects throughout the video. Then, for every object in the video, we obtain (1) a set of sparse point trajectories for each object and (2) a dense optical flow mask for each object on each frame. By using object-specific sparse point trajectories and optical flow masks, we are able to derive object-specific geometric models (i.e. fundamental matrices based on epipolar geometry) and instantaneous motion models respectively on every frame pair where the object is visible. By fusing these two highly complementary models, we are theoretically able to model the vast majority of motions even in complex scenes. Our experiments show significant improvements on motion segmentation results in challenging scenarios, highlighting the approach's potential in real-world applications. In summary, the key contributions of our paper are as follows: 1. We combine the well-studied fundamental matrix motion model and the optical flow based instantaneous motion model using multi-view spectral clustering to model multiple complex motions in challenging scenes. 2. We show how to model different motions at object level by incorporating per-frame objectiveness prior obtained from recent computer vision foundational models. 3. We achieve state-of-the-art result on the challenging KT3DMoSeg dataset [4] in terms of both producing high-quality point trajectory clustering and pixel-level masks for individual moving instances. ## II Related Work Monocular motion segmentation can be broadly categorized into three groups: (1) Intensity based methods [14, 15, 16, 17, 18], (2) sparse correspondence based methods [19, 20, 21, 22, 23, 24, 25, 26] and (3) deep learning methods [5, 6, 7, 27, 28, 29, 30, 31, 32]. ### _Intensity Based Methods_ Intensity based methods can be further categorized into indirect and direct methods. Indirect methods [15, 16, 17] rely on pixel-wise correspondences as input, and produce a pixel-wise segmentation mask indicating different motion groups. Such input is usually optical flow, which assumes the brightness or color intensity of every point in the scene remains the same throughout the whole sequence. In contrast, direct methods [14] do not require optical flow - they combine the two processes of optimizing for the brightness consistency constraint and estimating the motion models together and directly take a pair of images as input. Most recent works on intensity based methods use optical flow based indirect methods, possibly due to the fast advance in optical flow estimation [33, 34]. Such methods usually adopt an iterative optimization approach to estimate the motion model and motion region simultaneously. Intensity-based methods tend to perform well on scenes without strong depth variations or motion parallax as the motion flow vectors projected to a 2D image from the 3D space are determined by both the depth and the screw motion of objects [35]. Therefore, if the scene contains strong depth variation (e.g. road scenes), intensity-based methods will fail to distinguish if a part of the image is moving independently or is just at a different depth from its surroundings. 
### _Sparse Correspondence Based Methods_ Sparse correspondence based methods can be further categorized into two-frame and multi-frame methods. Two-frame methods [19, 20] usually recover motion parameters by solving an iterative energy minimization problem of finding a certain number of geometric models (e.g., fundamental matrices) on a set of matched feature points, to minimize an energy function that evaluates the quality of the overall clustering of correspondences. Multi-frame methods often operate on manually corrected trajectory points obtained from a dense optical flow tracker. Such methods usually perform clustering on affinity matrices constructed using the results of geometric model fitting [25, 4, 26] or pairwise affinities derived from spatio-temporal motion cues and appearance cues [22, 24]. Point trajectories, when analyzed using epipolar geometry, are robust to depth variations in the scene, but are prone to failure under motions on the epipolar plane. [26] uses the trifocal tensor as a more robust model to analyze point trajectories. The trifocal tensor is more robust to noise and motions on the epipolar plane, but it is harder to optimize and prone to failure when the three cameras are close to being collinear [1], which can often happen on road scenes. [2, 4] produce promising results by combining multiple geometric models, but still fail to produce a coherent and consistent segmentation on some scenes. Methods using spatio-temporal information and appearance cues [22, 24] suffer from similar issues. In addition, although they tend to perform a bit better than geometric methods on motions with less rigidity [36], they perform worse on scenes with motion parallax or strong camera motions. ### _Deep Learning Based Methods_ Deep learning based methods usually take a pair or a sequence of input frames as input and directly produce either a binary segmentation mask of moving vs static objects [27, 6, 28], or a multi-label segmentation mask showing different objects of different motions [29, 5, 30, 2, 7, 31]. Most deep learning methods use CNNs and their network architecture can be broadly summarized to have the following main components: (1) a module to extract the appearance information from consecutive frames, (2) a module to extract motion information from the same frames, (3) a module to fuse the appearance and motion information, and (4) a decoder to generate the final segmentation. These methods are usually fully-supervised and require a large amount of training data. Besides, these methods are often only able to perform binary motion detection (moving vs. static) [28, 7, 31], or they are only able to detect instance motions for specific scenes and a limited number of object types they are trained on [29, 30]. ## III Methodology ### _Object Recognition, Detection, Segmentation and Tracking_ Figure 1 shows a diagram of our motion segmentation pipeline. In order to identify all motions in a video sequence at object level, we first identify every common object in the video and track their movements throughout the video. We achieve this by using the most recent foundational models in object recognition (Recognize Anything Model) [12], detection (Grounding DINO model) [11] and segmentation (Segment Anything Model) [10], and a state-of-the-art object tracker (DeAOT) [13]. We adapt our preprocessing pipeline from Segment and Track Anything (SAMTrack) [37], which is an object segmentation and tracking framework made of the Grounding DINO model, Segment Anything Model (SAM) and the DeAOT tracker. 
SAMTrack allows the user to segment and track any specific objects in the video with a text prompt. To make our system fully automatic and universal to all motions in most scenes, we avoid using the user-defined text prompt by adding RAM at the beginning of our pipeline to automatically recognize any common objects in the video. In summary, our whole preprocessing pipeline consists of the following main steps: 1) Use RAM to recognize any common objects in the first frame of the video; 2) Feed the output of RAM as a text prompt to the Grounding DINO model to obtain object bounding boxes; 3) Feed these bounding boxes to SAM to obtain an instance segmentation mask of the first frame. Non-max suppression was used to remove objects with an IoU score \(>\) 0.5 and instances whose bounding boxes are larger than half the image are also removed; 4) Use the DeAOT tracker to track each object's mask throughout the entire video. In order to detect new objects entering the scene in the middle of the video, we run step 1) every \(l\) frames to check if there are new objects. If so, we then run steps 2) to 4) to segment and track the new objects together with the existing objects in the frame. The number \(l\) is video-specific; for example, more dynamic videos with more objects entering and leaving the scene would benefit from a smaller \(l\). ### _Obtaining Point Trajectories and Optical Flow Masks_ Once we have the instance segmentation mask for every frame of the video, we then need to obtain a set of sparse point trajectories and a dense optical flow mask on every object at every frame where it's visible. Essentially, we aim to assign an object label to every optical flow vector and point trajectory. We use one of the state-of-the-art methods [38] to obtain optical flow for all frames, then label each flow vector with the object label of the pixel. Ideally, point trajectories need to be sampled from each object and automatically tracked at each frame using a point tracker (e.g., [39, 40]); however, for benchmarking purposes, we use the manually corrected point trajectories provided by the KT3DMoSeg dataset. We assign every point trajectory with an initial object label using the per-frame instance segmentation masks. If a trajectory does not belong to any object, we label it as the background. Due to inaccuracy in instance segmentation and tracking, a point trajectory can be identified to be on different objects or background in different frames. In such a case, we assign its object label to be the most frequent label it is identified as. In the future, we will use a point tracker as well as robust point sampling and occlusion handling techniques to automatically generate point trajectories for all detected objects. ### _Model Fitting_ After obtaining instance point trajectories and optical flow masks for each frame, we compute the motion models of each object to model its motion throughout the video. To compute the epipolar geometry based motion models using point trajectories, we compute a fundamental matrix of each object between every \(f\) frames by solving \(p^{\prime\top}Fp=0\) using the eight-point algorithm. If a degenerate case is encountered for the fundamental matrix, we do not use it. For the optical flow based motion model, we use a full quadratic motion model with 12 parameters to model the instantaneous object screw motion: \[\begin{split} f(x,y)=&(a+bx+cy+dx^{2}+exy+fy^{2},\\ & g+hx+iy+jx^{2}+kxy+ly^{2})\end{split} \tag{1}\] where _(x, y)_ are the 2D coordinates of a pixel relative to the image center. 
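Since the model in (1) is linear in its 12 parameters, fitting it to an object's flow vectors reduces to ordinary least squares. The sketch below (NumPy, with function names chosen here; an illustration rather than the authors' implementation) shows one way to fit the model and to evaluate the mean-squared-error residual used for the cross-object comparisons described later.

```python
import numpy as np

def quadratic_design(xy):
    # Monomial basis [1, x, y, x^2, xy, y^2] for centred pixel coordinates.
    x, y = xy[:, 0], xy[:, 1]
    return np.stack([np.ones_like(x), x, y, x ** 2, x * y, y ** 2], axis=1)

def fit_quadratic_flow_model(xy, uv):
    # Least-squares fit of the 12-parameter model in (1): six coefficients
    # for the horizontal flow component and six for the vertical one.
    A = quadratic_design(xy)
    params_u, *_ = np.linalg.lstsq(A, uv[:, 0], rcond=None)
    params_v, *_ = np.linalg.lstsq(A, uv[:, 1], rcond=None)
    return params_u, params_v

def flow_model_mse(xy, uv, params_u, params_v):
    # Mean squared error of flow vectors uv under a (possibly another object's) model.
    A = quadratic_design(xy)
    pred = np.stack([A @ params_u, A @ params_v], axis=1)
    return float(np.mean(np.sum((uv - pred) ** 2, axis=1)))
```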
Since we already have the instance optical flow field, we can obtain the following equation: \[f(x,y)=(u,v) \tag{2}\] where \((u,v)\) is the optical flow vector of the pixel. We fit the function 1 above to the the optical flow vectors of every object and solve for the 12 parameters representing the object motion model by optimizing the mean squared error. We use this specific motion model as it's a simplified version of the classic Longuet-Higgins and Pruzdny model equations [41], which model's the instantaneous screw motion of rigid objects at arbitrary depth. Since it's not possible to solve for the depth of each pixel, this motion model assumes the objects' depths are only slightly different. It was shown to perform well on scenes with limited motion parallax [32], nevertheless, it often fails when there is strong motion parallax and depth variations. ### _Constructing Affinity Matrices_ After all fundamental matrices and optical flow motion models are computed, each object will have a fundamental matrix between every \(f\) frames and an optical flow motion model between every two frames. By fitting every object's trajectory points and optical flow vectors to every other object's fundamental matrix and optical flow motion model on the same frame pair, we can obtain the residuals of every object to all other objects' motion models respectively. We use Sampson distance as the residual for fundamental matrix [1] and mean squared error for optical flow motion model. Assuming there are \(k\) objects in the scene, for the _i_-th object at the _m_-th frame pair, we obtain the following residual vectors under the fundamental matrix and optical flow motion models: \[\begin{split}\boldsymbol{r}_{o_{i}}^{m}=&[r_{o_{i },1}^{m},r_{o_{i},2}^{m},...,r_{o_{i},k}^{m}],\\ \boldsymbol{r}_{f_{i}}^{m}=&[r_{f_{i},1}^{m},r_{f_{i },2}^{m},...,r_{f_{i},k}^{m}]\end{split}\] where \(r_{o_{i},k}^{m}\) is the mean residual for fitting the parametric motion model of object \(i\) on the optical flow vectors of object \(k\) between frames \(m\) and \(m\) + 1, and \(r_{f_{i},k}^{m}\) is the mean Sampson error for fitting the fundamental matrix of object \(i\) on the trajectory points of object \(k\) between frames \(m\) and \(m\) + \(f\). We construct two affinity matrices encapsulating the pairwise motion affinities between each pair of objects using the ordered residual kernal (ORK) [42]. More specifically, for Fig. 1: Motion Segmentation Pipeline each object, we sort its residual vectors in ascending order and define a threshold to select the smallest \(t\)-th residual as infiers. We define \(\mathbf{c}_{i}=\{0,1\}^{K}\) as the inlier mask to denote if an object i is an inlier for each of the \(K\) motion models, and the pairwise motion affinity between objects \(i\) and \(j\) can be computed as \(\mathbf{a}_{ij}=\mathbf{c}_{i}^{\intercal}\mathbf{c}_{j}\), which denotes the co-occurrence between two objects as an inlier of all motion models. ORK is robust to outliers and makes the affinity matrix more adaptive to different scenes by reducing the need to set scene specific inlier thresholds. ### _Co-Regularized Multi-view Spectral Clustering_ After constructing the affinity matrices, we use the epsilon-neighborhood scheme [25] to sparsify the affinity matrices. We adapt co-regularized multi-view spectral clustering [43] to fuse the two affinity matrices together to obtain the final clustering of object motions and trajectory points. 
Co-regularized multi-view spectral clustering uses an regularization term to encourage consensus between different views and is shown to perform well on fusing multiple geometric models for a consistent representation of data [4]. ## IV Preliminary Results & Conclusions We tested our method on the KT3DMoSeg dataset since it is the only existing dataset involving challenging scenes and multiple complex motions. Since KT3DMoSeg uses pre-defined point trajectories, it could occur that an object segmented by our preprocessing pipeline has fewer than 7 point trajectories. In this case, it's not possible to compute fundamental matrices for the object, but we can still obtain its residual vectors by fitting its trajectory points (if there is any) and the optical flow vectors on the motion models of other objects to compute its pairwise motion affinity scores. Figure 2 shows a qualitative comparison between the segmentation masks generated by our method and a baseline we created using a state-of-the-art method [4] whose code is publicly available. To establish the baseline, we prompt SAM using the clustered trajectories of [4] (top left) to produce the segmentation mask. Even though [4] achieves very low clustering error rate on this sequence, the quality of the generated segmentation mask is still worse than ours since a few wrongly labeled trajectory points can easily mislead SAM. It's also not able to recognize which motion is background. Figure 3 shows qualitative ablation study and comparison to state-of-the-art methods. Table I shows quantitative results comparing to existing methods. Our method outperforms the state-of-the-art both qualitatively and quantitatively. We also compare the total running time (measured on an Intel 13900K CPU and an NVIDIA RTX 4090 GPU) of our method with [4] for reference. Our method is more than twice as slow since it's not yet optimized. \begin{table} \begin{tabular}{l c c} \hline \hline **Methods** & **Avg. Error Rate (\%)** & **Running Time (s)** \\ \hline LSA [44] & 38.30 & - \\ GPCA [45] & 34.60 & - \\ ALC [46] & 24.31 & - \\ SSC [23] & 33.88 & - \\ LRR [47] & 33.67 & - \\ MVC [4] & 10.99 & 1409.6 \\ CMRO [2] & 6.73 & - \\ Ours & **5.78** & 3230.1 \\ \hline \hline \end{tabular} \end{table} TABLE I: Quantitative results in terms of average classification error (%) and total running time (s). Lower is better Fig. 3: Qualitative ablation study and comparison with state-of-the-art methods on the Seq38_Clip02 sequence. From top to bottom: fundamental matrix only, optical flow only, fundamental matrix + optical flow, MVC [4], CMRO [2] Fig. 2: Trajectory clustering results and the generated segmentation masks by [4] (top) and our method (bottom)
2309.15465
Cross-Dataset Experimental Study of Radar-Camera Fusion in Bird's-Eye View
By exploiting complementary sensor information, radar and camera fusion systems have the potential to provide a highly robust and reliable perception system for advanced driver assistance systems and automated driving functions. Recent advances in camera-based object detection offer new radar-camera fusion possibilities with bird's eye view feature maps. In this work, we propose a novel and flexible fusion network and evaluate its performance on two datasets: nuScenes and View-of-Delft. Our experiments reveal that while the camera branch needs large and diverse training data, the radar branch benefits more from a high-performance radar. Using transfer learning, we improve the camera's performance on the smaller dataset. Our results further demonstrate that the radar-camera fusion approach significantly outperforms the camera-only and radar-only baselines.
Lukas Stäcker, Philipp Heidenreich, Jason Rambach, Didier Stricker
2023-09-27T08:02:58Z
http://arxiv.org/abs/2309.15465v1
# Cross-Dataset Experimental Study of Radar-Camera Fusion in Bird's-Eye View ###### Abstract By exploiting complementary sensor information, radar and camera fusion systems have the potential to provide a highly robust and reliable perception system for advanced driver assistance systems and automated driving functions. Recent advances in camera-based object detection offer new radar-camera fusion possibilities with bird's eye view feature maps. In this work, we propose a novel and flexible fusion network and evaluate its performance on two datasets: nuScenes and View-of-Delft. Our experiments reveal that while the camera branch needs large and diverse training data, the radar branch benefits more from a high-performance radar. Using transfer learning, we improve the camera's performance on the smaller dataset. Our results further demonstrate that the radar-camera fusion approach significantly outperforms the camera-only and radar-only baselines. Autonomous Driving, Machine Learning Methods, Datasets, Object Detection, Automotive Radar, Computer Vision, Fusion ## I Introduction Automotive radar technology has become an important building block for advanced driver assistance systems and automated driving functions, where the goal is to make driving safer and more comfortable. Radar has several advantages when compared to LiDAR or camera: it is more robust and preferred in adverse weather conditions, it provides a higher range and direct measurements of the relative radial velocity, and often it is more affordable than LiDAR. In particular, radar-camera fusion currently appears to be the most relevant sensor combination in the automotive industry, providing complementary environmental information: radar sensors provide accurate distance and velocity measurements, while cameras provide accurate angular and rich semantic information. Machine learning methods for 3D object detection have advanced significantly in the last years with the availability of large annotated datasets. These are traditionally dominated by LiDAR and camera perception, due to the fact that automotive radar used to have limited performance in terms of angular resolution, point cloud density, and object classification capabilities. However, these limitations are gradually dissolving with the development towards high-performance radars [1] and the emergence of corresponding datasets [2]. In this work, we select two suitable radar datasets for our experimental study: the nuScenes [3] and the View-of-Delft [4] dataset. A recent trend in 3D object detection is to transform camera features into a common bird's eye view (BEV) representation, which offers a flexible architecture for fusion, either among multiple cameras or with a ranging sensor. In this work, we extend the BEVFusion [5] method, that was originally developed for LiDAR-camera fusion, to perform radar-camera fusion. We train and evaluate our presented fusion method with the selected radar datasets. In several experiments, we discuss strengths and weaknesses of each dataset. Finally, we apply transfer learning to achieve further improvements. ### _Radar Datasets_ Over the last few years, a large number of different automotive radar datasets for machine learning has been released. A recent overview is presented in [2]1, which includes traditional radar with a limited angular resolution, high-performance radars with denser point clouds, and non-automotive type scanning radar. 
Note that traditional radars provide a 2+1D point cloud by measuring the range and azimuth angle, and the relative radial velocity in addition, whereas high-performance radars also measure the elevation angle and provide a 3+1D point cloud. There are datasets with different measurement data such as radar point clouds or pre-CFAR radar data, and different annotations such as 3D bounding boxes, semantic point cloud segmentation, or localization information. In this work, we focus on datasets with radar point clouds and 3D bounding boxes. Among the traditional radar datasets, we find the widely used nuScenes [3] dataset, RadarScenes [6] for semantic point cloud segmentation, and the recent aiMotive [7] dataset with focus on long range. Among the high-performance radar datasets, we find the small Astyx [8] dataset, the View-of-Delft [4] dataset with focus on vulnerable road users, and very recently TJ4DRadSet [9], and K-Radar [10]. Footnote 1: The corresponding author of [2] regularly updates a curated list of radar datasets for detection, tracking and fusion methods on his github. For our experimental study, a reasonable number of annotated frames and objects is necessary. After careful examination of available datasets, we have selected the nuScenes and View-of-Delft dataset as suitable. For comparability, only the front-view of nuScenes is evaluated and only classes pedestrian, cyclist, and car are considered. The nuScenes dataset covers 40k annotated frames, sampled at 2 Hz. Each frame has measurements from six cameras with \(70^{\circ}\) field of view (FoV), one Velodyne HDL-32E LiDAR, and five Continental ARS408 2+1D radars, for which only a limited number of radar detections is reported. To this end, multiple radar sweeps are accumulated to increase the point cloud density. The View-of-Delft dataset comes with 8.7k annotated frames, sampled at 10 Hz. Each frame has measurements from one frontal camera with \(64^{\circ}\) FoV, one Velodyne HDL-64E LiDAR, and one ZF FR-Gen21 3+1D radar, which provides a dense point cloud. For both datasets, the 3D bounding box annotations have been obtained with the help of the respective reference LiDAR. ### _Related Work_ Approaches for fusion of cameras and ranging sensors such as radar or LiDAR have to solve the problem that the sensor data is available in different representations. While cameras provide images on the perspective plane, ranging sensors provide point clouds in 3D or BEV, making it difficult to associate features from the different modalities. Some approaches project the point cloud to the image to augment it with depth information [11]. However, this projection is geometrically lossy, since only a small part of the image can be augmented. Other approaches do a reverse projection and decorate the point cloud with features extracted from the image [12]. Especially for sparse radar point clouds, this approach is semantically lossy. An alternative is to take object proposals from the camera and associate matching points from the point cloud to refine the proposals [13]. However, this limits the potential of the ranging sensor to a refinement, instead of enabling it to detect objects missed by the camera. Recent advances in camera-only 3D object detection transform features from the camera into the BEV plane, enabling new possibilities for fusion. A recent technique to obtain BEV features from multiple cameras is the idea from Lift, Splat, Shoot [14]. 
Here, a depth distribution for each pixel is unprojected into a frustum of features in 3D space, which is subsequently condensed into a rasterized BEV grid. This technique has been improved and extended for 3D object detection in [15]. Based on this work, BEVFusion [5] has been proposed as a network for LiDAR-camera fusion. Here, voxelization and sparse 3D convolutions are used for LiDAR feature extraction before reducing the dimension to BEV. In our work, we further extend BEVFusion by replacing the LiDAR with radar and adapting a pillar feature encoding, similar to the one in [16]. ## II Radar-Camera Fusion in BEV We follow the fusion architecture of [5]. An overview of our network for radar-camera fusion in BEV is provided in Figure 1. Note that the fusion occurs when the camera and radar features in BEV are concatenated. Below, we provide further detail for each block. ### _Camera Encoder and Camera-to-BEV View Transform_ The camera encoder and view transform follow the idea of [15], which is a flexible framework to extract camera BEV features with arbitrary camera extrinsic and intrinsic parameters. First, the tiny version of a Swin Transformer [17] network is used to extract features from each image. Next, we transform the features from the image to the BEV plane using the Lift and Splat step from [14]. To this end, a dense depth prediction is followed by a rule-based block, in which the features are transformed into a pseudo point cloud and rasterized and accumulated into the BEV grid. ### _Radar Pillar Feature Encoder_ This block aims to encode the radar point cloud into BEV features on the same grid as the camera BEV features. To this end, we use the pillar feature encoding technique from [16], in which the point cloud is rasterized into voxels with infinite height, so-called pillars. Here, each point of the radar point cloud is represented by a position \((x,y,z)\), a radar cross section \(\mathrm{RCS}\), a relative radial velocity \(v_{\mathrm{r}}\), compensated by the ego motion, and a timestamp \(t\). To encode geometrical relationships within voxels, we augment each point by the distance to the arithmetic mean of all points in the pillar \((x_{\mathrm{c}},y_{\mathrm{c}},z_{\mathrm{c}})\), and the distance to the pillar center \((x_{\mathrm{p}},y_{\mathrm{p}})\), resulting in a feature vector of size 11. Note that for nuScenes, the radars are only 2+1D, so that there is no \(z\)-component and the feature vector is of size 9. All non-empty pillars are now processed using a simplified PointNet, before the resulting pillar features are populated back to the BEV grid. ### _BEV Encoder_ Similar to [5], the radar and camera BEV feature are fused by concatenation. The fused features are then processed by Fig. 1: Flowgraph of the presented radar-camera fusion in BEV, based on [5]. In the resulting camera image, we include projected radar detections and ground truth bounding boxes. a joint convolutional BEV encoder to enable the network to account for spatial misalignment and use synergies between the different modalities. ### _Detection Head_ We use the CenterPoint [18] detection head to predict heatmaps of object centers for each class. Further regression heads predict the dimension, rotation, and height of objects, as well as the velocity and class attribute for nuScenes. Whereas the heatmaps is trained with Gaussian focal loss, the remaining detection heads are trained with L1 loss. 
## III Experiments We use our presented radar-camera fusion in BEV to conduct several experiments on nuScenes and View-of-Delft. For a better comparability between the used datasets, we only use the frontal camera and the front-facing radars in nuScenes and filter out annotated 3D bounding boxes that fall outside the FoV of the frontal camera. For both datasets, we use a common BEV grid on \(x\in[0,51.2]\) and \(y\in[-25.6,25.6]\) with a step size of 0.1 m. We further reduce the considered classes to pedestrian, cyclist and car, since these are the only classes with sufficient annotations in View-of-Delft. To account for potential class imbalances, we use class-balanced grouping and sampling [19]. For both datasets, we aggregate the radar point cloud over 5 sweeps, corresponding to roughly 0.3 s. In addition to the presented radar-camera fusion, we use our object detection network also in radar-only and camera-only configurations, where we remove the encoded camera and radar feature inputs, respectively. Since View-of-Delft has less camera frames and shows a smaller visual variability, we also investigate transfer learning of the camera feature encoding. To this end, we pretrain the camera-only network with nuScenes and fine-tune the camera-only and radar-camera fusion networks with View-of-Delft to improve the performance. ### _Training and Evaluation_ We train and evaluate our networks with the official training and validation splits of nuScenes and View-of-Delft. We use an AdamW optimizer with a learning rate of 2e-4 for batch size 32 and a cyclic learning rate policy. Due to the different dataset sizes, we train nuScenes for 20 epochs and View-of-Delft for 80 epochs, and select the best-performing model on the validation set. To evaluate the 3D object detection performance, we consider the commonly used average precision (AP) and mean AP (mAP) metrics, where the latter corresponds to an average over relevant classes. The AP is defined as the area under the precision-recall curve, where precision equals TP /(TP+FP), recall equals TP / (TP+FN), and TP, FP, and FN correspond to the number of true-positive, false-positive, and false-negative detections, respectively. For reproducability, we follow the suggested metric implementations of the datasets, which approximate the area under the curve at sampling points. For the association of predicted and ground truth bounding boxes, the BEV center point distance is used for nuScenes, and the intersection-over-union ratio in 2D, BEV and 3D is used for View-of-Delft. For nuScenes, some additional metrics are calculated for the true-positive detections, including translation, scale, orientation, velocity, and attribute errors. ### _Quantitative results_ In Table I, we show the AP results on nuScenes for classes pedestrian, cyclist, and car. We observe that the radar-only network is able to detect cars to a certain degree but it can barely detect pedestrians and cyclists, as these often do not have a single radar detection and are difficult to distinguish from the environment. The camera-only network does better across all classes and shows that with a large dataset like nuScenes, monocular 3D object detection is possible. When fusing radar and camera, we see significant improvements for all classes, demonstrating how well the combination of the semantic value from the camera and the geometric value from the radar can work together. In Table II, we show the detailed true-positive metrics on nuScenes for each class. 
It should be noted that the radar-only metrics for pedestrian and cyclist are not meaningful due to the very low AP values for these classes. Comparing radar-only and camera-only performance for cars, we can see that despite the lower AP, the radar does better in translation, orientation, velocity, and nuScenes attribute estimation. This is deemed due to the direct distance and velocity measurement. The radar-camera fusion further improves all metrics, again showing the synergies of the different modalities. For View-of-Delft, we show the AP results in terms of 2D, BEV, and 3D metrics in Table III. With the high-performance radar, the radar-only approach works quite well for all classes, and even achieves the best results for cyclists, which were difficult to detect in nuScenes. The camera-only network performs worse than radar-only for all classes. On the 2D image plane, the results are more competitive, but we clearly observe the difficulty of the monocular depth estimation in the BEV and 3D metrics, given the limited amount of training data in View-of-Delft. The fusion network once again outperforms the single-modality networks. To provide more variety in the training images used for the camera branch, we pretrain the camera-only network with nuScenes and fine-tune the camera-only and radar-camera fusion networks with View-of-Delft. For camera-only, we observe significant improvements in all metrics. The radar-camera fusion network benefits from the pretraining mostly in the 2D metrics, since the BEV and \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Modality** & **mAP** & **AP** ped & **AP** cyc & **AP** car \\ \hline Radar & 12.5 & 3.1 & 5.4 & 29.0 \\ Camera & 27.7 & 26.7 & 17.3 & 39.1 \\ Radar-camera & 40.6 & 36.6 & 26.4 & 58.9 \\ \hline \end{tabular} \end{table} TABLE I: Experimental validation results for radar-camera fusion in BEV on nuScenes front-view: average precision metrics. 3D metrics are dominated by the high-performance radar in View-of-Delft. ### _Exemplary qualitative results_ To demonstrate the performance of our best-performing radar-camera fusion models, we show some selected qualitative examples for each dataset in Figure 2. In each subfigure, we show the frontal camera image with the projected radar detections and a BEV representation of the same frame with radar detections in red and the LiDAR point cloud in black. Note that the LiDAR point cloud is shown as a geometric reference only. The colored boxes are the projected 3D bounding boxes, predicted by our fusion models, where blue is used for pedestrians, red for cyclists, and orange for cars. The examples in 2a-2c are from nuScenes, whereas the examples in 2d-2f are from View-of-Delft. When looking at the raw data, we can observe the strengths and weaknesses of the two datasets. In particular, nuScenes offers a large variety in the images, since the scenes are recorded in two different countries, and at different daytimes and weather conditions. However, the radar point cloud is very sparse and does not contain elevation angles. In contrast, View-of-Delft has less variety in the images, but a much denser radar point cloud with elevation angles. In all six frames, we can see how well our radar-camera fusion models performs. Even though we selected dense traffic scenes with many cars and pedestrians, and some cyclists, the network is able to detect and localize all the relevant objects in the scenes. 
Especially in the challenging situation for the camera in 2c, we can see how the radar can help to accurately localize the cars. ## IV Conclusion In this paper, we have used a novel radar-camera fusion network on the BEV plane to study differences in the nuScenes and View-of-Delft datasets. Our results show that camera-only 3D object detection requires a large dataset with reasonable visual variability, as it is available in the nuScenes dataset. In contrast, the radar-only network profits more from the high-performance radar in View-of-Delft and can cope with a smaller dataset. Also, it can also classify vulnerable road users like pedestrians and cyclists. We conclude that the full potential of radar-camera fusion could be achieved when combining the needs for radar and camera perception, with a dense point cloud and a large visual variability, respectively. Until such a dataset is publicly available, we have shown that pretraining the camera-only network with a large dataset like nuScenes can help to improve the performance of camera-only and radar-camera fusion networks in smaller datasets like View-of-Delft. In future work, we want to examine more of the recently introduced radar datasets to support our findings and investigate further transfer learning possibilities. ## Acknowledgment This work is partly funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) and partly financed by the European Union in the frame of NextGenerationEU within the project "Solutions and Technologies for Automated Driving in Town" (FKZ 19A22006P).
2309.05676
MultiCaM-Vis: Visual Exploration of Multi-Classification Model with High Number of Classes
Visual exploration of multi-classification models with large number of classes would help machine learning experts in identifying the root cause of a problem that occurs during learning phase such as miss-classification of instances. Most of the previous visual analytics solutions targeted only a few classes. In this paper, we present our interactive visual analytics tool, called MultiCaM-Vis, that provides \Emph{overview+detail} style parallel coordinate views and a Chord diagram for exploration and inspection of class-level miss-classification of instances. We also present results of a preliminary user study with 12 participants.
Syed Ahsan Ali Dilawer, Shah Rukh Humayoun
2023-09-09T08:55:22Z
http://arxiv.org/abs/2309.05676v1
# MultiCaM-Vis: Visual Exploration of Multi-Classification Model with High Number of Classes ###### Abstract Visual exploration of multi-classification models with large number of classes would help machine learning experts in identifying the root cause of a problem that occurs during learning phase such as miss-classification of instances. Most of the previous visual analytics solutions targeted only a few classes. In this paper, we present our interactive visual analytics tool, called **MultiCaM-Vis**, that provides overview+detail style parallel coordinate views and a Chord diagram for exploration and inspection of class-level miss-classification of instances. We also present results of a preliminary user study with 12 participants. **Index Terms:** Human-centered computing--Visualization--Visualizationization--Visualization application domains--Visual analytics ## 1 Introduction Convolution Neural Network (CNN) approach in Machine Learning (ML) uses combinations of convectional layers and pooling layers to reduce the computational cost and output variations. therefore, it has been applied in different domains from recognizing letters to identify objects in images [3]. However, these all layers typically reside in a black-box, which makes it difficult for ML experts to identify the root cause of problems that may occur during the learning phase such as misclassification of instances. The traditional backtracking solutions are time consuming and hectic for ML practitioners. Targeting this concern, many visual analytics tools have been proposed targeting multi-classification models. Few examples are: Seifert & Lex [8] targeted classification models with 20 distinct classes, whose predictions can be interpreted as probabilities at three levels, i.e., classifier level, class level, and test item level. Paiva et al. [4] developed a point-based visualization system for maximum 10 classes. Alsallakh et al. [1] developed a diagnostic VA system to depict the prediction score distributions, which was efficient for lower than 20 classes. Ren et al. [6] developed Squares tool to support performance diagnosis within a single visualization while showing prediction score distributions at multiple levels of detail, useful for less than 20 classes. Recently, Pomme et al. [5] utilized confusion matrices and color encoding to display the class-wise differences of performances between two classification models having up to 30 classes, while Uwaseki et al. [9] targeted multi-class image classification models having less than 10 classes. From the above-mentioned work, it can be concluded that visual exploration of multi-classification models with large number of classes are not adequately addressed in the literature. Targeting this concern, we present an interactive VA tool, called **MultiCaM-Vis Multi-Classification Model Visualizer**, that allows users to visual explore multi-classification model, targeting 1K classes in ILSVRC image dataset [7], using parallel coordinate views in _overview+detail_ style. Users can explore the misclassification (inbound and outbound) cases to get better understanding of problematic classes in the model. The MultiCaM-Vis also provides a Chord diagram view for in-depth inspection of the incorrect classification cases. Several filtering and sorting options are provided at each level to help users Figure 1: **(a)** MultiCaM-Vis model exploration view using two parallel coordinates in _Overview+detail_ style. **(b)** Detailed view sorted based on prediction value. 
**(c)** Chord diagram view to inspect incorrect classification cases. **(d)** Mouse hover a particular node (class) in (c) highlights the associated chords and nodes as well as images of associated incorrect classification cases. in model exploration and inspection. We also conducted a preliminary user study with 12 participants to check the effectiveness of the tool as well as to get their feedback. ## 2 The Dataset ImageNet [2] dataset consists of a collection of over 15 million high resolution labelled images. These images belong to an estimated 22k different categories. In our tool, we use ImageNet Large-Scaled Visual Recognition Challenge (ILSVRC) dataset [7], a subset of ImageNet, with images belong to 1k different categories. Overall, ILSVRC contains an estimated 1.2 million training images, 50k validation images and 150k testing images. From ILSVRC subset, we use set of validation images with 500 X 500 color photograph in 1K different object classes. The used network in our case consisted of eight learned layers, five convolutional layers and three fully connected layers. The output of the last fully connected layer (fc8) is a prediction distribution score over the 1000 class labels. ## 3 The MultiCam-Vis Tool Our developed tool, called **Circles**, visualizes results of a multi-classification model targeting 1K classes in ILSVRC [7] dataset. The web-based client side was developed using HTML, CSS, JavaScript, and D3 js library while the server side was developed using Node.js. In order to explore a model's results, MultiCaM-Vis provides two parallel coordinates using _overview+detail_ style to show the class-level prediction distribution across all 1K image classes in ILSVRC dataset (Fig. 1(a)). The bottom _overview_ parallel coordinate view shows all the 1K classes with class-level prediction distribution, while the upper parallel coordinate _detailed_ view initially shows the first 10 classes. MultiCaM-Vis provides an interactive slider on the overview view, where users can select the range of classes to be shown on the detailed view. On the top of each class in the detailed view, MultiCaM-Vis provides a doughnut chart to show the distribution of _correct-classification_ instances in dark blue color, the _incorrect-classification-inbound_ instances in light blue color, and _incorrect-classification-outbound_ instances in orange color. MultiCaM-Vis provides a number of filtering and sorting options. For example, mouse hover a particular image instance line in detailed view shows the associated image with the species name and its hierarchy in the ILSVRC dataset. MultiCaM-Vis provides a filtering and sorting panel on the left-side with a number of options (left-side in Fig. 1(a)). 
Few examples are: a slider-range to show only those image instances that have a corresponding prediction value within this slider range against one or more classes; sorting of the classes in the both parallel coordinates views order by incorrect-classification (in-bound or out-bound) cases, order by correct-classification cases, or order by instances' highest probability distribution; giving different colors to image instances based on prediction distribution where users can have the option of the range from 1 to 10 colors (in this case, if a user selects 10 colors option, then MultiCaM-Vis associates a color to an instance based on its highest prediction to a class where each color range is 0.1 of total 1 prediction probability) or by a range-slider where instances with having a prediction probability within this range will get a different color than all other instances. To provide in-depth inspection of incorrect-classification (_in-bound_ or _outbound_) cases, MultiCaM-Vis opens another view using a Chord diagram based on demand. In this case, MultiCaM-Vis shows only those classes in the Chord diagram that are selected by users from the detailed view (Fig. 1(c)). The chords in this view represent incorrect-classification cases. MultiCaM-Vis allocates color to each chord as per the correct class (node) color. The width of a chord represents the number of incorrect-classification cases from one class to another class (inbound or outbound). Mouse hover a particular chord shows the associated image instances with correct classes while mouse hover a particular class (node) highlights only this class (node) and the associated classes (nodes) and chords as well as all the image instances with inbound and outbound incorrect-classification cases ((Fig. 1(d)). ## 4 The Preliminary User Study We conducted a preliminary user study with 12 participants (4 females, 7 had ML background), all graduate students (age range: 25 to 30, M = 28.3). We were interested to see the effectiveness of the tool in term of _accuracy_ and to know whether participants from ML and non-ML background have any differences in accuracy. We designed 3 controlled tasks, with subtasks, for the study in a way that we could acquire the reaction about participants' interaction with the tool and to identify whether they are able to find appropriate information or not. The first task targeted towards finding out correct or incorrect classification cases in both parallel coordinates and the Chord diagram, the second task targeted towards using different filtering and sorting options, while the third task targeted the use of color options in the resulting parallel coordinates. The experiment sequence was: filling the demographic form, tool tutorial, tasks execution, a 10 closed-ended questionnaire, and finally getting feedback in general. Each experiment lasted no more than 1 hour. In the case of accuracy, ML participants achieved an overall 94% accuracy in all three tasks compared to 85.67% for non-ML participants (i.e.: **T1** ML: 91%, non-ML: 82%; **T2** ML: 98%, non-ML: 89%; **T3** ML: 93%, non-ML: 86%). These results are quite promising as it shows that even non-ML background participants were quite good in accurately executing all three tasks with little difference compared to ML participants. Table I shows participants' feedback in the closed-ended questionnaire. In the future, we plan to add the inter-model comparison view so users can compare between multiple models of the same dataset. 
Also, we intend to execute a detailed user study targeting the new inter-model comparison view.
2309.04568
Circular economy meets building automation
This paper demonstrates the concept of reusing discarded smartphones to connect the end-of-life of e-wastes with the start-of-life of smart buildings. Two control-related and one communication-related case studies have been conducted experimentally to evaluate applicability. Diverse controlled systems, control tasks, and algorithms have been considered. In addition, the sufficiency of communication with external agents has been quantified. The proof-of-concept experiments indicate technical feasibility and applicability to typical tasks with satisfactory performance. As smartphones improve over time, higher computing performance and lower communication latency can be expected, enhancing the prospect of the proposed reuse concept.
Hanmin Cai
2023-09-08T19:44:57Z
http://arxiv.org/abs/2309.04568v1
# Circular economy meets building automation ###### Abstract This paper demonstrates the concept of reusing discarded smartphones to connect the end-of-life of e-wastes with the start-of-life of smart buildings. Two control-related and one communication-related case studies have been conducted experimentally to evaluate the applicability. Diverse controlled systems, control tasks, and algorithms have been considered. In addition, the sufficiency of communication with external agents has been quantified. The proof-of-concept experiments indicate the technical feasibility and applicability to typical tasks with satisfactory performance. As the capabilities of smartphones improve over time, higher computing performance and lower communication latency can be expected, which enhances the prospect of the proposed reuse concept. ## 1 Introduction The circularity within the building sector can be enhanced by examining the e-wastes that are largely neglected so far. On the one hand, ongoing digitization in the energy sector intensifies the demand for computing power, i.e., programmable logical controllers (PLCs) [1] are commonly used for automation in buildings. On the other hand, many smartphones are mainly disposed of to extract valuable metals [2]. This paper contributes to bridging this gap by investigating the reuse of discarded smartphones to connect the end-of-use of smartphones with the start-of-life of smart buildings. Currently, many smartphones turn into e-wastes when they are outdated or have battery/screen malfunction [2]. However, their central processing units (CPU) and random-access memory (RAM) may remain intact. Therefore, they are potential neglected resources to perform building energy management tasks. Once connected to the internet and power sources, they can help to avoid manufacturing new micro-controllers and to reduce the overall carbon footprint. Although the reuse of outdated desktops has been investigated in lab facilities [3], the reuse of smartphones for building energy management systems has not been systematically investigated in experiments. Key concerns include the timely execution of control algorithms and effective communication with external devices. While the former impacts occupants' comfort at the building level, the latter impacts agent-based coordination among buildings [4] and attainable ancillary service options [5]. To systematically assess the technical feasibility of such a concept, several proof-of-concept experiments are provided in this paper, considering diverse controlled systems, control tasks, and control algorithms with different levels of computation and communication loads. ## 2 Methodology Two control algorithms are considered to reflect the complexity of control-related case studies, namely model-based predictive control and data-driven predictive control. They differ in terms of communication and computational loads. Both control strategies are key to enable optimal building-level operation and district-level coordinated building energy management. The rest of the section provides a brief summary. The main principle of model predictive control (MPC) is recursively solving an optimal control problem (OCP) over a horizon considering the future response of the system. 
While the context-specific constraints and cost functions are elaborated in section 3, a generic formulation is given as follows: \[\underset{\mathbf{u},\mathbf{x},\mathcal{Y}}{\text{minimize}} \sum_{k=0}^{N-1}\left(\left\|\mathbf{y}_{k}-\mathbf{r}_{t+k} \right\|_{\mathbf{Q}}^{2}+\left\|\mathbf{u}_{k}\right\|_{\mathbf{Q}^{\prime}} ^{2}\right)\] (1) subject to \[\mathbf{x}_{k+1}=\mathbf{A}\mathbf{x}_{k}+\mathbf{B}\mathbf{u}_{k}, \;\forall k\in\{0,\dots,N-1\},\] \[\mathbf{y}_{k}=\mathbf{C}\mathbf{x}_{k}+\mathbf{D}\mathbf{u}_{k},\;\forall k\in\{0,\dots,N-1\},\] \[\mathbf{x}_{0}=\hat{\mathbf{x}}_{t},\] \[\mathbf{u}_{k}\in\mathcal{U},\;\forall k\in\{0,\dots,N-1\},\] \[\mathbf{y}_{k}\in\mathcal{Y},\;\forall k\in\{0,\dots,N-1\},\] where \(N\) is the time horizon, \(\mathbf{u}\), \(\mathbf{x}\), \(\mathcal{Y}\) are the vectors of decision variables, \(\mathbf{r}\) is the reference for tracking, \(t\) is the time stamp, \(\mathbf{Q}\) and \(\mathbf{Q}^{\prime}\) denote the weighting matrices for input and output costs, \(\mathcal{U}\) and \(\mathcal{Y}\) are the input and output constraint sets, and \(\hat{\mathbf{x}}_{t}\) is the estimated state at time \(t\). By varying \(\mathbf{Q}\) and \(\mathbf{Q}^{\prime}\), Equation 1 can be customized to represent constrained energy planning and reference tracking tasks. In addition, to mitigate potential numerical instability when estimating \(\hat{\mathbf{x}}_{t}\), a Kalman filter with Joseph formulation [6] is used, which is given as follows: \[\mathbf{P}^{+}=(\mathbf{I}-\mathbf{K}\mathbf{H})\mathbf{P}^{-}(\mathbf{I}- \mathbf{K}\mathbf{H})^{\mathsf{T}}+\mathbf{K}\mathbf{R}\mathbf{K}^{\mathsf{T}}, \tag{2}\] where \(\mathbf{I}\) is the identity matrix, \(\mathbf{K}\) is the gain, \(\mathbf{H}\) is the measurement mapping matrix, \(\mathbf{R}\) is the measurement noise covariance matrix, and \(\mathbf{P}^{-}\), \(\mathbf{P}^{+}\) are the prior and post measurement update estimation error covariance matrices, respectively. Regarding data-driven control, signal matrix model predictive control (SMM-PC) [7] is considered. In brief, it maximizes the conditional probability of observing the predicted output trajectory and the measured past outputs to improve the combination of offline trajectories. Therefore, it differs from the previously mentioned MPC in terms of real-time data extraction, the need for state estimation and the OCP. ## 3 Case studies To systematically evaluate the applicability of the reuse concept in practice, diverse controlled systems, control tasks, and algorithms were considered in the experiments. This section includes a description of the general experimental setup that supports the case studies, followed by a detailed explanation of each case. ### General experimental setup The overall setup is illustrated in Figure 1 and Figure 2. The open-source package _Termux_[8]2 was used to enable smartphones to function as a Linux machine and host Python installation. Secure shell protocol (SSH) was used for remote/headless access to facilitate reusing discarded smartphones with cracked screens. The communication among devices was enabled by message queuing telemetry transport (MQTT) [9] in the communication latency assessment case study. Real-time information exchange in the control-related case studies was facilitated with the REST API and an OPC server. Footnote 2: The mention of this software does not constitute an endorsement of the product, and the authors are not affiliated with the software developers. 
### Applicability to control tasks To reflect the heterogeneous controlled systems in the built environment, both space heating and stationary electric battery were included. Specifically, optimal space heating power scheduling was considered and the reference tracking task was examined for the stationary electric battery, e.g., necessary concerning ancillary service provision [5]. Both the classical MPC and data-driven control SMM-PC presented in the previous section were assessed. This diverse setup is further illustrated in Figure 3. More concretely, _Case 1_ concerns minimizing the heating energy consumption of a bedroom of a three-room apartment while respecting user-defined temperature limits. The input and output constraint sets of the OCP include heating Figure 1: The NEST building in Dübendorf, Switzerland. Copyright © Empa. Figure 3: Control-related case studies composed of diverse controlled systems, control tasks and algorithms. Figure 2: Software and information exchange setup for the experiments. The images of experimental facilities are sourced from [10]. power capacity and room temperature limits. The control decisions concerned heating power, which was dissipated through a ceiling heating system. The thermostat in the room was remotely controlled by manipulating the setpoints, which influenced the valve opening/closing to regulate the heating flow indirectly. Continuous control decisions were translated into sequences of binary valve positions using a pulse-width modulation (PWM) strategy. In _Case 2_, a Lithium-ion electric battery was controlled to track an artificial sinusoidal reference signal with its state-of-charge (SOC). The input was constrained by charging and discharging power limits. In both cases, the sampling time was \(15\) minutes. ### Applicability to communication tasks The latency in information exchange was compared to two alternatives, namely a laptop (HP Compaq Pro 6300 MT) and a Raspberry Pi (4th Gen Model B). Specifically, the round-trip delay of the communication was used as the key performance indicator. That is the combined duration of message transmission and acknowledgement with the network configuration depicted in Figure 4. For example, different devices (i.e., a smartphone, a laptop and a Raspberry Pi) sent a message to the laptop within the network of Empa (152.88.x.x), which acknowledged the receipt of the message. Each device quantified the time interval to receive the acknowledgement. In all cases, only the smartphone was connected to Wi-Fi, reducing communication performance. ## 4 Results The experiments utilized the NEST building located at the Empa campus in Dubendorf [11], discarded smartphones, a private network and the network of Empa. Two smartphones were used in the experiments with key specifications summarized in Table 1. ### Applicability to control tasks Python 3.8 was used together with Mosek [12] as the solver. The results of control-related case studies are summarized in Figure 5. In _Case 1_, it can be observed that the temperature was controlled within the predefined comfort zone most of the time. The shaded areas indicate periods of window opening, which led to substantial temperature drops. 
These \begin{table} \begin{tabular}{l l l l l l} \hline \hline ID & CPU & RAM & WLAN & Platform & Release year \\ \hline 1 & Octa-core (Kirin 970) & 6GB & Wi-Fi 802.11 & Android & 2018 \\ 2 & Octa-core (Exynos 7904) & 4GB & Wi-Fi 802.11 & Android & 2019 \\ \hline \hline \end{tabular} \end{table} Table 1: List of mobile devices used in the experiments. Figure 4: Setup of case studies for communication delay identification. The figure on the left shows the network configuration of three information exchange paths. The figure on the right illustrates the calculation of round-trip delays. occupants-induced constraint violations were beyond the capability of the heating system and the controller. Therefore, these instances do not suggest inapplicability. In _Case 2_, the mean tracking error and the root-mean-square-error were \(-0.02\) % and \(0.09\) %, respectively. The tracking errors can be attributed to the controller and the actuation precision. Both experiments show satisfactory results and confirm the applicability to control tasks. ### Applicability to communication tasks The communication delays of all three paths depicted in Figure 4 are quantified and compared in Figure 6. It can be observed that _Path 1_ exhibits the highest delays due to its internet connection using Wi-Fi. Nonetheless, the delays mostly fall within 1 second. Due to the large inertia of buildings, typical control time interval ranges from 30 minutes to hours. Therefore, communication delays within 1 second are considered acceptable for building-level energy management. However, such latency may be too large for tracking high-frequency signals when providing ancillary services to the power system. Potential solution includes adding peripherals to enable Ethernet connection for reduced latency and improved stability at the expense of increased cost and carbon footprint. Figure 5: Experiment results of control-related case studies. The figure on the left shows room temperature control results, where the black dashed lines indicate the comfort limits and the blue line shows the realized temperature trajectory. The figure on the right shows battery reference tracking control results, where the black dashed line shows the reference signal and the blue line shows the measured SOC. Figure 6: Boxplots of round-trip delays with various hardware and connection interfaces. ## 5 Conclusion This paper experimentally evaluates the applicability of reusing discarded smartphones to typical control and communication tasks in buildings. Various controlled systems, control tasks, and algorithms have been considered. The results verify technical feasibility and reveal descent performance. As the capabilities of smartphones continue to improve over time, better-performing reusable resources can be expected. The proposed reuse adds to the existing portfolio of circularity concepts and sheds light on enhancing sustainability in the built environment. Several limitations must be noted. First, the security of the chain of open-source tools has not been verified. This is crucial in future studies as the security of the system, which the distributed devices are integrated into, can be significantly affected. Second, the long-term stability, which industrial PLCs excel at, needs to be further examined. The next step includes achieving scalability through software standardization and investigating a simpler chain of tools to ensure stability, security, and efficiency. 
Additional life-cycle analysis can be performed in the future to quantify the impacts on sustainability. ## Acknowledgments We would like to thank Philipp Heer and Julie Rousseau for their insightful discussions and contribution of electronics used in this work.
2302.14409
An Adaptive Method for Camera Attribution under Complex Radial Distortion Corrections
Radial correction distortion, applied by in-camera or out-camera software/firmware alters the supporting grid of the image so as to hamper PRNU-based camera attribution. Existing solutions to deal with this problem try to invert/estimate the correction using radial transformations parameterized with few variables in order to restrain the computational load; however, with ever more prevalent complex distortion corrections their performance is unsatisfactory. In this paper we propose an adaptive algorithm that by dividing the image into concentric annuli is able to deal with sophisticated corrections like those applied out-camera by third party software like Adobe Lightroom, Photoshop, Gimp and PT-Lens. We also introduce a statistic called cumulative peak of correlation energy (CPCE) that allows for an efficient early stopping strategy. Experiments on a large dataset of in-camera and out-camera radially corrected images show that our solution improves the state of the art in terms of both accuracy and computational cost.
Andrea Montibeller, Fernando Pérez-González
2023-02-28T08:44:00Z
http://arxiv.org/abs/2302.14409v1
# An Adaptive Method for Camera Attribution under Complex Radial Distortion Corrections ###### Abstract Radial correction distortion, applied by in-camera or out-camera software/irmware alters the supporting grid of the image so as to hamper PRNU-based camera attribution. Existing solutions to deal with this problem try to invert/estimate the correction using radial transformations parameterized with few variables in order to restrain the computational load; however, with ever more prevalent complex distortion corrections their performance is unsatisfactory. In this paper we propose an adaptive algorithm that by dividing the image into concentric annuli is able to deal with sophisticated corrections like those applied out-camera by third party software like Adobe Lightroom, Photoshop, Gimp and PT-Lens. We also introduce a statistic called cumulative peak of correlation energy (CPCE) that allows for an efficient early stopping strategy. Experiments on a large dataset of in-camera and out-camera radially corrected images show that our solution improves the state of the art in terms of both accuracy and computational cost. Image forensics, source attribution, PRNU, photo response non-uniformity, radial correction, distortion correction, PCE, adaptive processing. ## I Introduction During the past years, camera fingerprints based on the Photo Response Non-Uniformity (PRNU) have gained broad popularity in forensic applications thanks to their ability to identify the device that captured a certain image. The PRNU is a multiplicative spatial pattern that owes its uniqueness to manufacturing imperfections that cause sensor elements to have minute area and substrate material differences that make them capture different amounts of energy even under a perfectly uniform flat field [1]. Applications of the PRNU in multimedia forensics go beyond camera identification from images [2] or videos [3], as they have also been used in detecting inconsistencies that reflect image manipulations [4]. Unfortunately, the fact that the PRNU can be accurately modeled as a white random process explains its sensitivity to geometric transformations that alter the image coordinates. Unless those transformations are reverted, standard detection statistics will perform poorly as they are roughly based on cross-correlations that yield very small values under grid misalignment. In the literature several methods have been proposed to deal with those spatial transformations, including digital zoom [5], video stabilization [6], high dynamic range (HDR) processing [7], and radial distortion corrections [8, 9]. It is in the context of the latter that we have developed the methodology presented in this work. Radial distortion correction aims at digitally removing the distortion introduced by the camera lens. This kind of processing is becoming more pervasive as devices increase their computing capabilities; in-camera correction is now common in compact models, tablets and smartphones. On the other hand, out-camera corrections can be performed with powerful software like Adobe Lightroom, which are able to invert distortions almost perfectly by matching the model of the lens mounted on the camera. This is not done by applying conventional radial distortion models such as _barrel_ or _pincushion_ but by making use of complex models (i.e., with a large number of parameters). 
As a consequence, existing methods [8, 9] relying on models with at most two parameters will only partially succeed in dealing with camera attribution under these complex out-camera processing. Increasing the number of model parameters often constitutes an undesirable path because reverting the distortion corrections entails a grid search whose computational load grows exponentially with the number of unknowns. In this work we propose a novel approach to PRNU-based camera attribution under radial corrections that is able to deal with complex models without significantly increasing the computational burden. The main idea is to divide the image under test and the PRNU into a series of concentric annuli that are thin enough to be locally describable with a simple (i.e., linear or cubic) distortion model which allows for an equally simple inverse transformation. The annuli are traversed sequentially by keeping track of the _cumulative peak-to-correlation energy ratio_ (CPCE), which is a statistic introduced in this work and used to decide whether the radially corrected test image contains the reference PRNU. In fact, the sequential nature of the procedure makes it possible to implement an early stopping strategy to declare a match without having to process all the annuli and thus saving computational time. Another key feature of our method is _adaptivity_: instead of carrying out a wide-interval search for the distortion parameters describing each annulus, an adaptive Least-Mean-Squares-like predictor updates the parameters of the previously processed annulus in order to narrow down the current parameter search. This leads to a large computational efficiency without giving up flexibility. In order to steer the search we propose and justify mathematically a new objective function. Different variants of our method are evaluated in terms of accuracy and speed. We compare our method with the state of the art in [8] and [9] on a large dataset composed of images taken with: 1) compact devices and radially corrected in-camera, and 2) a reflex camera and radially corrected out-camera using different software tools. Our results show considerable performance improvements, especially on low-resolution images and in presence of complex radial corrections. The rest of the paper is organized as follows: Sect. II provides the mathematical background and formulates the addressed problem. Sect. III discusses the relevant state of the art. Sect. IV is devoted to discussing the proposed method which in Sect. V is validated and compared with [8] and [9]. Finallt, Sect.VI presents our conclusions. ## II Problem Formulation and Modeling ### _Notation_ In this paper we will consider gray-scale images (the extension to color images being straightforward). Bi-dimensional signals will be denoted with boldface. For every such signal, a domain \(\mathcal{S}\subset\mathbb{Z}^{2}\) will be specified; for instance, a signal \(\mathbf{X}\) with domain \(\mathcal{S}_{X}\) is a collection of values \(X_{i,j}\in\mathbb{R}\) defined for all locations \((i,j)\in\mathcal{S}_{X}\). For the case of images of size \(M\times N\), the original domain is \(\mathcal{I}=\{1,\cdots,M\}\times\{1,\cdots,N\}\subset\mathbb{Z}^{2}\); however, we will often find ourselves working with domains that are subsets of \(\mathcal{I}\). We will denote by \(D_{2}\) half of the diagonal of domain \(\mathcal{I}\) measured in pixels. 
Notice that the set \(\mathcal{I}\) can be expressed as \(\mathcal{I}=\mathcal{B}\cap\mathbb{Z}^{2}\), with \(\mathcal{B}\subset\mathbb{R}^{2}\) denoting the image bounding box. The inner product of two signals \(\mathbf{X}\) and \(\mathbf{Y}\) with respective domains \(\mathcal{S}_{X}\) and \(\mathcal{S}_{Y}\) can be defined by extending the Frobenius product of matrices as \((\mathbf{X},\mathbf{Y})\triangleq\sum_{(i,j)\in\mathcal{S}}X_{i,j}Y_{i,j}\), where \(\mathcal{S}=\mathcal{S}_{X}\cap\mathcal{S}_{Y}\) is assumed to be non-empty. The Frobenius norm of \(\mathbf{X}\) with domain \(\mathcal{S}_{X}\) induced by this inner product is \(\|\mathbf{X}\|\doteq(\mathbf{X},\mathbf{X})^{1/2}=\big(\sum_{(i,j)\in\mathcal{S}_{X}}X_{i,j}^{2}\big)^{1/2}\). The product of signals \(\mathbf{X}\) and \(\mathbf{Y}\), denoted by \(\mathbf{X}\circ\mathbf{Y}\), is the element-wise product, i.e., \((\mathbf{X}\circ\mathbf{Y})_{i,j}=X_{i,j}\cdot Y_{i,j}\) and is defined for all \((i,j)\in\mathcal{S}_{X}\cap\mathcal{S}_{Y}\). The multiplicative inverse of \(\mathbf{X}\) is denoted by \(\mathbf{X}^{\circ-1}\) and is such that \((\mathbf{X}^{\circ-1})_{i,j}=X_{i,j}^{-1}\). For a signal \(\mathbf{X}\) with domain \(\mathcal{S}_{X}\), we denote by \(\bar{\mathbf{X}}\) a constant signal with the same support as \(\mathbf{X}\) and whose value is the sample mean \(\sum_{(i,j)\in\mathcal{S}_{X}}X_{i,j}/|\mathcal{S}_{X}|\), where \(|\mathcal{S}_{X}|\) denotes the cardinality of \(\mathcal{S}_{X}\). The normalized cross-correlation (NCC) between \(\mathbf{X}\) and \(\mathbf{Y}\) is defined as \[\rho(\mathbf{X},\mathbf{Y})=\frac{(\mathbf{X}-\bar{\mathbf{X}},\mathbf{Y}-\bar{\mathbf{Y}})}{\|\mathbf{X}-\bar{\mathbf{X}}\|\cdot\|\mathbf{Y}-\bar{\mathbf{Y}}\|}, \tag{1}\] with the inner product and norms defined as above. Given a signal \(\mathbf{X}\) with rectangular domain \(\mathcal{I}\) and a vector \(\mathbf{s}=(s_{1},s_{2})\in\mathbb{Z}^{2}\), we denote by \(C(\mathbf{X},\mathbf{s})\) the cyclic shift of \(\mathbf{X}\) by vector \(\mathbf{s}\), so that the \((i,j)\)th component of \(C(\mathbf{X},\mathbf{s})\) is \(X_{(i+s_{1})\,\text{mod}\,M,(j+s_{2})\,\text{mod}\,N}\). Note that the domain of \(C(\mathbf{X},\mathbf{s})\) is also \(\mathcal{I}\). Finally, the all-zeros image is denoted by \(\mathbf{0}\).

### _PRNU estimation_

As previously indicated, the PRNU is a multiplicative noise-like signal that serves as a sensor fingerprint [1],[5]. Because the PRNU is a very weak signal, it is necessary to separate it from both the true image and other noise components. If \(\mathbf{I}_{0}\) denotes the image in absence of noise, and \(\mathbf{K}\) is the PRNU, it is possible to derive the following simplified model [10]: \[\mathbf{I}=\mathbf{I}_{0}+\mathbf{I}_{0}\circ\mathbf{K}+\mathbf{\Theta}, \tag{2}\] where \(\mathbf{\Theta}\) is uncorrelated with both \(\mathbf{I}_{0}\) and \(\mathbf{K}\), summarizes noise components of different nature, and all signals are defined over \(\mathcal{I}\). The fingerprint \(\mathbf{K}\) of a camera can be extracted from \(L\) images \(\mathbf{I}^{(l)}\), \(l=1,\cdots,L\), taken with the camera under analysis. Let \(\mathbf{W}^{(l)}\) denote the noise _residual_ obtained by applying a generic denoising filter \(F(\cdot)\) to the \(l\)th image \(\mathbf{I}^{(l)}\), as \[\mathbf{W}^{(l)}=\mathbf{I}^{(l)}-F(\mathbf{I}^{(l)}),\ \ l=1,\cdots,L. \tag{3}\] In all our reported experiments, we have used Mihcak's wavelet-based denoiser [11] for it yields an excellent trade-off between performance and complexity.
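For concreteness, the short sketch below illustrates the residual extraction in (3). It uses an off-the-shelf wavelet soft-thresholding denoiser from PyWavelets as a stand-in for the denoiser of [11]; the wavelet choice and the fixed threshold are assumptions made only for the example, not the filter used in our experiments.

```python
# Minimal sketch of residual extraction, eq. (3). The denoiser below is a generic
# wavelet soft-thresholding stand-in for [11], NOT the exact filter used in the
# reported experiments.
import numpy as np
import pywt

def denoise_wavelet(img, wavelet="db8", levels=4, thr=3.0):
    """Crude wavelet-domain soft-thresholding denoiser F(.)."""
    img = np.asarray(img, dtype=np.float64)
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    approx, details = coeffs[0], coeffs[1:]
    # Fixed threshold for simplicity; [11] instead estimates local variances.
    details = [tuple(pywt.threshold(d, thr, mode="soft") for d in lvl) for lvl in details]
    rec = pywt.waverec2([approx] + list(details), wavelet)
    return rec[: img.shape[0], : img.shape[1]]

def residual(img):
    """Noise residual W = I - F(I)."""
    img = np.asarray(img, dtype=np.float64)
    return img - denoise_wavelet(img)
```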
Then, the PRNU can be estimated as follows [10]: \[\hat{\mathbf{K}}=\left(\sum_{l=1}^{L}\mathbf{I}^{(l)}\circ\mathbf{W}^{(l)}\right)\circ\left(\sum_{l=1}^{L}\mathbf{I}^{(l)}\circ\mathbf{I}^{(l)}\right)^{\circ-1}. \tag{4}\] The estimate so obtained is customarily post-processed to remove some systematic artifacts that are present in most cameras. Here, we will follow [10] and apply a mean-removal operation by columns and rows, and a Wiener filter in the DFT domain aimed at removing periodic spatial artifacts. For color images, the fingerprints are estimated separately for the RGB channels and then linearly combined into gray-scale as in [12]. Given an image under investigation \(\mathbf{I}\) and its corresponding residual \(\mathbf{W}\doteq\mathbf{I}-F(\mathbf{I})\), a binary hypothesis test can be formulated to decide whether \(\mathbf{I}\) contains a certain PRNU \(\mathbf{K}^{\prime}\) for which an estimate \(\hat{\mathbf{K}}^{\prime}\) is available. We will denote the null hypothesis of this test (i.e., \(\mathbf{I}\) does not contain \(\mathbf{K}^{\prime}\)) by \(H_{0}\) and the alternative (i.e., \(\mathbf{I}\) contains \(\mathbf{K}^{\prime}\)) by \(H_{1}\). The most popular decision statistic for the test is the _Peak-to-Correlation Energy ratio_ (PCE), which computes the peak cross-correlation between the test image residual \(\mathbf{W}\) and the estimated PRNU \(\hat{\mathbf{K}}^{\prime}\) from the candidate camera, and normalizes it by an estimate of the correlation noise under \(H_{0}\) [12]. For non-cropped images the PCE simplifies to \[\text{PCE}(\hat{\mathbf{K}}^{\prime},\mathbf{W})=\frac{\text{sgn}(\rho(\hat{\mathbf{K}}^{\prime},\mathbf{W}))\cdot\rho^{2}(\hat{\mathbf{K}}^{\prime},\mathbf{W})}{\frac{1}{|\mathcal{I}\setminus\mathcal{S}|}\sum_{\mathbf{s}\in\mathcal{I}\setminus\mathcal{S}}\rho^{2}(\hat{\mathbf{K}}^{\prime},C(\mathbf{W},\mathbf{s}))}, \tag{5}\] where, following the improvement proposed in [13], we have included the sign of the NCC to exclude negative values that would never be expected under \(H_{1}\). In (5) \(\mathcal{S}\) is a _cyclic exclusion neighborhood_ of \((0,0)\) of small size (e.g., \(11\times 11\) pixels) to avoid contamination from cross-correlation peaks when estimating the cross-correlation noise when \(H_{1}\) holds. Noticing that for every \(\mathbf{s}\), \(\|C(\mathbf{W},\mathbf{s})-\overline{C(\mathbf{W},\mathbf{s})}\|=\|\mathbf{W}-\bar{\mathbf{W}}\|\), and letting \(\tilde{\mathbf{W}}\doteq\mathbf{W}-\bar{\mathbf{W}}\), (5) can be alternatively written as \[\text{PCE}(\hat{\mathbf{K}}^{\prime},\mathbf{W})=\frac{\text{ssq}((\hat{\mathbf{K}}^{\prime},\tilde{\mathbf{W}}))}{\frac{1}{|\mathcal{I}\setminus\mathcal{S}|}\sum_{\mathbf{s}\in\mathcal{I}\setminus\mathcal{S}}(\hat{\mathbf{K}}^{\prime},C(\tilde{\mathbf{W}},\mathbf{s}))^{2}}, \tag{6}\] where we have assumed that the mean of \(\hat{\mathbf{K}}^{\prime}\) is zero due to the zero-meaning operation discussed above, and the signed-squared function \(\text{ssq}(\cdot)\) is such that \(\text{ssq}(x)\doteq\text{sgn}(x)\cdot x^{2}\).

### _Lens Distortion Models_

To describe radially symmetric barrel/pincushion distortions we adopt the same models presented in [8] and explained in [14, 15, 16].
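Before detailing the distortion models, the following minimal sketch illustrates the fingerprint estimate in (4) and the signed PCE in (6) for the grid-aligned case. It assumes the residuals come from the hypothetical `residual()` helper sketched above and uses an FFT-based circular cross-correlation; the 11x11 exclusion neighborhood follows the text.

```python
# Minimal sketch of the PRNU estimate (4) and of the signed PCE statistic (6)
# for grid-aligned signals. Assumes residual() from the previous sketch.
import numpy as np

def estimate_prnu(images):
    """K_hat = (sum_l I_l o W_l) o (sum_l I_l o I_l)^(o-1), eq. (4)."""
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros(images[0].shape, dtype=np.float64)
    for img in images:
        img = np.asarray(img, dtype=np.float64)
        num += img * residual(img)
        den += img * img
    k_hat = num / np.maximum(den, 1e-12)
    return k_hat - k_hat.mean()  # crude zero-meaning; [10] also removes row/column means

def pce(k_hat, w, excl=5):
    """Signed PCE, eq. (6), with a (2*excl+1)x(2*excl+1) cyclic exclusion set."""
    w = w - w.mean()
    # Circular cross-correlations <K', C(W,s)> for all shifts s, via the FFT.
    xc = np.real(np.fft.ifft2(np.conj(np.fft.fft2(k_hat)) * np.fft.fft2(w)))
    peak = xc[0, 0]                              # aligned term, s = (0,0)
    mask = np.ones_like(xc, dtype=bool)          # True where shifts count as noise
    mask[:excl + 1, :excl + 1] = False
    mask[:excl + 1, -excl:] = False
    mask[-excl:, :excl + 1] = False
    mask[-excl:, -excl:] = False
    noise = np.mean(xc[mask] ** 2)               # correlation-noise energy under H0
    return np.sign(peak) * peak ** 2 / noise
```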
If we denote the coordinates before and after the radial distortion by \((x,y)\) and \((x^{\prime},y^{\prime})\), the invertible geometrical mapping \(T_{\alpha}\) is given by \[T_{\alpha}:\mathbb{R}^{2} \rightarrow \mathbb{R}^{2}\] \[(x,y) \mapsto (x^{\prime},y^{\prime}) \tag{7}\] where \[x^{\prime}=x_{p}+(x-x_{p})(1+\alpha r^{2}); \tag{8}\] \[y^{\prime}=y_{p}+(y-y_{p})(1+\alpha r^{2}), \tag{9}\] and \((x_{p},y_{p})\) is the _optical center_ of the image and \(r^{2}\doteq[(x-x_{p})^{2}+(y-y_{p})^{2}]/D_{2}^{2}\) is the normalized squared radial distance from point \((x,y)\) to the optical center. This normalization by \(D_{2}^{2}\) is for convenience, so that \(r=1\) corresponds to half of the image diagonal [8]. Parameter \(\alpha\in\mathbb{R}\) in (8)-(9) models the type of radial distortion: \(\alpha>0\) for pincushion distortion, and \(\alpha<0\) for barrel distortion. Alternatively, given \((x_{p},y_{p})\) and assuming that \(T_{\alpha}(x_{p},y_{p})=(x_{p},y_{p})\), the transformation can be written in normalized polar coordinates. Since the phase is preserved under \(T_{\alpha}(\cdot)\), with a slight abuse of notation we will drop the phase component and sometimes write the radial transformation as \(T_{\alpha}:\mathbb{R}^{+}\cup\{0\}\rightarrow\mathbb{R}^{+}\cup\{0\}\) such that \[r^{\prime}=T_{\alpha}(r)=r(1+\alpha r^{2}). \tag{10}\] More complex radial corrections [17, 9] can be expressed through an \(n\)th order model: \[r^{\prime}=T_{\boldsymbol{\alpha}}(r)=r\left(1+\sum_{i=1}^{n}\alpha_{i}r^{2i}\right), \tag{11}\] where \(\boldsymbol{\alpha}\doteq[\alpha_{1},\cdots,\alpha_{n}]^{T}\) is a real parameter vector. Again, with some abuse of notation, and following [8], given a signal \(\mathbf{X}\) with domain \(\mathcal{S}_{X}\), the mapping \(\mathbf{Y}=T_{\boldsymbol{\alpha}}(\mathbf{X})\) is produced as follows. Let \(\mathbf{X}^{\prime}\) be the signal with domain \(\mathcal{S}_{X^{\prime}}=T_{\boldsymbol{\alpha}}(\mathcal{S}_{X})\) such that, for every \((u,v)\in\mathcal{S}_{X}\), and with \((u^{\prime},v^{\prime})=T_{\boldsymbol{\alpha}}(u,v)\), \(X^{\prime}_{u^{\prime},v^{\prime}}=X_{u,v}\). Then, given an output domain \(\mathcal{S}_{Y}\), the signal \(\mathbf{Y}=T_{\boldsymbol{\alpha}}(\mathbf{X})\) is obtained by interpolating the signal \(\mathbf{X}^{\prime}\) defined on \(\mathcal{S}_{X^{\prime}}\) at the points in \(\mathcal{S}_{Y}\). Of course, precautions must be taken when specifying \(\mathcal{S}_{Y}\) so that the interpolation is computable at all points in \(\mathcal{S}_{Y}\). This aspect will be made clearer in Sect. IV, when we present our method.

### _Direct and inverse approaches to PCE computation_

When the image under analysis has been subjected to a radial distortion correction, the statistic \(\text{PCE}(\hat{\mathbf{K}}^{\prime},\mathbf{W})\) is expected to perform poorly under \(H_{1}\) in the hypothesis test, because the grids supporting \(\hat{\mathbf{K}}^{\prime}\) and \(\mathbf{W}\) will not coincide (recall that the PRNU has a very narrow spatial autocorrelation function). The approach explored in [8] is to take into account the distortion correction when computing the PCE. If the parameter vector \(\boldsymbol{\alpha}\) of the radial mapping is known, there are essentially two possibilities, which we will term _direct_ and _inverse_. In the direct approach, the candidate PRNU \(\hat{\mathbf{K}}^{\prime}\) is transformed in order for its grid to match that of \(\mathbf{W}\).
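The grid mapping just described (and used to warp either \(\hat{\mathbf{K}}^{\prime}\) or \(\mathbf{W}\) below) can be sketched as follows. It implements the scalar model (10) and its approximate inverse (16) by inverse mapping with bilinear interpolation through `scipy.ndimage.map_coordinates`; the assumption that the optical center coincides with the image center is ours, made only for the example.

```python
# Minimal sketch of applying a radial mapping to a 2-D signal by inverse mapping
# with bilinear interpolation (cf. Sect. II-C). The optical center is assumed to
# coincide with the image center, an assumption made only for this example.
import numpy as np
from scipy.ndimage import map_coordinates

def warp_radial(img, radius_map):
    """Warp img; radius_map sends each *output* radius (normalized so that r=1
    is half the diagonal) to the *input* radius it should be sampled from."""
    img = np.asarray(img, dtype=np.float64)
    m, n = img.shape
    yc, xc = (m - 1) / 2.0, (n - 1) / 2.0
    d2 = 0.5 * np.hypot(m, n)                         # half diagonal, in pixels
    yy, xx = np.mgrid[0:m, 0:n].astype(np.float64)
    r_out = np.hypot(yy - yc, xx - xc) / d2
    scale = np.divide(radius_map(r_out), r_out,
                      out=np.ones_like(r_out), where=r_out > 0)
    coords = np.array([yc + (yy - yc) * scale, xc + (xx - xc) * scale])
    # Bilinear interpolation; points sampled outside the grid become NaN.
    return map_coordinates(img, coords, order=1, cval=np.nan)

def t_direct(img, a):
    """T_alpha(X): sample at the approximate inverse radius, eq. (16)."""
    return warp_radial(img, lambda r: r * (1 - a * r**2 + 3 * a**2 * r**4))

def t_inverse(img, a):
    """T_alpha^{-1}(X): sample at the forward radius, eq. (10)."""
    return warp_radial(img, lambda r: r * (1 + a * r**2))
```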
Then, the test statistic becomes \[\text{PCE}_{\text{dir}}(\boldsymbol{\alpha})\doteq\text{PCE}(T_{\boldsymbol{\alpha}}(\hat{\mathbf{K}}^{\prime}),\mathbf{W}), \tag{12}\] where the domain \(\mathcal{I}_{T}\) of \(T_{\boldsymbol{\alpha}}(\hat{\mathbf{K}}^{\prime})\) is the largest rectangular subset of \(\mathcal{I}\) for which the interpolation is computable (see discussion at the end of Sect. II-C) and, accordingly, \(\mathcal{I}_{T}\) replaces \(\mathcal{I}\) in the denominator of (5). In the inverse approach \(\mathbf{W}\) is mapped back to the original domain, so that its grid coincides with that of \(\hat{\mathbf{K}}^{\prime}\). Then, the test statistic in this case is \[\text{PCE}_{\text{inv}}(\boldsymbol{\alpha})\doteq\text{PCE}(\hat{\mathbf{K}}^{\prime},T_{\boldsymbol{\alpha}}^{-1}(\mathbf{W})), \tag{13}\] where, as above, the domain \(\mathcal{I}_{T}\) of \(T_{\boldsymbol{\alpha}}^{-1}(\mathbf{W})\) is the largest rectangular subset of \(\mathcal{I}\) for which the interpolation is computable and \(\mathcal{I}_{T}\) replaces \(\mathcal{I}\) in the denominator of (6). Since one is interested in finding the best possible match, [8] suggests using the following statistic \[\text{PCE}_{\text{max}}(\boldsymbol{\alpha})\doteq\max\{\text{PCE}_{\text{dir}}(\boldsymbol{\alpha}),\text{PCE}_{\text{inv}}(\boldsymbol{\alpha})\}. \tag{14}\] When the parameter vector \(\boldsymbol{\alpha}\) is not known, which is often the case in practice, it must be estimated. In [8] this is done by maximizing the test statistic in (12)-(13), which makes sense from a maximum likelihood point of view. Let \(\mathcal{A}\subset\mathbb{R}^{n}\) be the set of feasible vectors \(\boldsymbol{\alpha}\); then, the statistic used in the hypothesis test is \[\text{PCE}_{\text{max}}^{*}\doteq\max_{\boldsymbol{\alpha}\in\mathcal{A}}\text{PCE}_{\text{max}}(\boldsymbol{\alpha}). \tag{15}\] For the case of scalar \(\alpha\) in (10) the inverse radial correction \(T_{\alpha}^{-1}(\mathbf{W})\) needed in (13) can be approximated via the Lagrange Inversion Theorem [18, 3.6.6.], which yields \[r=T_{\alpha}^{-1}(r^{\prime})=r^{\prime}(1-\alpha r^{\prime 2}+3\alpha^{2}r^{\prime 4}+O(r^{\prime 6})). \tag{16}\] Using the approach described above, the radial correction can be approximately inverted in many practical cases by finding the optimal value of \(\alpha\) [8]. However, when more complex radial corrections as in (11) have been applied, a single parameter \(\alpha\) may not be sufficient. To illustrate this fact, we consider the example of an image of size \(3456\times 5184\) taken with a Canon 1200D camera using a Canon EF-S 10-18mm lens and radially corrected with Adobe Lightroom (with settings for the mounted lens, using the strongest correction). We partitioned the image into non-overlapping annuli of width 64 pixels and found for each annulus--through exhaustive search--the value of \(\alpha\) that maximizes \(\text{PCE}_{\text{inv}}(\alpha)\) in (13), where inversion is done via (16). The result is plotted in Fig. 1 as a function of the inner radius of the annulus. As it is quite apparent, there is a dependence of \(\alpha\) on \(r\) that indicates that one parameter alone is not sufficient to describe the radial transformation and that a more intricate relationship--even if parametric--must be sought.
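As a concrete illustration of (14)-(15) for the scalar model, the sketch below scans a grid of \(\alpha\) values and keeps the best \(\text{PCE}_{\text{max}}\). It reuses the hypothetical `pce`, `t_direct` and `t_inverse` helpers sketched earlier, and simply zeroes pixels that fall outside the computable region instead of cropping to the largest rectangular subset \(\mathcal{I}_{T}\), which is a deliberate simplification.

```python
# Minimal sketch of the grid search (15) over a scalar alpha, reusing the
# hypothetical helpers pce(), t_direct() and t_inverse() sketched above.
import numpy as np

def pce_max_scan(k_hat, w, alphas):
    best_alpha, best_pce = None, -np.inf
    for a in alphas:
        warped_k = np.nan_to_num(t_direct(k_hat, a))        # for PCE_dir, eq. (12)
        warped_w = np.nan_to_num(t_inverse(w, a))           # for PCE_inv, eq. (13)
        score = max(pce(warped_k, w), pce(k_hat, warped_w))  # eq. (14)
        if score > best_pce:
            best_alpha, best_pce = a, score
    return best_alpha, best_pce

# Usage sketch: coarse scan over barrel/pincushion strengths.
# alphas = np.arange(-0.22, 0.2201, 0.005)
# a_star, pce_star = pce_max_scan(k_hat, w, alphas)
```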
## III State of the Art

The PCE is very sensitive to the correct alignment of the locations corresponding to the estimated PRNU and the residual; this means that unless a value of \(\boldsymbol{\alpha}\) very close to the true one is used in the mappings in (12) or (13), the resulting PCE will be very small, and hypothesis \(H_{1}\) is likely to be rejected when it is in force. To illustrate this phenomenon, in Fig. 2 we show the function \(\text{PCE}_{\text{max}}(\alpha)\) for an image taken with a Panasonic DMC-ZS7 camera, shutter speed: 1/400 s, aperture: f4.4, focal length: 19.5 mm, and ISO 100. The stepsize in \(\alpha\) is \(2\cdot 10^{-3}\). As we can observe, under \(H_{1}\) the function is very spiky, with the consequence that a sufficiently dense grid must be used; otherwise, it is easy to miss the peak. In addition, this spikiness precludes the use of gradient-based algorithms, because they would only work in the very close vicinity of the peak. Therefore, any search grid in the parameter space has to be fine enough to be able to locate the maximum. The method in [8] considers that the transformations (both the direct and the inverse) are parameterized by a scalar \(\alpha\) and starts by selecting a search interval \([-A,A]\) which is progressively made finer so that at each iteration \(k\), with \(k=1,\cdots,k_{\text{max}}\), a grid with \(2^{k}+1\) points is generated. Note that at the \((k+1)\)th iteration only \(2^{k}\) new points are produced. A threshold \(\tau_{1}\) is set so that if, after all \(k_{\text{max}}\) iterations, no \(\alpha\) exists in the grid such that \(\text{PCE}_{\text{max}}(\alpha)>\tau_{1}\), then the search is stopped and a mismatch is declared (i.e., \(H_{0}\) is decided). At every iteration, \(\text{PCE}_{\text{max}}\) is maximized over all grid points; this requires computing it only for the new points. Let \(\alpha^{\circ}\) denote the grid point for which the maximum is obtained; if at some iteration \(\text{PCE}_{\text{max}}(\alpha^{\circ})>\tau_{1}\), then the search stops and the algorithm proceeds to the second stage in order to refine the value of \(\alpha^{\circ}\). However, in order to speed up the process, the maximization skips the exhaustive enumeration of all grid points provided that \(k>4\) whenever \(\alpha^{\dagger}\) is found such that \(\text{PCE}_{\text{max}}(\alpha^{\dagger})>\tau_{2}\) (with \(\tau_{2}>\tau_{1}\)). In this case, the algorithm proceeds to the second stage by searching around \(\alpha^{\dagger}\). The second stage takes the value of \(\alpha\) with which the first stage was exited and constructs an interval with its two neighboring points in the grid. If \(k^{*}\) is the exit value of \(k\) for the first stage, then this interval has width \(A/2^{k^{*}-1}\). Next, a golden section search is performed until the width of the interval is approximately \(1/(8D_{2})\), with \(D_{2}\) the half-diagonal of the image. Let \(\alpha^{*}\) be the value found with the golden section search; then, if \(\text{PCE}_{\text{max}}(\alpha^{*})>\tau_{3}\) hypothesis \(H_{1}\) is accepted; else, \(H_{0}\) is declared. The thresholds suggested in [8] are \(\tau_{1}=15\) and \(\tau_{2}=\tau_{3}=75\), and \(k_{\text{max}}=7\). To reduce the computational load, [8] downsamples the signals by a factor of two in each dimension; since this has an impact on accuracy in some cases, in the experimental section we will consider both the downsampled (DS) and non-downsampled versions.
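The second-stage refinement of [8] relies on a standard golden-section search; the sketch below shows the generic procedure, with the stopping width \(1/(8D_{2})\) taken from the description above. It is a schematic re-implementation with names of our own choosing, not the authors' code.

```python
# Schematic golden-section refinement of a scalar alpha (second stage of [8]).
# `objective` would be PCE_max(alpha); this is a generic sketch, not the
# reference implementation of [8].
def golden_section_max(objective, lo, hi, tol):
    phi = (5 ** 0.5 - 1) / 2                  # inverse golden ratio
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    fc, fd = objective(c), objective(d)
    while (b - a) > tol:
        if fc > fd:                           # maximum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - phi * (b - a)
            fc = objective(c)
        else:                                 # maximum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + phi * (b - a)
            fd = objective(d)
    return (a + b) / 2

# Usage sketch: refine around the coarse-grid maximizer alpha0 with grid step h,
# stopping when the bracket is about 1/(8*D2) wide.
# alpha_star = golden_section_max(pce_max, alpha0 - h, alpha0 + h, tol=1 / (8 * D2))
```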
The method in [9] takes a different approach to perform the inversion of radially-corrected barrel distortions by employing the so-called linear patterns that are present in the residuals and are due to artifacts of the capturing device. These patterns are typically removed prior to source attribution, but when kept, they serve as pilot signals that may be used to infer the radial correction distortion. The feature that is used to steer the parameter estimation is the energy of the linear pattern, defined for a given residual \(\mathbf{W}\) as \(E(\mathbf{W})\doteq\|\mathbf{c}\|^{2}+\|\mathbf{r}\|^{2}\), where \(\mathbf{c}\) and \(\mathbf{r}\) are vectors containing respectively the column and row averages of \(\mathbf{W}\). Then, considering the set of fourth-order transformations \(T_{\boldsymbol{\alpha}}(r)=r(1+\alpha_{2}r^{2}+\alpha_{4}r^{4})\), where \(\boldsymbol{\alpha}=(\alpha_{2},\alpha_{4})\), the method in [9] seeks to maximize \(E(T_{\boldsymbol{\alpha}}^{-1}(\mathbf{W}))\) with respect to \(\boldsymbol{\alpha}\), with the rationale that when the correct inverse transformation is applied, the linear pattern is recovered; otherwise, the column and row averages will be expected to produce low values. The fact that the transformation is now parameterized by two variables \(\alpha_{2}\) and \(\alpha_{4}\) gives more flexibility in inverting the transformation, but potentially incurs a larger computational cost. To make the optimization more manageable, a first stage consists in fitting a second-degree polynomial on variable \(\alpha_{2}\) to values of \(E(T_{\boldsymbol{\alpha}}^{-1}(\mathbf{W}))\) sampled on a grid for \(\alpha_{2}\in[\alpha_{\min},\alpha_{\max}],\alpha_{\min}>0\), and \(\alpha_{4}=0\). The reason for this choice of \(\alpha_{4}\) is that in practice the contribution of \(\alpha_{4}\) to \(T_{\boldsymbol{\alpha}}(r)\) is only significant for large \(r\), that is, far from the image center. This first stage yields the value \(\alpha_{2}^{(1)}\) of \(\alpha_{2}\) that maximizes the difference between the energy of the linear pattern and its polynomial fit. The second stage employs a Nelder-Mead optimization (using the linear pattern energy as cost function) that is initialized with three points derived from \(\alpha_{2}^{(1)}\). This produces the two optimal radial correction parameters \((\alpha_{2}^{*},\alpha_{4}^{*})\). Due to noise, the previous procedure will yield an optimum \(\alpha_{2}\neq 0\) regardless of whether radial correction was applied. Then, the decision is confirmed only if the cost function evaluated in a neighborhood of \((\alpha_{2}^{*},\alpha_{4}^{*})\) corroborates the existence of a significant peak; otherwise, the image is deemed to be not radially corrected.

Figure 1: Values of \(\alpha\) maximizing \(\text{PCE}_{\text{inv}}(\alpha)\) vs inner radius of the annulus. Values are linearly interpolated. Canon 1200D camera with EF-S 10-18mm lens, corrected with Adobe Lightroom. Focal length: 10mm. Shutter speed: 1/100 sec. Aperture: f7.1. ISO 800. The PRNU is estimated with 20 natural images all taken with those settings.

Figure 2: \(\text{PCE}_{\text{max}}\) as a function of \(\alpha\) for a Panasonic DMC-ZS7 camera.

Even though, as we will see in Sect. V, the performance of the two methods outlined above is rather good, they have two main intrinsic limitations that we aim at overcoming with our work: 1) their corresponding first stages employ an exhaustive search on a _fixed_ grid.
This fact, together with the high sensitivity of the PCE with respect to changes in the parameter vector \(\boldsymbol{\alpha}\) about the correct one (which results in a very spiky objective function), advises the use of a relatively tight grid to minimize the risk of missing the optimum. Unfortunately, this tightness entails a significant computational cost. 2) Again, due to the computational cost of an exhaustive search, the transformations \(T_{\boldsymbol{\alpha}}\) and \(T_{\boldsymbol{\alpha}}^{-1}\) use a small number of parameters: one in [8], and two in [9]. Therefore, these parameterizations are unable to capture more complex radial corrections, such as those employed by editing programs, a trend that is likely to increase as the capabilities of out-of-camera processing improve.

## IV Proposed Method

In order to motivate the method proposed in this paper, we will rely on an example generated with the popular photo editing software _Adobe Lightroom_ that will give us the necessary clues. Images were taken with a Canon 1200D camera and then radially corrected with Lightroom. In Fig. 3 we superimpose two \(\text{PCE}_{\text{inv}}\) maps (corresponding to \(\alpha=-0.01\) and \(\alpha=0.05\)) in which \(\text{PCE}_{\text{inv}}(\alpha)\) is computed using (13) and (16) for non-overlapping blocks of size \(64\times 64\). For mere illustrative purposes, and in order to enhance the visibility, the (radially corrected) image under analysis (from which \(\mathbf{W}\) is computed) is one of the 20 flat-field images used to extract \(\hat{\mathbf{K}}^{\prime}\). As we can see, the region where the PCE is significant is an annulus, and the position of the annulus depends on \(\alpha\). This shows that if \(L(r)\) denotes the radial correction induced by the software and \(L^{-1}(r)\) its inverse, then for a given \(\alpha=\alpha_{0}\), \(T_{\alpha_{0}}^{-1}(r)\approx L^{-1}(r)\) only in a small neighborhood of some \(r=r^{*}\). This experiment clearly indicates that for complex radial corrections, an approach like (16) will not work. However, the fact that the inversion works locally suggests breaking the problem into non-overlapping concentric annuli as shown in Fig. 4, and solving each separately.

### _Set partitioning and transform computation_

Let \(\mathcal{R}_{k}\), \(k=1,\cdots,L\), be the \(k\)th annulus described by an inner radius \(r_{k}\) (recall that radii are scaled by \(D_{2}\) so that \(r=1\) corresponds to half of the image diagonal) and a width \(\Delta_{k}\) as follows: \[\mathcal{R}_{k}\doteq\{(u,v)\in\mathbb{R}^{2}:r_{k}^{2}\leq u^{2}+v^{2}<(r_{k}+\Delta_{k})^{2}\}. \tag{17}\] The inner radii are generated as \(r_{k+1}=r_{k}+\Delta_{k}\), with \(r_{1}=0\), and the inner radius of the last annulus \(r_{L}\) is such that \(r_{L}<1<r_{L}+\Delta_{L}\) (see Fig. 4). This definition implies that the first annulus degenerates into a disk and the image is fully covered by annuli. Except for this degenerate annulus, in this work we will assume that \(\Delta_{k}=\Delta\) for all \(k\). The experiment shown in Fig. 1 (obtained by applying a brute-force search for each annulus) suggests that a good modeling of the radial correction can be obtained by allowing \(\alpha\) to vary with \(r\), so (10) in this case becomes \[r^{\prime}=T_{\alpha(r)}(r)=r(1+\alpha(r)\cdot r^{2}). \tag{18}\] The idea is that by allowing \(\alpha\) to be a function of \(r\), we achieve much more flexibility in modeling complex distortions.
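A minimal sketch of the partition (17) on the pixel grid is given below; it returns one boolean mask per annulus (the sets \(\mathcal{P}_{k}\) formalized later in this section), under the same assumption as before that the optical center is the image center, and with the default sizes of Sect. IV-G (a 250-pixel inner disk and 64-pixel annuli) used purely as example values.

```python
# Minimal sketch of the annular partition (17) on the pixel grid, returning one
# boolean mask per annulus. The image center is assumed to be the optical center;
# widths follow the defaults discussed in Sect. IV-G.
import numpy as np

def annulus_masks(shape, inner_disk_px=250, width_px=64):
    m, n = shape
    yc, xc = (m - 1) / 2.0, (n - 1) / 2.0
    d2 = 0.5 * np.hypot(m, n)                       # half diagonal D2, in pixels
    dy = np.arange(m)[:, None] - yc
    dx = np.arange(n)[None, :] - xc
    rad = np.hypot(dy, dx)
    edges = [0.0, float(inner_disk_px)]             # first (degenerate) annulus is a disk
    while edges[-1] < d2:
        edges.append(edges[-1] + width_px)          # last annulus extends past r = 1
    masks = [(rad >= lo) & (rad < hi) for lo, hi in zip(edges[:-1], edges[1:])]
    return masks, d2

# Example: masks, d2 = annulus_masks((3456, 5184)); len(masks) annuli cover the image.
```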
Moreover, as long as the annuli are thin enough, the zero-th order approximation \(\alpha(r)\approx\alpha(r_{k}+\Delta_{k}/2)\doteq\alpha_{k}\) will be reasonably good for all \(r\in\mathcal{R}_{k}\). This local approximation will allow us to use (16) for the inverse transform. However, since we are allowing \(\alpha\) to vary with \(r\), instead of a locally cubic dependence, as in (18), it also makes sense to consider a locally linear one, i.e., \(r^{\prime}=T_{\alpha(r)}(r)=r(1+\alpha(r))\). Even though for the generic mappings we will keep using \(T_{\alpha_{k}}(r)\) and \(T_{\alpha_{k}}^{-1}(r)\) for the sake of generality, we specialize them by adding the sub-indices \(c\) to denote cubic, and \(l\) to denote linear. Therefore, on each annulus we write \[T_{\alpha_{k},c}(r) \doteq r(1+\alpha_{k}\cdot r^{2});\] \[T_{\alpha_{k},l}(r) \doteq r(1+\alpha_{k}),\ \ r\in\mathcal{R}_{k}, \tag{19}\] whereas the corresponding inverse mappings are \[T_{\alpha_{k},c}^{-1}(r^{\prime}) \approx r^{\prime}(1-\alpha_{k}\cdot r^{\prime 2}+3\alpha_{k}^{2}r^{\prime 4}),\ \ r^{\prime}\in T_{\alpha_{k},c}(\mathcal{R}_{k});\] \[T_{\alpha_{k},l}^{-1}(r^{\prime}) = \frac{r^{\prime}}{1+\alpha_{k}},\ \ r^{\prime}\in T_{\alpha_{k},l}(\mathcal{R}_{k}). \tag{20}\]

Figure 4: Annular partition used in the proposed method.

Note that the ranges of the inverse transforms in (20) may be different because the image of each annulus will differ under the locally cubic and locally linear mappings. Given a collection of annuli \(\mathcal{R}_{k}\), \(k=1,\cdots,L\), one can see the mapping \(T_{\boldsymbol{\alpha}}(r)\) in (12) as a sequence of transformations \(T_{\alpha_{k}}(r)\), \(k=1,\cdots,L\), that is parameterized by a vector \(\boldsymbol{\alpha}=[\alpha_{1},\cdots,\alpha_{L}]^{T}\). Obviously, the maximization of the PCE with respect to \(\boldsymbol{\alpha}\in\mathcal{A}\doteq\mathcal{A}_{1}\times\cdots\times\mathcal{A}_{L}\), with \(\mathcal{A}_{k}\) the feasible set for \(\alpha_{k}\), would suffer from a combinatorial explosion due to the \(L\) dimensions involved, so we will be interested in finding efficient alternative ways for performing an approximate maximization. A first step is to treat each annulus separately and find the optimal value of \(\alpha_{k}\) constrained to the \(k\)th annulus. There are several possible approaches at this stage. One would be to find \(\alpha_{k}\) that maximizes the PCE constrained to the \(k\)th annulus; unfortunately, since the total PCE _is not_ the sum of those constrained PCEs, it is quite difficult to work individually with each annulus using such a criterion. Instead, we have opted for a maximum likelihood estimation approach that aims at finding the \(\alpha_{k}\) that has the highest likelihood of producing the observed cross-correlations with the estimated PRNU. Once we describe how the optimal \(\alpha_{k}\) can be found for each annulus in an adaptive way (Sect. IV-C), we proceed by explaining how the PCE can be computed and updated (Sect. IV-D). In the following, we give a formal description of the annuli for the inverse approach (i.e., using \(T_{\alpha_{k}}^{-1}\)) and afterwards indicate how to adapt the discussion to the direct approach.
Let \(\mathcal{P}_{k}\) be the set of points of the image grid that are contained in the \(k\)th annulus, i.e., \[\mathcal{P}_{k}\doteq(D_{2}\cdot\mathcal{R}_{k})\cap\mathcal{I},\ \ k=1,\cdots,L, \tag{21}\] where multiplication of \(\mathcal{R}_{k}\) by \(D_{2}\) (i.e., half the diagonal in pixels) is necessary to re-scale the annulus back to integer-valued coordinates (recall that \(r=1\) corresponds to half the diagonal). Given \(\tilde{\mathbf{W}}=\mathbf{W}-\bar{\mathbf{W}}\) and \(\mathcal{P}_{k}\), computation of \(T_{\alpha_{k}}^{-1}(\tilde{\mathbf{W}})\) proceeds as follows (see Fig. 5). First, the image of the set \(\mathcal{P}_{k}\) under \(T_{\alpha_{k}}^{-1}\), i.e., \(T_{\alpha_{k}}^{-1}(\mathcal{P}_{k})\), is calculated and the transformed points lying outside the image boundaries \(\mathcal{B}\) are discarded, as the subsequent interpolation would not be computable. For the remaining points, \(T_{\alpha_{k}}^{-1}(\tilde{\mathbf{W}})\) is obtained by interpolation from \(\tilde{\mathbf{W}}\). We let \(\mathcal{Q}_{k,\text{inv}}(\alpha_{k})\) be the set of points of \(\mathcal{P}_{k}\) for which their image under \(T_{\alpha_{k}}^{-1}\) exists (the sub-index inv stands for 'inverse approach'). Formally, this set is \[\mathcal{Q}_{k,\text{inv}}(\alpha_{k})=D_{2}\cdot T_{\alpha_{k}}\left(\left[\left(D_{2}\cdot T_{\alpha_{k}}^{-1}(\mathcal{P}_{k}/D_{2})\right)\cap\mathcal{B}\right]/D_{2}\right). \tag{22}\] Notice that if the set \(\mathcal{P}_{k}\) transformed via \(T_{\alpha_{k}}^{-1}\) does not get out of the image bounds \(\mathcal{B}\), then \(\mathcal{Q}_{k,\text{inv}}(\alpha_{k})=\mathcal{P}_{k}\); otherwise, \(\mathcal{Q}_{k,\text{inv}}(\alpha_{k})\subset\mathcal{P}_{k}\). As a consequence, \(\mathcal{Q}_{k,\text{inv}}(\alpha_{k})\cap\mathcal{P}_{k}=\mathcal{Q}_{k,\text{inv}}(\alpha_{k})\). Also notice that, as explicitly indicated, the set \(\mathcal{Q}_{k,\text{inv}}(\alpha_{k})\) and, in particular, its cardinality, varies with \(\alpha_{k}\). For the direct approach, the considerations are similar. Basically, we have to exchange the roles of \(T_{\alpha_{k}}\) and \(T_{\alpha_{k}}^{-1}\). Recalling that the sub-index \(\text{dir}\) stands for 'direct approach', the set \(\mathcal{Q}_{k,\text{dir}}(\alpha_{k})\) can be formally written as \[\mathcal{Q}_{k,\text{dir}}(\alpha_{k})=D_{2}\cdot T_{\alpha_{k}}^{-1}\left(\left[\left(D_{2}\cdot T_{\alpha_{k}}(\mathcal{P}_{k}/D_{2})\right)\cap\mathcal{B}\right]/D_{2}\right). \tag{23}\]

### _Optimization with respect to \(\alpha_{k}\)_

Once the annuli have been characterized, in this section we address the problem of finding the optimal values of \(\alpha_{k}\) that parameterize the transformations \(T_{\alpha_{k}}^{-1}\) and \(T_{\alpha_{k}}\) for the \(k\)th annulus. For the sake of compactness, we will find it useful to denote the cross-correlation and the energy of the transformed residual computed over \(\mathcal{Q}_{k,\text{inv}}(\alpha_{k})\) as, respectively, \[\Phi_{k,\text{inv}}(\alpha_{k}) \doteq \sum_{(i,j)\in\mathcal{Q}_{k,\text{inv}}(\alpha_{k})}\hat{K}_{i,j}^{\prime}\cdot\left[T_{\alpha_{k}}^{-1}(\tilde{\mathbf{W}})\right]_{i,j}, \tag{24}\] \[\mathsf{E}_{k,\text{inv}}(\alpha_{k}) \doteq \sum_{(i,j)\in\mathcal{Q}_{k,\text{inv}}(\alpha_{k})}\left[T_{\alpha_{k}}^{-1}(\tilde{\mathbf{W}})\right]_{i,j}^{2}, \tag{25}\] thus making implicit the use of the inverse transformation \(T_{\alpha_{k}}^{-1}(\cdot)\), and of \(\hat{\mathbf{K}}^{\prime}\) and \(\tilde{\mathbf{W}}\).
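These per-annulus quantities are straightforward to compute once the warped residual is available. The sketch below does so for the inverse approach, reusing the hypothetical `warp_radial` and `annulus_masks` helpers; treating NaN (non-computable) pixels as excluded plays the role of restricting the sums to \(\mathcal{Q}_{k,\text{inv}}(\alpha_{k})\). The ratio it returns anticipates the per-annulus objective introduced next.

```python
# Minimal sketch of the per-annulus cross-correlation (24) and energy (25) for
# the inverse approach, reusing the hypothetical warp_radial() and
# annulus_masks() helpers. Pixels whose pre-image falls outside the image (NaN)
# are excluded, mimicking the restriction to Q_{k,inv}(alpha_k).
import numpy as np

def phi_and_energy_inv(k_hat, w_tilde, mask_k, alpha_k):
    # T_{alpha_k}^{-1}(W~): sample the residual at the *forward* radius, eq. (10).
    w_inv = warp_radial(w_tilde, lambda r: r * (1 + alpha_k * r**2))
    valid = mask_k & np.isfinite(w_inv)
    phi = float(np.sum(k_hat[valid] * w_inv[valid]))       # eq. (24)
    energy = float(np.sum(w_inv[valid] ** 2))              # eq. (25)
    return phi, energy

def objective_inv(k_hat, w_tilde, mask_k, alpha_k):
    """Ratio Phi / E over the k-th annulus, used as the per-annulus objective."""
    phi, energy = phi_and_energy_inv(k_hat, w_tilde, mask_k, alpha_k)
    return phi / energy if energy > 0 else -np.inf
```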
Similarly, we denote by \(\Phi_{k,\text{dir}}(\alpha_{k})\) and \(\mathsf{E}_{k,\text{dir}}(\alpha_{k})\) the cross-correlation and energy for the direct mapping \(T_{\alpha_{k}}(\cdot)\) computed over \(\mathcal{Q}_{k,\text{dir}}(\alpha_{k})\). In Appendix A we derive an estimator of \(\alpha_{k}\) on the \(k\)th annulus. This estimator is rooted in the principle of maximum likelihood applied to the output of a bank of cross-correlations. For the inverse approach, this becomes \[\alpha_{k}^{*}=\arg\max_{\alpha_{k}\in\mathcal{A}_{k}}\varphi_{k,\text{inv}}(\alpha_{k}), \tag{26}\] where \[\varphi_{k,\text{inv}}(\alpha_{k})\doteq\frac{\Phi_{k,\text{inv}}(\alpha_{k})}{\mathsf{E}_{k,\text{inv}}(\alpha_{k})}. \tag{27}\] For the direct approach, the optimization is carried out after replacing the subindex inv by \(\text{dir}\) in both (26) and (27). We notice that the proposed objective function is different from the PCE (constrained to the \(k\)th annulus); besides the theoretical justification in Appendix A, in [19] we provide empirical evidence that optimization of our objective function renders better global performance.

### _Adaptive optimization_

One key observation from Fig. 1 is that the sequence \(\alpha_{k}^{*},k=1,\cdots,L\), changes smoothly for sufficiently small \(\Delta_{k}\). This hints at the possibility of reducing the computational complexity of the exhaustive search by using an adaptive predictor. In our case, we will show experimentally that a linear predictor \(\mathbf{u}\) with length \(U\) suffices to achieve excellent results. In the following, we explain this adaptive procedure. As above, we will give the details for the inverse approach, as the direct one is methodologically identical. First, we need to select an initial index that we will denote by \(k_{0}\). To this end, we look for the annulus that gives the best results under no transformations (i.e., when \(\alpha_{k}=0\)). Formally, this implies that \[k_{0}=\arg\max_{k=1,\cdots,L}\varphi_{k,\text{inv}}(0). \tag{28}\] Once this initial point is found, the optimal value of \(\alpha_{k_{0}}\) is found by exhaustive search in a discrete set around \(\alpha_{k_{0}}=0\). Let \(\mathcal{A}_{k_{0}}\) be such a neighborhood; then, following (26), \(\alpha_{k_{0}}^{*}=\arg\max_{\alpha_{k_{0}}\in\mathcal{A}_{k_{0}}}\varphi_{k_{0},\text{inv}}(\alpha_{k_{0}})\). We will find it useful to define an auxiliary sequence \(\{\beta_{k}\}\) that is initialized as \(\beta_{k}=\alpha_{k_{0}}^{*}\cdot\delta_{k-k_{0}}\), where \(\delta_{k}\) is Kronecker's delta.1 This sequence is used to store the regressor values. Since the starting point is \(k=k_{0}\), there are two possible directions for the prediction: forward (i.e., \(k>k_{0}\)), and backward (i.e., \(k<k_{0}\)).2 We will describe how the former is carried out, and then indicate the modifications needed for the latter. We define the forward regressor at index \(k\) as \(\mathbf{\beta}_{k}^{T}\triangleq\left[\beta_{k-U+1},\cdots,\beta_{k-1},\beta_{k}\right]\), where \(U\) is the length. Notice that as a consequence of initializing the auxiliary sequence, \(\mathbf{\beta}_{k_{0}}^{T}=[0,\cdots,0,\alpha_{k_{0}}^{*}]\). We also need a vector of weights at index \(k\) that will be denoted by \(\mathbf{u}_{k}\); this vector of length \(U\) is initialized as \(\mathbf{u}_{k_{0}}^{T}=[0,\cdots,0,1]\).
Then, for \(k>k_{0}\) the output of the predictor at index \(k\) will be computed as \[\hat{\alpha}_{k}=\mathbf{u}_{k-1}^{T}\mathbf{\beta}_{k-1}, \tag{29}\] for \(k=k_{0}+1,\cdots,L\).

Footnote 1: Although from a notational point of view, it would be more correct to define a sequence for every iteration of the algorithm, we allow replacing values in this sequence in order to avoid overcomplicating the notation.

Footnote 2: Degenerate cases arise when \(k_{0}=L\) or \(k_{0}=1\), for which the forward and backward predictions, respectively, are not needed.

This predicted value is refined by exhaustive search in a discrete neighborhood of \(\hat{\alpha}_{k}\). Let \(\mathcal{A}_{k}\) denote such a neighborhood; then \(\alpha_{k}^{*}\) is obtained as in (26). The details on how the neighborhood \(\mathcal{A}_{k}\) is constructed are given below. Before that, we explain the updating procedure for \(\mathbf{u}_{k}\) and \(\mathbf{\beta}_{k}\). To that end, we define the _a posteriori error_ at index \(k\) as \[e_{k}\triangleq\alpha_{k}^{*}-\hat{\alpha}_{k}, \tag{30}\] for \(k=k_{0}+1,\cdots,L\). This error is used to drive the adaptive algorithm. It is easy to show that the gradient vector of \(|e_{k}|^{2}\) with respect to the weights vector \(\mathbf{u}_{k-1}\) is equal to \(-2e_{k}\mathbf{\beta}_{k-1}\). Then, following the Least Mean Squares algorithm [20], we propose to update the weights by taking a step in the direction of the negative gradient, that is, \[\mathbf{u}_{k}=\mathbf{u}_{k-1}+\mu e_{k}\mathbf{\beta}_{k-1},\ \ k=k_{0}+1,\cdots,L, \tag{31}\] where \(\mu\) is the so-called step-size. The update of the sequence \(\{\beta_{k}\}\) containing the regressor is done by making \(\beta_{k}=\alpha_{k}^{*}\); the forward regressor vector \(\mathbf{\beta}_{k}\) is updated accordingly. This iterative procedure is then repeated by going back to (29) and proceeding until the sequence \(\alpha_{k_{0}+1}^{*},\alpha_{k_{0}+2}^{*},\cdots,\alpha_{L}^{*}\) is produced. The backward prediction proceeds in a similar way, but now vector \(\mathbf{\beta}_{k}\) is defined as \(\mathbf{\beta}_{k}^{T}\triangleq[\beta_{k},\beta_{k+1},\cdots,\beta_{k+U-1}]\); this means that at the backward initialization, vector \(\mathbf{\beta}_{k_{0}}\) will take advantage of the availability of values of \(\alpha_{k}^{*}\) that have been already computed, i.e., \(\mathbf{\beta}_{k_{0}}=[\alpha_{k_{0}}^{*},\alpha_{k_{0}+1}^{*},\cdots,\alpha_{k_{0}+U-1}^{*}]^{T}\). The weights vector for the backward prediction \(\mathbf{u}_{k_{0}}\) is initialized as \(\mathbf{u}_{k_{0}}=[1,0,\cdots,0]\). Now this weights vector is updated in the reverse direction: \[\mathbf{u}_{k}=\mathbf{u}_{k+1}+\mu e_{k}\mathbf{\beta}_{k+1},\ \ k=k_{0}-1,\cdots,1, \tag{32}\] and again the sequence \(\{\beta_{k}\}\) containing the regressor is updated by making \(\beta_{k}=\alpha_{k}^{*}\); the backward regressor vector \(\mathbf{\beta}_{k}\) is updated accordingly. The algorithm thus generates the sequence \(\alpha_{k_{0}-1}^{*},\alpha_{k_{0}-2}^{*},\cdots,\alpha_{1}^{*}\). After both forward and backward predictions are finished, the optimal vector is \(\mathbf{\alpha}_{\text{inv}}^{*}=[\alpha_{1}^{*},\cdots,\alpha_{L}^{*}]^{T}\in\mathcal{A}\), where once again we have added the subindex inv to stress the fact that we are dealing with the inverse approach. The same procedure applied to the direct approach will yield an optimal vector \(\mathbf{\alpha}_{\text{dir}}^{*}\).
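The forward pass of this adaptive search can be summarized in a few lines. The sketch below follows (28)-(31) for the inverse approach and reuses the hypothetical `objective_inv` helper; for readability it uses a fixed grid resolution and size around each prediction, whereas the actual neighborhood \(\mathcal{A}_{k}\) is adapted as described next.

```python
# Schematic forward pass of the adaptive per-annulus search, eqs. (28)-(31),
# for the inverse approach. Uses the hypothetical objective_inv() helper and,
# for readability, a fixed neighborhood resolution/size around each prediction
# instead of the adaptive set A_k of eq. (33).
import numpy as np

def adaptive_forward(k_hat, w_tilde, masks, U=6, mu=1.0, lam=0.001, half_size=4,
                     init_grid=np.arange(-0.22, 0.2201, 0.01)):
    L = len(masks)
    phi0 = [objective_inv(k_hat, w_tilde, m, 0.0) for m in masks]
    k0 = int(np.argmax(phi0))                                     # eq. (28)
    alpha_star = {k0: max(init_grid,
                          key=lambda a: objective_inv(k_hat, w_tilde, masks[k0], a))}
    beta = np.zeros(U); beta[-1] = alpha_star[k0]                 # forward regressor
    u = np.zeros(U); u[-1] = 1.0                                   # predictor weights
    for k in range(k0 + 1, L):
        alpha_hat = float(u @ beta)                                # eq. (29)
        grid = alpha_hat + lam * np.arange(-half_size, half_size + 1)
        a_k = max(grid, key=lambda a: objective_inv(k_hat, w_tilde, masks[k], a))
        e_k = a_k - alpha_hat                                      # eq. (30)
        u = u + mu * e_k * beta                                    # LMS update, eq. (31)
        beta = np.roll(beta, -1); beta[-1] = a_k                   # shift the regressor
        alpha_star[k] = a_k
    return alpha_star

# The backward pass mirrors this loop for k = k0-1, ..., 1, following eq. (32).
```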
The pseudo-code for the proposed algorithm is provided in the technical report [19].3

Footnote 3: The code is available at [https://github.com/AMontiB/AdaptivePRNUCameraAttribution](https://github.com/AMontiB/AdaptivePRNUCameraAttribution)

One critical point of the algorithm is the refining of \(\hat{\alpha}_{k}\) that produces \(\alpha_{k}^{*}\). While smarter strategies might be possible, here we perform an exhaustive search around \(\hat{\alpha}_{k}\) in a discrete set \(\mathcal{A}_{k}\). Of course, the cardinality of this set must be kept at a small value in order to limit the computational burden. On the other hand, the discrete points must be generated finely enough to output a value that is sufficiently close to the optimal. We thus employ two parameters to describe the set: \(\lambda_{k}\), which controls the resolution, and \(A_{k}\), an odd integer that determines the number of points. Then, given \(\hat{\alpha}_{k}\) and these parameters, the search set is constructed as: \[\mathcal{A}_{k}=\{\hat{\alpha}_{k}+\lambda_{k}\cdot n:n\in\mathbb{Z}\cap[-(A_{k}-1)/2,(A_{k}-1)/2]\}. \tag{33}\] Note that this construction guarantees that \(|\mathcal{A}_{k}|=A_{k}\). The parameter \(\lambda_{k}\) is selected to be commensurate with \(|\alpha_{k}^{*}-\alpha_{k-1}^{*}|\) in the forward case (resp. \(|\alpha_{k}^{*}-\alpha_{k+1}^{*}|\) in the backward case), so that the smaller the change in \(\alpha_{k}^{*}\), the finer the grid. In Sect. IV-G we give more details about the rules that were employed to generate \(\lambda_{k}\) for the experiments. Regarding the size of the set \(A_{k}\), this is updated in the same loop as the predictor; for the forward predictor, the rule is as follows: if for index \(k\) the maximum \(\alpha_{k}^{*}\) is found at one of the extremes of the set \(\mathcal{A}_{k}\) (i.e., \(\alpha_{k}^{*}=\hat{\alpha}_{k}-\lambda_{k}\cdot(A_{k}-1)/2\) or \(\alpha_{k}^{*}=\hat{\alpha}_{k}+\lambda_{k}\cdot(A_{k}-1)/2\)), then the size of the set is increased at the following iteration, i.e., \(A_{k+1}=A_{k}+2\). Otherwise, if \(A_{k}\) is already small, i.e., \(A_{k}=A_{\text{min}}\) for some minimum size \(A_{\text{min}}\), then \(A_{k+1}=A_{\text{min}}\); else (i.e., if the maximum in \(\mathcal{A}_{k}\) is not found at either of the extremes, and the set is large enough), the size is decreased at the following iteration, i.e., \(A_{k+1}=A_{k}-2\). This update is intended to find a compromise between the size of the set and the objective of capturing the optimal \(\alpha_{k}\). For the backward prediction the reasoning is identical, but updating \(A_{k-1}\) from \(A_{k}\) (see [19] for an example of the evolution of \(A_{k}\)).

### _PCE computation for the optimal \(\boldsymbol{\alpha}\)_

As a result of the adaptive algorithm presented in the previous section, it is possible to compute the PCEs that are required in the hypothesis test, that is, \(\text{PCE}_{\text{inv}}(\boldsymbol{\alpha}_{\text{inv}}^{*})\) and \(\text{PCE}_{\text{dir}}(\boldsymbol{\alpha}_{\text{dir}}^{*})\); see the definitions in (12) and (13). In both cases, the numerator and denominator of the PCE are already available, as they are required for the optimization. The only additional computations are simple sums to accumulate the results corresponding to the different annuli.
To see how this is so for the inverse approach, notice first that the right hand side of (13) requires computing the difference \(T_{\boldsymbol{\alpha}^{*}}^{-1}(\mathbf{W})-\overline{T_{\boldsymbol{\alpha}^{*}}^{-1}(\mathbf{W})}\) (cf. the expression of the PCE in (6)), which can be simplified by noticing that: 1) It is reasonable to write \(\overline{T_{\boldsymbol{\alpha}^{*}}^{-1}(\mathbf{W})}\approx T_{\boldsymbol{\alpha}^{*}}^{-1}(\bar{\mathbf{W}})\) because \(T_{\boldsymbol{\alpha}^{*}}^{-1}\) is a geometrical transformation that will not substantially alter the mean value of the residual.4 2) Due to zero-meaning on the residual, it is possible to write \(\bar{\mathbf{W}}=\mathbf{0}\). With these considerations, we can write \(T_{\boldsymbol{\alpha}^{*}}^{-1}(\mathbf{W})-\overline{T_{\boldsymbol{\alpha}^{*}}^{-1}(\mathbf{W})}\approx T_{\boldsymbol{\alpha}^{*}}^{-1}(\tilde{\mathbf{W}})\), which is simpler to compute.

Footnote 4: Strict equality does not hold because \(\bar{\mathbf{W}}\) and \(T_{\boldsymbol{\alpha}^{*}}^{-1}(\bar{\mathbf{W}})\) do not have the same support.

With this approximation, the numerator of \(\text{PCE}(\hat{\mathbf{K}}^{\prime},T_{\boldsymbol{\alpha}^{*}}^{-1}(\tilde{\mathbf{W}}))\) can be expanded as follows \[\text{ssq}((\hat{\mathbf{K}}^{\prime},T_{\boldsymbol{\alpha}^{*}}^{-1}(\tilde{\mathbf{W}})))=\sum_{k=1}^{L}\text{ssq}(\Phi_{k,\text{inv}}(\alpha_{k}^{*})). \tag{34}\] Now we can easily identify each of the \(L\) summands in (34) as the numerator of \(\varphi_{k,\text{inv}}(\alpha_{k}^{*})\) in (27), which can be stored during the adaptive optimization process. The denominator of the PCE requires more attention. With the approximation above, this denominator is \(\frac{1}{|\mathcal{I}_{T}\setminus\mathcal{S}|}\sum_{\mathbf{s}\in\mathcal{I}_{T}\setminus\mathcal{S}}(\hat{\mathbf{K}}^{\prime},C(T_{\boldsymbol{\alpha}^{*}}^{-1}(\tilde{\mathbf{W}}),\mathbf{s}))^{2}\), which is nothing but a sample estimate of the variance of the cross-correlation of \(\hat{\mathbf{K}}^{\prime}\) and \(T_{\boldsymbol{\alpha}^{*}}^{-1}(\tilde{\mathbf{W}})\). In [19, Sect. VIII] we derive and discuss a simpler sample estimate that is more statistically efficient (i.e., has a lower variance). This fully justifies the approximation \[\frac{1}{|\mathcal{I}_{T}\setminus\mathcal{S}|}\sum_{\mathbf{s}\in\mathcal{I}_{T}\setminus\mathcal{S}}(\hat{\mathbf{K}}^{\prime},C(T_{\boldsymbol{\alpha}^{*}}^{-1}(\tilde{\mathbf{W}}),\mathbf{s}))^{2}\approx\kappa\cdot\hat{\sigma}_{K^{\prime}}^{2}\cdot\sum_{k=1}^{L}\mathsf{E}_{k,\text{inv}}(\alpha_{k}^{*}), \tag{35}\] where \(\hat{\sigma}_{K^{\prime}}^{2}\doteq\|\hat{\mathbf{K}}^{\prime}\|^{2}/|\mathcal{I}|\) (recall that \(\hat{K}_{i,j}^{\prime}\) exists for all \((i,j)\in\mathcal{I}\)), and \(\kappa\) is a factor that takes into account the fact that the cardinalities of \(\mathcal{I}\) and \(\bigcup_{k=1}^{L}\mathcal{Q}_{k,\text{inv}}(\alpha_{k}^{*})\) are different. (In practice, \(\kappa\) will be close to \(1\), so it can be dropped.) Once again, the \(L\) summands in (35) are already available as the denominator of (27).

### _Early stopping_

The partition into annuli offers one remarkable byproduct: taking inspiration from [21], it is possible to stop processing annuli (and declare that \(H_{1}\) holds) if a cumulative PCE exceeds a predefined threshold.
Following the approximations in the previous subsection, one might be tempted to compute a cumulative PCE by using the numerators and denominators already produced during the optimization. In this way, the optimization would not need to be carried out for all annuli but instead it could be stopped as soon as the PCE computed so far exceeds the threshold. Unfortunately, this approach would be incorrect, because while a fraction with sums in the numerator can be expanded into a sum of fractions, this is not the case when there are sums in the denominator. Therefore, if we want to implement an early stopping mechanism, we need to seek ways to further approximate the denominator of the PCE without actually computing all the elements of \(\boldsymbol{\alpha}^{*}\). To this end, we can ask ourselves how sensitive the right hand side of (35) is to changes in \(\alpha_{k}^{*}\); after all, since each of the \(L\) summands is an estimate of the variance of the transformed residual inside an annulus, one would expect not much variation for realistic values of \(\alpha\). If this were the case, then one might approximate the right hand side of (35) (which corresponds to the optimal vector \(\boldsymbol{\alpha}_{\text{inv}}^{*}\)) by computing it for any reasonable value of \(\boldsymbol{\alpha}_{\text{inv}}\) without involving any optimization. In order to illustrate the feasibility of this approximation, we show in Fig. 6 the values of the sample variance of a transformed residual computed in each annulus, i.e., \(\frac{\mathsf{E}_{k,\text{inv}}(\alpha_{k})}{|\mathcal{Q}_{k,\text{inv}}(\alpha_{k})|}\), as a function of \(\alpha_{k}\) for several annuli (i.e., \(k=18,22,33\)) and for cubic inverse mappings, see (20). Fig. 6 also shows the value of the variance estimated from the full-size transformed residual, i.e., \(\frac{1}{|\mathcal{I}_{T}|}\|T_{\alpha_{k}}^{-1}(\tilde{\mathbf{W}})\|^{2}\). Bi-linear interpolation is used in all cases. As we can see, the variance estimate is fairly constant for different values of \(\alpha_{k}\), except in a neighborhood of zero. Moreover, this is similar to the variance estimate obtained from the whole transformed residual, so the latter can be used in place of the variance estimate for a specific annulus.

Figure 6: Sample variance of the transformed residual for different annuli and different values of \(\alpha\). Camera and parameters are the same as in Fig. 1.

The reason for the spike at \(\alpha_{k}=0\) is that the interpolation that is needed for computing the inverse mapping when \(\alpha_{k}\neq 0\) produces a reduction in the variance of the transformed residual. This reduction depends on the square magnitude of the interpolation filter at different sampling points. In general, the grids before and after the interpolation are not related through rational numbers, but for certain rings and values of \(\alpha\), moiré patterns between the sampling grids may appear; this is why in Fig. 6 a ripple near zero is observed for the rings \(k=18,22\). The energy reduction phenomenon has been reported in [22] in a different, though related, scenario. The invariance discussed in the previous paragraph suggests several ways of approximating the right hand side of (35); for instance, it is possible to pick any value of \(\alpha\), say \(\alpha_{\text{f}}\), sufficiently far from \(\alpha=0\) and for all the annuli use the same transformation \(T_{\alpha_{\text{f}}}^{-1}\) in place of \(T_{\alpha_{k}^{*}}^{-1}\).
We remark that the reason why the neighborhood of \(\alpha=0\) should be excluded when selecting \(\alpha_{\text{f}}\) is the fact that inside such a neighborhood the denominator of the PCE is overestimated and, consequently, the PCE underestimated. Another way of approximating the right hand side of (35), which offers a slightly better performance than the former, is to use the values of \(\alpha_{k}^{*}\) already available from the optimization to update the approximation. This comes at practically no cost because the corresponding term \(\mathsf{E}_{k,\text{inv}}(\alpha_{k}^{*})\) needs to be computed anyway during the optimization. For those annuli whose \(\alpha_{k}^{*}\) is not available yet, the corresponding term is substituted by its approximation computed at \(\alpha_{k}=\alpha_{\text{f}}\). We explain next how to compute the Cumulative PCE at the \(n\)th iteration, which we will denote by \(\text{CPCE}_{n,\text{inv}}(\hat{\mathbf{K}}^{\prime},\mathbf{W})\). First, we need a mapping \(\xi:\{1,\cdots,L\}\to\{1,\cdots,L\}\), from the natural order to the one induced by the proposed iterative procedure, i.e., \(\xi(1)\mapsto k_{0},\xi(2)\mapsto k_{0}+1,\cdots,\xi(L-k_{0}+1)\mapsto L,\xi(L-k_{0}+2)\mapsto k_{0}-1,\cdots,\xi(L)\mapsto 1\). Then, \[\text{CPCE}_{n,\text{inv}}(\hat{\mathbf{K}}^{\prime},\mathbf{W})\doteq\frac{\sum_{k=\xi(1)}^{\xi(n)}\text{ssq}(\Phi_{k,\text{inv}}(\alpha_{k}^{*}))}{\hat{\sigma}_{K^{\prime}}^{2}\left(\sum_{k=\xi(1)}^{\xi(n)}\mathsf{E}_{k,\text{inv}}(\alpha_{k}^{*})+\sum_{k=\xi(n+1)}^{\xi(L)}\mathsf{E}_{k,\text{inv}}(\alpha_{\text{f}})\right)}. \tag{36}\] Thus, the early-stopping algorithm will declare a match and stop if for some \(n=1,\cdots,L\), \(\text{CPCE}_{n,\text{inv}}(\hat{\mathbf{K}}^{\prime},\mathbf{W})>\tau_{c}\) is satisfied. The value of \(\tau_{c}\) is set experimentally to achieve the desired False Positive Rate (FPR). Given the numerator and denominator of \(\text{CPCE}_{n,\text{inv}}(\hat{\mathbf{K}}^{\prime},\mathbf{W})\), and once \(\alpha_{\xi(n+1)}^{*}\) is available, the numerator of \(\text{CPCE}_{n+1,\text{inv}}(\hat{\mathbf{K}}^{\prime},\mathbf{W})\) is updated by adding \(\text{ssq}(\Phi_{\xi(n+1),\text{inv}}(\alpha_{\xi(n+1)}^{*}))\), while the update of the denominator requires adding \(\mathsf{E}_{\xi(n+1),\text{inv}}(\alpha_{\xi(n+1)}^{*})\) and subtracting \(\mathsf{E}_{\xi(n+1),\text{inv}}(\alpha_{\text{f}})\). A similar definition follows for the Cumulative PCE in the direct approach \(\text{CPCE}_{n,\text{dir}}(\hat{\mathbf{K}}^{\prime},\mathbf{W})\) and the corresponding early stopping criterion.

### _Parameter inheritance_

As we have discussed, the test decision statistic takes the maximum of the PCEs computed through the direct and the inverse approaches. This implies that it is necessary to compute the optimal vector \(\boldsymbol{\alpha}^{*}\) for both approaches, so the computational complexity is roughly doubled. This also holds if the early stopping criterion introduced above is applied. In such a case, the iterations for both the direct and the inverse approaches are made in parallel, so that for every \(n\) both \(\text{CPCE}_{n,\text{dir}}(\hat{\mathbf{K}}^{\prime},\mathbf{W})\) and \(\text{CPCE}_{n,\text{inv}}(\hat{\mathbf{K}}^{\prime},\mathbf{W})\) are checked against the threshold in order to stop as soon as possible. There is one sub-optimal way to alleviate the computational burden due to keeping the two approaches.
We term it _parameter inheritance_; it basically consists in using for the direct approach the same vector \(\boldsymbol{\alpha}^{*}\) that was computed for the inverse approach. Of course, the latter is not necessarily optimal for the direct approach, but the rationale is that inside each annulus \(\mathcal{R}_{k}\) the direct and inverse transformations nearly correspond to each other for the same value of \(\alpha_{k}\). Perfect correspondence does not exist because the inverse transformation is only an approximation and because the search algorithm is prone to errors caused by noise and insufficient resolution.

### _Parameter default values_

In this section we provide the default values for the parameters of our algorithm and discuss some decisions regarding the initialization. These default values were used in the experiments reported in Sect. V. Specifically, the radius of the inner disk (i.e., the outer radius of the first, degenerate annulus) is set to 250 pixels, and the width of each remaining annulus \(\Delta_{k}\) is such that \(\Delta_{k}\cdot D_{2}\) equals 64 pixels. Both values are chosen as a compromise between performance and computational cost. For the linear predictor we set \(U=6\), \(\mu=1\) and \(A_{\text{min}}=7\). The initial search set \(\mathcal{A}_{k_{0}}\) is given by \(\mathcal{A}_{k_{0}}=\{-0.22,-0.21,\cdots,0.21,0.22\}\), which is the same range as used and justified in [8] to cover a variety of barrel and pincushion distortions. However, in our case we apply a coarser resolution for computational reasons and because the adaptive nature of our algorithm automatically adjusts to finer resolutions after a few iterations. We are aware that in [9] a wider range was preferred (even if just to invert pincushion distortions), so we carried out some experiments with images taken with the Canon 1200D camera and radially corrected with Adobe Lightroom using the lens distortion model of a different device (see Section V), since this combination produces some of the strongest and most variable radial corrections of our dataset. In these experiments, the search set was expanded to \(\mathcal{A}_{k_{0}}=\{-0.50,-0.49,\cdots,0.49,0.50\}\). While it is true that this set allows in some cases to get closer to the proper \(\alpha_{k_{0}}\), we found no significant differences in terms of performance with respect to the previous initialization; as mentioned, this is due to our algorithm quickly finding the right range for \(\alpha_{k}\) after a few iterations. In contrast, the computational load of using the enlarged search set would be larger; for this reason, we recommend \(\mathcal{A}_{k_{0}}=\{-0.22,-0.21,\cdots,0.21,0.22\}\). For an in-depth complementary discussion on the initial set, please see [19]. After the initial search, for the forward prediction \(\mathcal{A}_{k_{0}+1}\) is given by (33) with \(\lambda_{k_{0}+1}=0.001\) and \(A_{k_{0}+1}=9\). For the following iterations, \[\lambda_{k}=\begin{cases}0.1&\text{if }|\alpha_{k}-\alpha_{k-1}|>0.1,\\ 0.01&\text{if }0.01<|\alpha_{k}-\alpha_{k-1}|\leq 0.1,\\ 0.001&\text{if }|\alpha_{k}-\alpha_{k-1}|\leq 0.01.\end{cases} \tag{37}\] Identical considerations to the previous paragraph are made in regard to the backward prediction, where \(k_{0}+1\) is now replaced by \(k_{0}-1\) and in (37) \(k-1\) is replaced by \(k+1\).

## V Experimental Results

In order to measure the performance of the methods presented in Sect.
IV and compare them with the state of the art in [8] and [9], we built a test dataset composed of 3645 images, of which 2037 were taken with the following compact cameras and radially corrected "in-camera" (i.e., by the camera software): Canon SX230 HS (188 images), Panasonic ZS7 (170 images), Canon SX40 (57 images), Canon SX210 (82 images), and Nikon S9100 (1540 images). All these images were downloaded from Flickr, as done in [8] and [9]; for this reason, there is an uneven distribution of images per device. 1508 of the remaining images in the test dataset were taken with the Canon 1200D (a reflex camera not applying any type of in-camera post-processing) with the following Canon Zoom Lenses: 1) EF-S 10-18 mm 1:4-5.6 IS STM; 2) EF-S 18-55 mm 1:3.5-5.6; 3) EF 75-300 mm 1:4-5.6, all radially corrected "out-camera" with third-party editing software: Adobe Lightroom Classic CC 2017, Adobe Photoshop CC 2017, PT Lens v2.0 (Macbook) and Gimp 2.10.14. Specifically, 377 images were corrected with each of these tools. With Adobe Lightroom we applied the correction model specific to the lens used to take the picture, thanks to the database of radial correction models Lightroom is equipped with. For the other editing software we applied the strongest radial correction available, as those tools cannot be tuned to a specific lens model. The last 100 images in the test dataset were also taken with the Canon 1200D camera but corrected with Lightroom using models for other lenses (i.e., Nikon, Tamron, Apple, Huawei and DJI, with 20 images each), always applying the strongest radial correction. This latter subset will be labeled as "Lightroom*" in the following. Images in the test dataset were JPEG compressed with quality factors (QFs) in the range 90-98. For each device, the same QF is consistently used; see [19] for details. The reference PRNUs for carrying out the tests were estimated for each device using (4) with \(L=20\) natural images (not used for testing) compressed with QFs matching those of the test subset of that device. For the compact devices, since the in-camera corrections depend on the focal length, fixed specific values of the latter were sought in order to estimate the respective PRNUs; whenever enough images were available for a certain device and focal length, a different fingerprint was estimated and the results averaged for each device. In all cases, hypothesis \(H_{1}\) was tested with images taken with focal lengths different from those used to estimate the fingerprints. We refer the reader to [19] for full details. When, under hypothesis \(H_{0}\), the test images and the fingerprints have different sizes, we crop the central part of the largest to match its size to the smallest [7]. Next, we describe the identifiers used to refer to the different variants of our method in the figures and tables in this section. With "Dir" and "Inv" we indicate those cases where \(\text{CPCE}_{n,\text{dir}}(\hat{\mathbf{K}}^{\prime},\mathbf{W})\) and \(\text{CPCE}_{n,\text{inv}}(\hat{\mathbf{K}}^{\prime},\mathbf{W})\) are respectively used as the only test statistics. By "2W" we refer to the "two-way" case in which both the direct and the inverse approaches are used and \(H_{1}\) is decided if either \(\text{CPCE}_{n,\text{dir}}(\hat{\mathbf{K}}^{\prime},\mathbf{W})\) or \(\text{CPCE}_{n,\text{inv}}(\hat{\mathbf{K}}^{\prime},\mathbf{W})\) is above the threshold for any \(n\in\{1,\cdots,L\}\).
To alleviate the computational load of the "two-way" parameter optimization, recall that in Sect. IV-F we proposed to let one approach inherit the parameters of the other. We will use the label \(\widehat{\text{DI}}\) to indicate inheritance of \(\alpha_{n}^{*}\) from the direct approach to the inverse one, and \(\widehat{\text{ID}}\) for the opposite direction. On the other hand, with the labels "Cub" and "Lin" we refer to the cubic and the linear radial correction models, respectively; see (20). In all reported cases, the early stopping strategy from Sect. IV-E is imposed. All the tests were run on a server with the following characteristics: 16 cores (2x Xeon E5-2667v3 at 3.2 GHz) and 192 GB of RAM; our implementation requires at most 5 GB of RAM. In experimentally comparing the variants of our method with the algorithms proposed in [8] and [9], we noticed that [8] was tested on images of size \(3000\times 4000\) that are, on average, larger than the in-camera corrected images in our dataset (refer to Table II for the image sizes in each subset). This explains the slightly worse performance measured here (with downsampling) compared to that reported in [8]. In Table I we provide the fixed thresholds \(\tau_{e}\) (measured over the entire test dataset) that ensure False Positive Rates (FPR) of \(0.05\) and \(0.01\) together with the corresponding True Positive Rates (TPR) for the different variants of our method and those in [9] and [8] (with and without DS). A breakdown by subset of the previous table is given in Table II, which also shows the time consumed to declare a match (under \(H_{1}\)) by the different alternatives (with early stopping in those cases where it applies). For reasons of space, we have excluded the method in [9], which yields a modest performance, as well as the worst-performing variants of our method (cf. Table I); see [19] for fully comprehensive results. \begin{table} \begin{tabular}{c|c|c|c|c} & \(\tau_{0.05}\) & \(\tau_{0.01}\) & TPR\({}_{0.05}\) & TPR\({}_{0.01}\) \\ \hline \(\widehat{\text{DI}}\), Lin & 98.86 & 112.71 & 0.96 & 0.94 \\ \(\widehat{\text{ID}}\), Lin & 90.01 & 105.63 & 0.97 & 0.95 \\ \(\widehat{\text{DI}}\), Cub & 73.48 & 90.28 & 0.99 & 0.98 \\ \(\widehat{\text{ID}}\), Cub & 71.13 & 84.75 & 0.99 & 0.98 \\ Inv, Lin & 97.66 & 111.72 & 0.93 & 0.91 \\ Dir, Lin & 90.01 & 105.63 & 0.95 & 0.92 \\ Inv, Cub & 73.48 & 90.20 & 0.98 & 0.98 \\ Dir, Cub & 71.13 & 84.57 & 0.99 & 0.98 \\ 2W, Cub & 71.12 & 84.34 & 0.99 & 0.99 \\ \([8]\) & 4.81 & 8.36 & 0.86 & 0.82 \\ \([8]\) no DS & 7.65 & 10.54 & 0.94 & 0.93 \\ \([9]\) & 2.83 & 5.50 & 0.78 & 0.73 \\ \end{tabular} \end{table} Table I: Thresholds required to achieve FPR=0.05 and FPR=0.01 and corresponding TPRs for different methods and variants. The Receiver Operating Characteristic (ROC) curves for the variants in Table II are plotted in Fig. 7, where we have also added for comparison the baseline (BL) obtained by using \(\text{PCE}(\hat{\mathbf{K}}^{\prime},\mathbf{W})\) (i.e., with no transformations of either the PRNU or the residual) as test statistic. From the results in Fig. 7 and Table I, it is possible to conclude that the best performing variants of our method correspond to the cubic correction model ("Cub"), with "\(\widehat{\text{DI}}\)", "Dir", "Inv" and "2W" all achieving similar TPRs for the target FPRs. On the other hand, the average execution time of the "one way" variants, i.e. "Dir" and "Inv", is lower because only one statistic has to be computed per iteration.
The experiments, conducted on a dataset composed of a variety of radial corrections, show that our variants outperform [8] (both with and without DS) in terms of TPR. Moreover, thanks to our early stopping strategy, our fastest versions (i.e., "Dir, Cub" and "Inv, Cub") achieve under \(H_{1}\) execution times that are comparable to [8] with DS. We also note that the original solution proposed in [8] (i.e. with DS) achieves a limited performance both on low-resolution devices (i.e. SX230 and ZS7) and in the presence of complex out-camera radial corrections such as those applied by Adobe Lightroom. Adapting [8] to avoid DS results in a significant performance increase in those difficult cases, at the expense of a much more costly execution. Nevertheless, for some severe radial corrections like those in our "Lightroom" subset, using the full resolution in [8] is still not sufficient. In contrast, our method is able to adapt to this high complexity and offers an excellent performance with an affordable execution time. ## VI Conclusions In this paper, we have proposed an adaptive method for PRNU-based camera attribution that is able to cope with complex radial distortion corrections, such as those performed in-camera by most compact models and out-camera by image processing software. Existing approaches try to either "correct" the reference fingerprint or invert the correction by applying a further geometric transformation that, in order to avoid a combinatorial explosion, must use a reduced number of parameters. In turn, this limitation accounts for unsatisfactory performance when complex radial distortion corrections are in effect, an undesirable aspect in view of the trend of more elaborate transformations that are made possible by ever more powerful distortion correction firmware/software. Our approach is radically different: by applying a divide-and-conquer principle, embodied in the use of annuli, we are able to: 1) allow for complex distortion corrections, as locally the transformation undergone by each annulus is much simpler; 2) implement an early stopping strategy that offers large computational savings. The results presented in the paper clearly reveal that our algorithm (in most of its variants) outperforms the state of the art when accuracy and computational load are considered. We believe that the adaptive approach proposed here could also be fruitful in other very challenging camera attribution scenarios with a number of latent parameters, such as in HDR images [7], in-camera-stabilized videos [23], and emerging in-camera processing [24]. ## Appendix A Derivation of the estimator of \(\alpha_{k}^{*}\) In this Appendix we derive a plausible estimator of \(\alpha_{k}^{*}\) under the inverse approach; the derivation would be identical for the direct approach and, hence, is skipped here. See the definition of \(\alpha_{k}^{*}\) in (26). We introduce a super-index in \(\alpha_{k}\) to enumerate the elements of the candidate set \(\mathcal{A}_{k}\), i.e., \(\{\alpha_{k}^{(n)}:n=1,\cdots,A_{k}\}=\mathcal{A}_{k}\).
We assume that \(H_{1}\) holds, i.e., \(\mathbf{I}\) contains \(\mathbf{K}^{\prime}\), and the following model for the residuals: \[\big[T_{\alpha_{k}^{(n)}}^{-1}(\tilde{\mathbf{W}})\big]_{i,j}=\gamma_{i,j}^{(n)}\big[T_{\alpha_{k}^{(n)}}^{-1}\big(T_{\alpha_{k}^{\dagger}}(\hat{\mathbf{K}}^{\prime})\big)\big]_{i,j}+N_{i,j}^{(n)} \tag{38}\] for all \((i,j)\in\mathcal{Q}_{k,\text{inv}}(\alpha_{k}^{(n)})\) and \(n\in\{1,\cdots,A_{k}\}\). In (38) \(\alpha_{k}^{\dagger}\) represents the _true_ (locally for the \(k\)th annulus) value of \(\alpha\). The multipliers \(\gamma_{i,j}^{(n)}\) are non-negative and take into account both the multiplicative effect of the image \(\mathbf{I}\) and the gain of the effective denoising filter (which also impacts the estimate \(\hat{\mathbf{K}}^{\prime}\) of the true PRNU). We argue that these multipliers are very hard to estimate accurately; as a consequence, a full maximum likelihood decision will not be possible and some simplifications will be required. One such simplification is to consider that the cross-correlations between \(\hat{\mathbf{K}}^{\prime}\) and \(T_{\alpha_{k}^{(n)}}^{-1}(\tilde{\mathbf{W}})\), for all \(n=1,\cdots,A_{k}\), constitute a set of sufficient statistics for the estimation problem. Recall from (24) that these cross-correlations are denoted by \(\Phi_{k,\text{inv}}(\alpha_{k}^{(n)})\). We make the following hypotheses: _1) Spikiness:_ The \(\alpha_{k}^{(n)}\) are sufficiently separated so that the \(\Phi_{k,\text{inv}}(\alpha_{k}^{(n)})\) are mutually uncorrelated and \(\mathbb{E}\{\Phi_{k,\text{inv}}(\alpha_{k}^{(n)})\}=0\), for all \(n=1,\cdots,A_{k}\), except for \(n=l\), where \(l\) is such that \(\alpha_{k}^{(l)}\) is the closest to the true value \(\alpha_{k}^{\dagger}\) and the expectation is taken over the underlying distribution of \(\mathbf{K}^{\prime}\). Figure 7: ROCs obtained with the variants of our method, [8] (with and without DS) and [9] on the test dataset. For a better visualization, the zoomed-in box corresponding to low FPRs uses a log scale on the x-axis. This hypothesis is reasonable in view of the spikiness of the PCE with \(\alpha\) (see Fig. 2). We also assume that \(\alpha_{k}^{(l)}\) is close enough to \(\alpha_{k}^{\dagger}\) so that \(\big[T_{\alpha_{k}^{(l)}}^{-1}\big(T_{\alpha_{k}^{\dagger}}(\hat{\mathbf{K}}^{\prime})\big)\big]_{i,j}\approx\hat{K}_{i,j}^{\prime}\) for all \((i,j)\in\mathcal{Q}_{k,\text{inv}}(\alpha_{k}^{(l)})\). _2) Uncorrelatedness:_ In (38), \(N_{i,j}^{(n)}\) and \(\gamma_{i,j}^{(n)}\big[T_{\alpha_{k}^{(n)}}^{-1}\big(T_{\alpha_{k}^{\dagger}}(\hat{\mathbf{K}}^{\prime})\big)\big]_{i,j}\) are zero-mean and mutually uncorrelated for all \((i,j)\in\mathcal{Q}_{k,\text{inv}}(\alpha_{k}^{(n)})\) and \(n\in\{1,\cdots,A_{k}\}\). For any \(l,n\in\{1,\cdots,A_{k}\}\), \(l\neq n\), the variables \(\hat{K}_{i,j}^{\prime}\cdot N_{i,j}^{(n)}\) and \(\hat{K}_{u,v}^{\prime}\cdot N_{u,v}^{(l)}\) are mutually uncorrelated for every \((i,j)\in\mathcal{Q}_{k,\text{inv}}(\alpha_{k}^{(n)})\) and every \((u,v)\in\mathcal{Q}_{k,\text{inv}}(\alpha_{k}^{(l)})\). _3) Weak PRNU:_ In (38), \(\left|\gamma_{i,j}^{(n)}\big[T_{\alpha_{k}^{(n)}}^{-1}\big(T_{\alpha_{k}^{\dagger}}(\hat{\mathbf{K}}^{\prime})\big)\big]_{i,j}\right|\ll\left|N_{i,j}^{(n)}\right|\) for a large number of pixels of each annulus; we write this more precisely as \[\sum_{(i,j)\in\mathcal{Q}_{k,\text{inv}}(\alpha_{k}^{(n)})}\left(\gamma_{i,j}^{(n)}\right)^{2}\big[T_{\alpha_{k}^{(n)}}^{-1}\big(T_{\alpha_{k}^{\dagger}}(\hat{\mathbf{K}}^{\prime})\big)\big]_{i,j}^{2}\ll\sum_{(i,j)\in\mathcal{Q}_{k,\text{inv}}(\alpha_{k}^{(n)})}\left(N_{i,j}^{(n)}\right)^{2}. \tag{39}\]
Under these hypotheses, estimating \(\alpha_{k}^{*}\) amounts to selecting the candidate that maximizes a suitable functional \(\psi^{(n)}\) of the cross-correlations, where the latter are modeled as Gaussian: \(\Phi_{k,\text{inv}}(\alpha_{k}^{(n)})\sim\mathcal{N}(0,(\sigma^{(n)})^{2})\) for \(n\neq l\), and \(\Phi_{k,\text{inv}}(\alpha_{k}^{(l)})\sim\mathcal{N}(\mu^{(l)},(\sigma^{(l)})^{2})\). The optimal functional involves the means \(\mu^{(n)}\) and standard deviations \(\sigma^{(n)}\), which are hard to estimate in practice; however, assuming that they vary little across the candidate set, we can think of replacing \(\mu^{(l)}\) and \(\mu^{(l)}/\sigma^{(l)}\) in this functional by their respective means. This yields the simplified functional \[\psi^{\prime(l)}\doteq\Phi_{k,\text{inv}}(\alpha_{k}^{(l)})/(\sigma^{(l)})^{2} \tag{46}\] to be used in the maximization. After replacing \((\sigma^{(l)})^{2}\) in (46) by its estimator \(\hat{\sigma}_{\hat{K}^{\prime}}^{2}\mathcal{E}_{k,\text{inv}}(\alpha_{k}^{(l)})\), and dropping \(\hat{\sigma}_{\hat{K}^{\prime}}^{2}\) because it is independent of \(l\), we obtain the proposed (27). It is interesting to evaluate the loss of performance that results when using (46) instead of the optimal functional \(\psi^{(l)}\). We do so by assuming w.l.o.g. that \(\mathcal{E}_{l}\) holds (i.e., that the \(l\)th candidate is the one closest to the true value \(\alpha_{k}^{\dagger}\)) and estimating the probabilities that a given \(n\in\{1,\cdots,A_{k}\}\), \(n\neq l\), produces a larger value than \(n=l\) in \(\psi^{(n)}\) and \(\psi^{\prime(n)}\), respectively. Then, we compare the two resulting probabilities in terms of the effective signal-to-noise ratios (SNR). Under the Gaussian model above, when \(\mathcal{E}_{l}\) holds, \(\psi^{(l)}\sim\mathcal{N}\left((\mu^{(l)})^{2}/(2(\sigma^{(l)})^{2}),(\mu^{(l)})^{2}/(\sigma^{(l)})^{2}\right)\) and \(\psi^{(n)}\sim\mathcal{N}\left(-(\mu^{(n)})^{2}/(2(\sigma^{(n)})^{2}),(\mu^{(n)})^{2}/(\sigma^{(n)})^{2}\right)\), \(n\neq l\). Since \(\psi^{(l)}\) and \(\psi^{(n)}\) are independent, the probability that \(\psi^{(n)}\geq\psi^{(l)}\) when \(\mathcal{E}_{l}\) holds is the probability that the random variable \(\psi^{(l)}-\psi^{(n)}\) is less than zero. And since both variables are Gaussian, so is their difference. Therefore, \(\psi^{(l)}-\psi^{(n)}\sim\mathcal{N}(\omega_{n,l}/2,\omega_{n,l})\), where \[\omega_{n,l}\doteq\frac{(\mu^{(l)})^{2}}{(\sigma^{(l)})^{2}}+\frac{(\mu^{(n)})^{2}}{(\sigma^{(n)})^{2}}. \tag{47}\] If we define the effective SNR as the ratio between the squared mean and the variance of \(\psi^{(l)}-\psi^{(n)}\), then we find that \(\text{SNR}_{\psi}=\omega_{n,l}/4\), where the subindex \(\psi\) indicates that we are using the optimal functional \(\psi^{(n)}\).
For the simplified estimator in (46), a similar derivation shows that \[\psi^{\prime(l)}-\psi^{\prime(n)}\sim\mathcal{N}\left(\frac{\mu^{(l)}}{(\sigma^{(l)})^{2}},\left[\frac{1}{(\sigma^{(l)})^{2}}+\frac{1}{(\sigma^{(n)})^{2}}\right]\right) \tag{48}\] for which the effective SNR, denoted as \(\text{SNR}_{\psi^{\prime}}\), is now \[\text{SNR}_{\psi^{\prime}}=\frac{(\mu^{(l)})^{2}/(\sigma^{(l)})^{2}}{\frac{(\sigma^{(l)})^{2}}{(\sigma^{(n)})^{2}}+1}. \tag{49}\] In order to compare the effective SNRs, we compute their ratio: \[\frac{\text{SNR}_{\psi}}{\text{SNR}_{\psi^{\prime}}}=\frac{1+\left(\frac{\mu^{(n)}}{\mu^{(l)}}\right)^{2}\cdot\left(\frac{\sigma^{(l)}}{\sigma^{(n)}}\right)^{2}}{4}\cdot\left(\left(\frac{\sigma^{(l)}}{\sigma^{(n)}}\right)^{2}+1\right). \tag{50}\] To get a cleaner interpretation of this result, we can further assume that both \(\mu^{(n)}\) and \((\sigma^{(n)})^{2}\) are proportional to the cardinality of the support set \(|\mathcal{Q}_{k,\text{inv}}(\alpha_{k}^{(n)})|\). This way, if we let \(\beta_{n,l}\doteq|\mathcal{Q}_{k,\text{inv}}(\alpha_{k}^{(n)})|/|\mathcal{Q}_{k,\text{inv}}(\alpha_{k}^{(l)})|\), we can write that \(\mu^{(n)}/\mu^{(l)}=\beta_{n,l}\) and \((\sigma^{(l)})^{2}/(\sigma^{(n)})^{2}=\beta_{n,l}^{-1}\). Then, substituting into (50) we find that \[\frac{\text{SNR}_{\psi}}{\text{SNR}_{\psi^{\prime}}}=\frac{(1+\beta_{n,l})^{2}}{4\beta_{n,l}}=1+\frac{(1-\beta_{n,l})^{2}}{4\beta_{n,l}} \tag{51}\] which is clearly larger than one for all \(\beta_{n,l}>0\), \(\beta_{n,l}\neq 1\). This confirms that, as expected, for any \(\beta_{n,l}\neq 1\) there is a loss of effective SNR with respect to the optimal estimator. However, in practice this loss will be rather small: for instance, if \(|\mathcal{Q}_{k,\text{inv}}(\alpha_{k}^{(n)})|\) is between 20% smaller and 20% larger than \(|\mathcal{Q}_{k,\text{inv}}(\alpha_{k}^{(l)})|\), then the effective SNR of the suboptimal detector is at most 0.054 dB smaller than that of the optimal one. This supports the plausibility of the proposed simplified detector.
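As a quick numerical check of (51), the following short script (a minimal sketch added for convenience, not part of the original work) evaluates the SNR loss over the 20% range mentioned above and reproduces the 0.054 dB figure.

```python
import numpy as np

# SNR_psi / SNR_psi' = 1 + (1 - beta)^2 / (4 * beta), from eq. (51)
betas = np.linspace(0.8, 1.2, 401)                       # |Q(n)| within +/-20% of |Q(l)|
loss_db = 10 * np.log10(1 + (1 - betas) ** 2 / (4 * betas))
print(f"max SNR loss = {loss_db.max():.3f} dB")          # ~0.054 dB, attained at beta = 0.8
```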
2309.10509
Polynomial-time Solver of Tridiagonal QUBO and QUDO problems with Tensor Networks
We present an algorithm for solving tridiagonal Quadratic Unconstrained Binary Optimization (QUBO) problems and Quadratic Unconstrained Discrete Optimization (QUDO) problems with one-neighbor interactions using the quantum-inspired technology of tensor networks. Our method is based on the simulation of a quantum state to which we will apply an imaginary time evolution and perform a series of partial traces to obtain the state of maximum amplitude, since it will be the optimal state. We will also deal with the degenerate case and check the polynomial complexity of the algorithm.
Alejandro Mata Ali, Iñigo Perez Delgado, Marina Ristol Roura, Aitor Moreno Fdez. de Leceta
2023-09-19T10:45:15Z
http://arxiv.org/abs/2309.10509v3
# Polynomial-time Solver of Tridiagonal QUBO and QUDO problems with Tensor Networks ###### Abstract We present an algorithm for solving tridiagonal Quadratic Unconstrained Binary Optimization (QUBO) problems and Quadratic Unconstrained Discrete Optimization (QUDO) problems with one-neighbor interactions using the quantum-inspired technology of tensor networks. Our method is based on the simulation of a quantum state to which we will apply an imaginary time evolution and perform a series of partial traces to obtain the state of maximum amplitude, since it will be the optimal state. We will also deal with the degenerate case and check the polynomial complexity of the algorithm. ## 1 Introduction Quadratic Unconstrained Binary Optimization (QUBO) [1] problems are a type of combinatorial optimization problem that lies at the intersection of quantum computing and optimization theory. These problems are characterized by their ability to represent a wide variety of complex challenges and their use in industrial and applied fields such as logistics [2][3], engineering [4], physics, biology [5] and economics [6]. In a QUBO problem, one seeks to find an assignment of binary values (0 or 1) to a set of decision variables such that a quadratic function of these variables is minimized. This quadratic function, also known as a cost or energy function, can represent constraints and objectives of a specific problem. Quadratic Unconstrained Discrete Optimization (QUDO) problems are a generalization of the QUBO problems, allowing the variables to take a larger number of integer values. These QUBO and QUDO problems have a degree of complexity too high to be solved classically in an efficient way [7], so they are usually solved with approximate or heuristic methods, such as genetic algorithms [8]. QUBO problems are particularly interesting and relevant in the era of quantum computing because they can take advantage of its properties. However, due to the current state of quantum hardware and its availability, the field of quantum-inspired technologies, which consists of simulating certain quantum properties and taking advantage of them to accelerate calculations, has gained importance. Among the most important are tensor networks [9], which use the algebraic mathematics of quantum systems to simulate them classically and extract certain properties of the simulated systems, being able to implement operations that would not be possible in quantum systems, such as forced post-selection or the application of non-unitary operators. In this paper we will explore how to solve with tensor networks general QUBO and QUDO problems with one-neighbor interactions in an efficient way by means of a sequential tensor network contraction algorithm that simulates an imaginary time evolution and a partial trace. We will also analyze its applicability to degenerate cases and see its computational complexity. ## 2 Description of the problem A general QUBO problem can be expressed by a quadratic cost function to be minimized over a vector \(\vec{x}\) of \(N\) binary components. That is, we look for an optimal \(\vec{x}_{\text{opt}}\) such that \[\vec{x}_{\text{opt}}= \arg\min_{\vec{x}}C(\vec{x}) \tag{1}\] \[C(\vec{x})= \sum_{\begin{subarray}{c}i,j=0\\ i\leq j\end{subarray}}^{N-1}w_{ij}x_{i}x_{j},\qquad x_{i}\in\{0,1\}, \tag{2}\] where \(w_{ij}\) are the elements of the weight matrix \(w\) of the problem.
The diagonal elements \(w_{ii}\) are the local terms and the non-diagonal elements \(w_{ij}\) are the interaction terms. In the QUDO case, the problem is analogous, except that the components of \(\vec{x}\) are integers in a certain range, not only \(0\) or \(1\). This is \[\vec{x}_{\text{opt}}= \arg\min_{\vec{x}}C(\vec{x}) \tag{3}\] \[C(\vec{x})= \sum_{\begin{subarray}{c}i,j=0\\ i\leq j\end{subarray}}^{N-1}w_{i,j,x_{i},x_{j}},\qquad x_{i}\in\{0,1,\ldots,D_{i}-1\}, \tag{4}\] where \(w_{i,j,x_{i},x_{j}}\) are the elements of the weight tensor \(w\) of the problem, which depend on the state of the variables and their positions in the solution, and \(D_{i}\) is the number of possible values of the variable \(x_{i}\). A special case of interest is the case of nearest neighbor interaction in a linear chain, which can be understood as the Ising model in one dimension. In this problem, each variable interacts only with the variable immediately before it and the one immediately after it. Therefore, our QUBO problem simplifies to \[\vec{x}_{\text{opt}}= \arg\min_{\vec{x}}C(\vec{x}) \tag{5}\] \[C(\vec{x})= \sum_{i=0}^{N-1}w_{i,i}x_{i}+\sum_{i=0}^{N-2}w_{i,i+1}x_{i}x_{i+1},\quad x_{i}\in\{0,1\}, \tag{6}\] which implies that \(w\) is a tridiagonal matrix. In the QUDO case, the analogous problem would be \[\vec{x}_{\text{opt}}= \arg\min_{\vec{x}}C(\vec{x}) \tag{7}\] \[C(\vec{x})= \sum_{i=0}^{N-1}w_{i,i,x_{i},x_{i}}+\sum_{i=0}^{N-2}w_{i,i+1,x_{i},x_{i+1}},\qquad x_{i}\in\{0,1,\ldots,D_{i}-1\}, \tag{8}\] which could be considered a kind of generalized travelling salesman problem without restrictions, where the distances change at every step. ## 3 Tridiagonal QUBO tensor network solver First we will create a solver for the tridiagonal QUBO problem and then we will see how to generalize it to the QUDO problem. We use a modified version of the method in [10]. For this, we will use the tensor network of Fig. 1 a, where the '+' nodes are nodes representing a qubit in uniform superposition \((1,1)\) and the \(T\) nodes represent the imaginary time evolution that depends on the state of the two neighboring qubits. The '+' nodes are a representation of the state of each variable \(x_{i}\). It can be visualized as the initialization of a quantum circuit. By performing the tensor product with all these tensors, each one being in uniform superposition, we will have the uniform superposition of all \(2^{N}\) combinations. The \(T\) tensors will be tensors with four 2-dimensional indexes \(i,j,k,l\) (Fig. 2) whose non-zero elements will be those in which \(i=k\) and \(j=l\), so that the state of qubit 1 enters at index \(i\) and exits at index \(k\) and the state of qubit 2 enters at index \(j\) and exits at index \(l\). Figure 1: Tensor networks solving the tridiagonal QUBO problem and the one-neighbor QUDO problem. a) Bilocal version, b) Matrix Product Operator (MPO) version. Figure 2: Tensor indexes. a) \(T\) tensor, b) First \(S\) tensor, c) Intermediate \(S\) tensor, d) Final \(S\) tensor.
The non-zero elements will be \[T^{n}_{ijkl}=\begin{cases}1&\text{if }i=0\ \forall j\\ e^{-\tau w_{n,n}}&\text{if }i=1,j=0\\ e^{-\tau\Omega_{n}}&\text{if }i=1,j=1\end{cases} \tag{9}\] \[T^{N-2}_{ijkl}=\begin{cases}1&\text{if }i,j=0,0\\ e^{-\tau w_{N-2,N-2}}&\text{if }i,j=1,0\\ e^{-\tau w_{N-1,N-1}}&\text{if }i,j=0,1\\ e^{-\tau\left(\Omega_{N-2}+w_{N-1,N-1}\right)}&\text{if }i,j=1,1\end{cases} \tag{10}\] where \(\Omega_{n}=w_{n,n}+w_{n,n+1}\), \(\tau\) is a decay hyperparameter and \(T^{n}\) is the \(T\)-tensor which connects the variables \(n\) and \(n+1\). The goal is for our tensor layer \(T\) to make the state encoded in our tensor network be \[|\psi\rangle=\sum_{\vec{x}}e^{-\tau C(\vec{x})}|\vec{x}\rangle, \tag{11}\] so that the combination with the lowest cost has an exponentially larger amplitude than the other combinations. However, because this state will be a tensor of \(2^{N}\) components, we cannot simply look at which component has the largest amplitude, so we will extract its information in a more efficient way. Let us assume that the amplitude of the lowest cost combination is sufficiently larger than the amplitudes of the other combinations. This is a reasonable assumption, because if we increase \(\tau\), the combinations will change their amplitudes in exponentially different ways. If there is a combination with a sufficiently higher amplitude, then when we add up all the combinations with \(x_{0}=0\) and all the combinations with \(x_{0}=1\) separately, the main contribution will be that of the combination with the highest amplitude. We will call this operation partial trace with \(x_{0}\) free, which returns a vector \(P^{x_{0}}\) with components \[P^{x_{0}}_{i}=\sum_{\begin{subarray}{c}\vec{x}\\ x_{0}=i\end{subarray}}e^{-\tau C(\vec{x})}. \tag{12}\] In this case, if the combination with the largest amplitude has \(x_{0}=0\), then \(P^{x_{0}}_{0}>P^{x_{0}}_{1}\), and in the opposite case, \(P^{x_{0}}_{0}<P^{x_{0}}_{1}\). This is done by connecting a '+' tensor to each of the qubits at the output of the \(T\) tensor layer, except for the \(x_{0}\) qubit. We can optimize the contraction of the tensor network by defining it as shown in Fig. 1 b, where we replace the \(T\)-tensors by a Matrix Product Operator (MPO) layer of \(S\)-tensors that performs exactly the same function, sending signals up and down through its bond indexes \(k,l\). All its indexes will be of dimension 2. These will tell the adjacent \(S\) tensor what the state of its associated qubit is and, depending on the signal they receive from the previous \(S\) tensor and their own qubit, apply a certain evolution in imaginary time. This tensor network is much easier and more straightforward to contract. We will call \(S^{n}\) the \(S\)-tensor connected with the variable \(x_{n}\). Therefore, the non-zero elements of the \(S\)-tensors will be those where \(i=j\) for \(S^{0}\) and \(S^{N-1}\), and \(i=j=l\) for the others. Their values will be \[S^{0}_{ijk}=\begin{cases}1&\text{if }i,k=0,0\\ e^{-\tau w_{0,0}}&\text{if }i,k=1,1\end{cases} \tag{13}\] \[S^{n}_{ijkl}=\begin{cases}1&\text{if }i=0\ \forall k\\ e^{-\tau w_{n,n}}&\text{if }i,k=1,0\\ e^{-\tau\Theta_{n}}&\text{if }i,k=1,1\end{cases} \tag{14}\] \[S^{N-1}_{ijk}=\begin{cases}1&\text{if }i=0\ \forall k\\ e^{-\tau w_{N-1,N-1}}&\text{if }i,k=1,0\\ e^{-\tau\Theta_{N-1}}&\text{if }i,k=1,1\end{cases} \tag{15}\] where \(\Theta_{n}=w_{n,n}+w_{n-1,n}\).
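To make the contraction explicit, the following minimal NumPy sketch (ours, not the authors' implementation; the function and variable names are assumptions) computes the partial-trace vector \(P^{x_{0}}\) of (12) for a tridiagonal QUBO instance by sweeping the chain from the last variable to the first, which is the dense equivalent of contracting the MPO of Fig. 1 b.

```python
import numpy as np

def partial_trace_x0(w, tau):
    """P^{x0} of eq. (12) for a tridiagonal QUBO with weight matrix w (N x N)."""
    N = w.shape[0]
    # m[x_n] accumulates the summed Boltzmann weights of the chain to the right of x_n
    m = np.ones(2)
    for n in range(N - 1, 0, -1):
        new = np.zeros(2)
        for x_prev in (0, 1):
            for x_n in (0, 1):
                cost = w[n, n] * x_n + w[n - 1, n] * x_prev * x_n
                new[x_prev] += np.exp(-tau * cost) * m[x_n]
        m = new
    # include the local term of x0
    return np.array([np.exp(-tau * w[0, 0] * x0) * m[x0] for x0 in (0, 1)])

# x0 is then read off as the component of largest amplitude:
# x0 = int(np.argmax(partial_trace_x0(w, tau)))
```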
With this method we can determine the variable \(x_{0}\); to determine the other components we proceed as follows: 1. Perform the algorithm for the \(x_{0}\) component. 2. For \(n\in[1,N-2]\): carry out the same algorithm, but eliminating the previous \(n-1\) variables and changing the new tensor \(S^{0}\) so that its components are the same as those of the tensor \(S^{1}\) from the previous step with its index \(k\) set to \(x_{n-1}\). This means \[S^{0}_{ijk}=S^{1,\text{previous}}_{i,j,x_{n-1},k}\,. \tag{16}\] The result of the algorithm will be \(x_{n}\). 3. For \(x_{N-1}\), we use classical comparison: if \(x_{N-2}=0\), then \(x_{N-1}=1\) if \(w_{N-1,N-1}<0\) and \(x_{N-1}=0\) if \(w_{N-1,N-1}>0\); if \(x_{N-2}=1\), then \(x_{N-1}=1\) if \(\Theta_{N-1}<0\) and \(x_{N-1}=0\) if \(\Theta_{N-1}>0\). The progressive reduction of the number of represented qubits is possible because, once part of the solution is known, we only need to consider the remaining unknown variables and feed the known information into the system. Redefining the tensor \(S^{0}\) at each step allows the result of the previous step to be taken into account when computing the cost of the pair formed by the variable to be determined and the previous one. ## 4 One-neighbor QUDO problem In this case we are going to solve the one-neighbor QUDO problem using an extended version of the tridiagonal QUBO algorithm. Here we will only have to change our qubit formalism to a qudit formalism and allow the indexes of the tensor network in Fig. 1 to have the dimension that is required for each variable. From here on, the variable \(x_{i}\) will have \(D_{i}\) possible values. The '+' tensors will now each have \(D_{i}\) components, depending on the variable they represent. The same applies to those we use to perform the partial trace. The indexes of the \(S\) tensors will have the following dimensions: * \(S^{0}\): \(i,j,k\) dimension \(D_{0}\). * \(S^{n}\): \(i,j,l\) dimension \(D_{n}\), \(k\) dimension \(D_{n-1}\). * \(S^{N-1}\): \(i,j\) dimension \(D_{N-1}\), \(k\) dimension \(D_{N-2}\). Once again, the elements of the \(S\)-tensors are non-zero when \(i=j\) for \(S^{0}\) and \(S^{N-1}\), and \(i=j=l\) for the others. They are \[S^{0}_{ijk}=e^{-\tau w_{0,0,i,i}}\text{ if }i=k \tag{17}\] \[S^{n}_{ijkl}=e^{-\tau\left(w_{n,n,i,i}+w_{n-1,n,k,i}\right)} \tag{18}\] \[S^{N-1}_{ijk}=e^{-\tau\left(w_{N-1,N-1,i,i}+w_{N-2,N-1,k,i}\right)}. \tag{19}\] As in the QUBO case, the \(S\)-tensors receive the state of the adjacent qudit through the index \(k\) and send theirs through the index \(l\). If we contract this tensor network analogously to the QUBO case, we obtain a vector \(P^{x_{0}}\) of dimension \(D_{0}\). From this we can extract the optimal value of \(x_{0}\) by seeing which component has the largest value. That is, if the vector obtained by the tensor network were \((2,4,27,2,0,1)\), the correct value for \(x_{0}\) would be 2. To obtain the other variables, we perform exactly the same process that we have explained for the QUBO case, changing the last step to a comparison that selects the \(x_{N-1}\) with the lowest cost given the \(x_{N-2}\) we already have. ## 5 Tracing optimization One of the biggest problems we may encounter is to choose a value of \(\tau\) large enough to distinguish the combinations but not so large that the amplitudes go to zero.
For this reason, in practice it is advisable to rescale the \(w\) matrix before starting so that the scale of the minimum costs does not vary too much from one problem to the next and we can leave \(\tau\) constant. To deal with this, an effective way to modify the general algorithm is to initialize in a superposition with complex phases between the base combinations instead of initializing with a superposition with the same phase. In this way, at the time of partial tracing, instead of summing all amplitudes in the same direction, they are summed as 2-dimensional vectors. This lets the sub-optimal states partially cancel each other, making the maximum easier to see. We add these phases by making the initialization '+' tensors, instead of \((1,1,\ldots,1)\), equal to \((e^{2\pi i\cdot 0\frac{1}{D_{n}}},e^{2\pi i\cdot 1\frac{1}{D_{n}}},e^{2\pi i\cdot 2\frac{1}{D_{n}}},\ldots,e^{2\pi i\cdot(D_{n}-1)\frac{1}{D_{n}}})\). Thus, each base state has its own associated phase. To improve performance, we can add a small random factor to each state phase. In this case we only need to change the definition of \(P^{x_{0}}\) to \[P^{x_{0}}_{i}=\left\|\sum_{\begin{subarray}{c}\vec{x}\\ x_{0}=i\end{subarray}}e^{i\gamma_{\vec{x}}}e^{-\tau C(\vec{x})}\right\|, \tag{20}\] where \(\gamma_{\vec{x}}\) is the phase of the state \(\vec{x}\). ## 6 Degenerate case So far we have considered a non-degenerate case. However, in the degenerate case where we have more than one optimal combination, we have two or more peaks of exactly equal amplitude. In the phased method we can have the situation where two peaks have opposite phases, so that they cancel out and we cannot see the optimum. For this reason, we recommend that the phased version should not be used when the problem may be degenerate. Our method without phases avoids this problem: if at any step of the process the two peaks correspond to two different values of \(x_{n}\), we simply choose one of them, and the rest of the combination we obtain will be the one associated exactly with the chosen \(x_{n}\). In addition, if we want the other states of the degeneracy, we can do the same thing again, but choosing the other high amplitude component instead. ## 7 Complexity analysis Before analyzing the computational complexity, we emphasize that the tensor network can be further optimized: contracting the '+' tensors with the \(S\) tensors simply yields the same \(S\) tensors with the \(i\) index removed. The optimal tensor network is therefore exactly the same as in Fig. 1 b, but without the first layer of '+' tensors and their associated indexes in the \(S\) tensors. This tensor network is the one in Fig. 3 a. The complexity of contracting each of the tensor networks for a QUDO problem with \(N\) variables that can take \(D\) values is \(O(ND^{3})\). This is because we have to apply the contraction scheme in Fig. 3, and we have to repeat it \(N-1\) times, so the total algorithm complexity is \(O(N^{2}D^{3})\). In the QUBO case, the complexity is \(O(N^{2}2^{3})\). ## 8 Conclusions We have developed a tensor network method that allows us to solve tridiagonal QUBO problems and one-neighbor QUDO problems efficiently. We have also seen how it performs in degenerate cases and what its computational complexity is.
Future lines of research include obtaining a method to determine \(\tau\) for each problem, for example with the idea we proposed in [11]. Another possible line will be the efficient resolution of more general QUBO and QUDO problems with this methodology and its application to various industrial problems. ## Acknowledgement The research leading to this paper has received funding from the Q4Real project (Quantum Computing for Real Industries), HAZITEK 2022, no. ZE-2022/00033.
2309.09058
QTOS: An Open-Source Quadruped Trajectory Optimization Stack
We introduce a new open-source framework, Quadruped Trajectory Optimization Stack (QTOS), which integrates a global planner, local planner, simulator, controller, and robot interface into a single package. QTOS serves as a full-stack interface, simplifying continuous motion planning on an open-source quadruped platform by bridging the gap between middleware and gait planning. It empowers users to effortlessly translate high-level navigation objectives into low-level robot commands. Furthermore, QTOS enhances the stability and adaptability of long-distance gait planning across challenging terrain.
Alexy Skoutnev, Andrew Cinar, Praful Sigdel, Forrest Laine
2023-09-16T17:49:17Z
http://arxiv.org/abs/2309.09058v1
# QTOS: An Open-Source Quadruped Trajectory Optimization Stack ###### Abstract We introduce a new open-source framework, Quadruped Trajectory Optimization Stack (QTOS), which integrates a global planner, local planner, simulator, controller, and robot interface into a single package. QTOS serves as a full-stack interface, simplifying continuous motion planning on an open-source quadruped platform by bridging the gap between middleware and gait planning. It empowers users to effortlessly translate high-level navigation objectives into low-level robot commands. Furthermore, QTOS enhances the stability and adaptability of long-distance gait planning across challenging terrain. Additional videos and materials can be found at [https://alexyskoutnev.github.io/Quadruped-Trajectory-Optimization-Stack](https://alexyskoutnev.github.io/Quadruped-Trajectory-Optimization-Stack). ## I Introduction In this paper, we introduce the Quadruped Trajectory Optimization Stack (QTOS), which is a software stack that spans motion planning, control software, hardware interfaces, and deployment tools. QTOS is a one-stop framework for simplifying the setup and use of open-source quadruped robots for research. Many high-performance quadrupeds such as Unitree A1 [23], Anymal [14], and Spot [10], can be expensive for a new research lab, and the attached proprietary software can make comprehensive system testing difficult [7]. Because of this, open-source quadrupeds are gaining popularity, yet a critical gap exists in readily available open-source software for designing, verifying, and implementing advanced legged locomotion [26]. Many popular open-source projects have tried providing an end-to-end framework for quadruped control, and navigation such as Quad-SDK [19], WoLF [22], but none have been built down to directly communicate with an open-source quadruped system. Open-source quadruped robots, such as, SOLO [12], Oncilla [25], and Stanford doggo [16], facilitate collaboration and expedite troubleshooting for quadruped hardware issues, and lead to accelerated development cycles and robust hardware frameworks through community-driven effort [3]. To address this need, we present the Open-source Quadruped Trajectory Optimization Stack (QTOS) framework, emphasizing its role as a comprehensive solution for simplifying advanced motion planning on the quadruped platform, SOLO12. QTOS provides intuitive tools for translating high-level navigation tasks into low-level commands, with a design emphasis on simplicity and portability. With the development of QTOS, we built a modern locomotion framework by integrating a state-of-the-art gait planner with a newly constructed global motion planner and end-effector controller that reliably generates and tracks long-distance stitched gait plans. Along the way, we addressed the community's need for a standardized workflow that spans planning, control software, hardware interface, and deployment tools [6]. QTOS was developed to address the practical demand for a workflow that facilitates the design and the implementation of complex legged locomotion planning tasks following the setup of open-source quadruped hardware. To the best of our knowledge, SOLO is the only open-source robot that is able to execute dynamic motion plans and was used as the testing platform featured in this paper [12]. 
Specifically, SOLO12 by PAL Robotics is an open-source torque-controlled quadruped robot system that uses off-the-shelf components such as high-torque brushless DC motors and 3D printed components, developed by The Open Dynamic Robot Initiative (ODRI) [12]. SOLO12 unifies open-source hardware, firmware, and middleware within a single ecosystem. QTOS plays a crucial role in bridging the gap between the middleware and the realization of dynamic motion planning, effectively addressing the practical requirements that arise along the way by offering an integrated platform. Our work has three main contributions: (1) We introduce QTOS as a full-stack interface for planning and control on quadruped systems. (2) We have developed a software toolkit and a streamlined workflow aimed at expediting the testing and development cycle for the SOLO12 robot. Finally, (3) we demonstrate the effectiveness of stitched motion planning facilitated by QTOS in various simulation tasks and discuss the reliability of long-distance stitched gait plans. Fig. 1: Trajectory generated by the QTOS local planner while performing a climbing task. In this environment, the robot successfully overcomes different challenging terrains in simulation, showcasing the adaptability of QTOS. ## II Background and Overview Significant gaps remain in the quadruped ecosystem between working hardware and the implementation of advanced motion plans on an open-source quadruped robot. We have extended and improved features from a range of open-source code such as ODRI [12], TOWR [29], and RViz [15], and we introduced new features aimed at simplifying the physical realization of legged locomotion tasks. #### II-A1 Cross-Platform Support Many robotics-based packages depend on Linux-based headers, which can inadvertently exclude potential users. To facilitate cross-platform accessibility, we employed both the containerization tool Docker [11] and the package management system Anaconda [1]. #### II-A2 UX Extension and Physical Hardware Support After assembling the robot, calibrating the incremental encoders in each limb is crucial. This is achieved by sweeping and saving the index pulse positions. We noticed that the calibration procedure provided in the ODRI software is time-consuming and often results in a misaligned starting state. QTOS streamlines the necessary setup phase for robot operation by employing a state machine and an improved sweeping algorithm, resulting in an automated and convenient calibration procedure. In the common case of detecting the wrong index pulse, as we will discuss later, we provide command line arguments for a straightforward compensation of the misaligned joints. Furthermore, we needed additional diagnostics to track the robot's health, so we have introduced improved real-time data monitoring for user experience (UX). The improved UX enables users to assess controller performance and vital robot state statistics. This includes information on packet communication and controller operational states. Moreover, we have enhanced the user interface to improve the overall user experience for testing and evaluating trajectories and efficiently utilizing the software's capabilities. These enhancements streamline the process of initiating and executing trajectories through the implementation of intuitive state-based user commands.
#### II-A3 Unified, Improved, Refactored SDK The original SOLO12 SDK relies on multiple external dependencies such as ROS2 [18] and many sparsely connected repositories in the ODRI ecosystem. While these dependencies can extend the functionality of the software, they also introduce complexity and potential compatibility issues [21, 9]. This complexity can make it challenging for users to set up and maintain their robotic systems efficiently. To address this, we have improved the original source code by eliminating dependencies and functions and refining vital methods. As a result, we have consolidated the SDK functionality into a single executable. All of the extended and improved SDK functionality has been consolidated into the SOLO12_SDK repository. SOLO12_SDK provides a comprehensive and cohesive toolkit for users working with the SOLO12 robot. ### _Stitched Motion Planning_ An integral module of QTOS is the continuous trajectory optimizer used to generate gait plans for the SOLO12 robot. We selected TOWR [29] as the preferred gait planning solver: it is performance-efficient without compromising reliability, unlike solvers that decompose the optimization problem into several sub-problems [5]. TOWR is able to automatically determine the gait sequence, step-timing, footholds, end-effector motions, and 6-DoF body coordinates without the need to solve a mixed-integer or a complementarity problem [30][4]. Its flexibility in regulating the gait timing sequences is vital for constructing stable foot placements on elevated surfaces and allows for complete high-level autonomy. However, TOWR is prone to producing infeasible gaits for long-distance navigation tasks and requires intricate motion composition to robustly produce online trajectories. In this work, we discover that online trajectory planning is possible without the need for a complex MPC controller [5]. Using a dynamic look-ahead process that targets stable state configurations, we utilize trajectory stitching to produce robust gaits for real-time motion planning. ## III Software Architecture We introduce a new open-source framework, coined QTOS, which integrates a global planner, local planner, simulator, controller, and robot interface all in one package. To complete the end-to-end integration, QTOS also includes tools for hardware calibration, diagnostic monitoring, and an intuitive user interface to operate the robot. The full-stack software framework streamlines quadruped planning, control, simulation, and communication for researchers and engineers, allowing them to emphasize core algorithm development rather than software tooling and infrastructure. Fig. 2: The QTOS system architecture follows a hierarchical structure where a high-level navigation task is translated into low-level robot commands. Consequently, QTOS follows a modular structure that facilitates customization, as shown in Figure 2. A high-level navigation task is parsed through four layers of the stack before being consumed by the robot as a low-level data package. ### _Global Planner_ At the top of the stack is the global planner, responsible for computing a global trajectory that can be divided into solvable segments for the local planner. The global planner guides the robot from the initial state to a user-provided goal state while promoting feasibility for each local trajectory plan through a dynamic look-ahead search process.
At each update step, this search process identifies a future trajectory state configuration where all end-effectors are in contact with the ground, serving as the new starting configuration state. Terrain information is supplied to the global planner via a 2.5D height map, corresponding to the robot's working environment. As part of its processing, the global planner simultaneously searches for and determines a feasible and nearly optimal trajectory plan. Regions potentially deemed infeasible, such as walls or steep curves, are evaluated using a custom feasibility search strategy (FSS), illustrated in Figure 3. The FSS algorithm performs a grid search for groups of nodes that violate the end-effector kinematics height constraint. A node is flagged as violating when the height difference between it and its neighbors exceeds a platform-determined height-deviation parameter. Then, FSS creates a convex hull that encloses the group of potentially infeasible nodes and utilizes multi-threading to pair bilateral starting and goal points inside or near the convex hull. Each pair is tested as a micro-trajectory (a trajectory with a magnitude of less than 0.2 meters) and assigned true or false based on the feasibility state of the local planner. In this manner, the original 2.5D height map is transformed into a Boolean grid map, which is then made available to a tailored A* search algorithm that finds the shortest path and converts it to a cubic spline. Keep in mind that a different type of feasibility test could be performed in the FSS algorithm based on the user's preferences for speed and accuracy. The global trajectory plan is divided into multiple local gait plans, and this division is governed by a step-size parameter that controls the lookahead distance along the global trajectory spline. During runtime, the current local gait plan is actively tracked, while in the background, the next local gait plan and the concluding segment of the previous trajectory plan undergo optimization by the trajectory solver. Once the next plan is completely solved, the global planner stitches together these two plans, considering the robot's current state. This process repeats with the new next plan. The global planner operates at the same rate as a full gait cycle (0.5 Hz) and can react to disturbances or unexpected obstacles by adjusting the start-goal state pair for the subsequent gait plan. ### _Local Planner_ The local planner generates a gait trajectory based on the initial and destination states in \(\mathrm{SE}(3)\), which are provided by the global planner. At each node along the trajectory, the end-effector and 6-DoF body motion are calculated using a modified TOWR solver. The modified TOWR executable can accommodate custom height map constraints specified by the user-defined terrains through the QTOS map generation API. Additionally, it was discovered that using a custom gait pattern significantly enhances the stability of the gait trajectory for the lower-level modules of the stack, contributing to the generation of more feasible trajectories. To ensure broad accessibility, QTOS encapsulates the Linux executable of TOWR within a Docker container [11], enabling platform-independent API calls between the user's system and the trajectory solver. The TOWR container achieves an update frequency of 2 Hz. ### _Controller_ The controller computes the desired joint angle and joint velocity, or the desired torque command for the embedded controller located inside the robot.
Each trajectory point is processed through an inverse kinematics solver and transformed into a torque command by a PD controller. The controller has an update frequency of 1000 Hz. #### III-C1 Inverse Kinematics We use the Damped Least Squares (DLS) method [28] to find the generalized reference joint space position \(q_{\text{ref}}\) and reference joint space velocity \(\dot{q}_{\text{ref}}\). At each time step \(t_{n}\), the difference between the desired leg state \(x_{\text{des}}\in R^{3}\) and the current leg state \(x_{\text{cur}}\in R^{3}\) is calculated as the error vector \(e=x_{\text{des}}-x_{\text{cur}}\). Likewise, the end-effector Jacobian \(J_{e}\) for the legs is computed and used to obtain the least-squares solution \[\dot{q}_{\text{ref}}=J_{e}^{\dagger}e \tag{1}\] Here, \(J_{e}^{\dagger}\) denotes the pseudo-inverse of \(J_{e}\). Afterward, the joint space position is updated with the damping factor \(\lambda\), \[q_{\text{ref}}=q_{\text{ref}}+\lambda\dot{q}_{\text{ref}} \tag{2}\] The DLS method is repeated until the error \(e\) is sufficiently small or the maximum number of iterations is reached. Keep in mind that joint-velocity control was used over joint-position control in the control software for two reasons. Firstly, joint-velocity control enhances dynamic and responsive feedback [17]. Secondly, and notably, joint-velocity control effectively mitigates drift in long-distance gait plans. Fig. 3: The global trajectory plan is computed via a 2-step transformation procedure. The FSS algorithm concurrently processes the entire environment until it is transformed into a Boolean map. In the last step, a tailored A* search is performed and a global trajectory spline is constructed. The three maps were generated from QTOS. #### III-C2 PD Controller The PD controller provides the feedback torque needed to realize the desired trajectory by the following control law given the system states \(q\) and \(\dot{q}\), \[\tau_{\text{ref}}=K_{p}(q_{\text{ref}}-q)+K_{d}(\dot{q}_{\text{ref}}-\dot{q}) \tag{3}\] where \(K_{p}\) and \(K_{d}\) are the proportional and derivative gains respectively. ### _Robot Interface_ The robot interface is divided into multiple components: (1) the setup component, which is responsible for configuring the physical robot to be in a ready-to-run state, (2) the communication component, which handles the data packages between the user's system and the ESP32 microprocessor masterboard [20], and (3) system diagnostics and data collection tools for debugging trajectory plans and other hardware-related issues. #### III-D1 Setup Component Before performing any tasks, the SOLO12 requires an initial calibration process because its encoders are incremental rather than absolute. The calibration procedure requires the determination of the nearest encoder index pulse position with respect to a desired reference zero position; we refer to this as the "hard" calibration procedure. Hard calibration needs to happen once after the hardware has been put together, or after there have been changes to the hardware. After the hard calibration has been completed once, we need to determine the nearest index pulse with respect to the initial pose of the robot every time the quadruped robot is powered up. We do this by sweeping the joints near their initial positions (the pose of the robot when the power is turned on), and we refer to this process as the "soft" calibration procedure.
SOLO12 actuators have a reduction ratio of 9:1, which means one full rotation of the motor shaft is equivalent to 9 index pulses on the encoder. We approximately bring the joints to their desired reference positions to consistently calibrate the quadruped robot to the reference position. The sweeping algorithm is based on a sinusoidal function, \(f(q,t)\), that oscillates between minimum and maximum joint space values \(q\in[0,2\pi/9]\) for each robot actuator. The current joint angle \(q\), max-angle search amplitude \(A\), run time \(t\), and the sign function, \[\mathrm{sign(x)}:=\begin{cases}x&\text{Joint axis is clockwise}\\ -x&\text{Joint axis is counter-clockwise}\end{cases}\] is used to determine the next reference joint position \(q_{\text{ref}}\). \[q_{\text{ref}}=f(q,t)=9q[A-A\cos(2\pi t)]\,\mathrm{sign(q)} \tag{4}\] Once the encoding zeros are found and recorded, the robot is configured to hold state where the robot tracks the initial state of the trajectory plan. Upon receiving the user's I/O command, it transitions to the run state where the trajectory plan is executed. The setup sequence is represented by a finite state machine as shown in Figure 4. Note that the I/O command can also be executed automatically for online trajectory generation. Even after a successful hard calibration, during the subsequent soft calibration attempts, we observe that some joints may be offset by the distance between two index pulses due to the reduction ratio. For a successful soft calibration, the joints initially must be put within \(\pm\pi/9\) radians (\(\approx\pm 20^{\circ}\)) of the desired reference position. Due to gravity and other factors, it is common for one or two encoders to fall outside of this range at power-on, which results in misaligned starting positions. To correct this misalignment, we manually specify the desired index pulse offsets in the command line arguments provided to the robot interface. #### Iii-D2 Communication Component The communication component is responsible for sending and receiving packets between the computer and the ESP32 microprocessor masterboard. The SOLO12 masterboard has two modes of control: (1) On-board PD control, and (2) torque control. The communication component communicates the desired positions and velocities in the case of the on-board PD control, and the desired torques in the case of torque control. The on-board PD controller tracks the desired joint positions and velocities, or torques, using user-specified gains. This communication occurs at 1000 Hz and it is important to be carried out in a real-time manner for the accurate realization of the desired trajectory. However, the user interface runs on a user operating system, and the real-time behavior of the interface and the communication component needs to be monitored to ensure timing accuracy. To this end, the user interface communication component keeps track of timing statistics and prints them on screen for the user to verify that the communication component is running practically in real-time. In this case, "practically" real-time means that every update from receiving sensor data, processing, and the sent commands are completed within the 1 ms time-frame. Fig. 4: The state machine described above configures the quadruped robot into the Run state. Calibration via Sweep to find the desired zero positions of all joints. After the Sweep is complete and upon user command, it transitions to Hold, tracking the initial starting state. 
Finally, upon user command, it enters the Run state to execute the trajectory plan. ### _User Tools_ QTOS contains user tools to facilitate rapid development and hardware testing both in simulation and on real hardware. We present these tools in this subsection. #### III-E1 Simulator The original TOWR trajectories were tested with RViz, but the absence of a physics engine made it challenging to assess the feasibility of the generated gait plans. During the design of QTOS, we prioritized a convenient and accurate simulation tool, leading to the integration of a physics simulator, Pybullet [8]. With QTOS's simulator, users can create terrain environments tailored to their workspace, complete with visual modeling and trajectory tracking. The custom terrain environment is represented as an \(n\) by \(n\) grid of heights. Figure 5 illustrates an example terrain map file that users can edit, and it displays the resulting terrain. #### III-E2 Data Monitoring Tools QTOS offers data monitoring tools for real-time analysis of critical performance metrics. The global plan is visualized as a Matplotlib plot [13]. Additionally, reference states are logged alongside operational states, and these data sets are graphically represented using Matplotlib plots to evaluate the controller's tracking performance. ## IV Experiments In this section, we evaluate the performance of QTOS gait trajectories in the context of three key navigation tasks: (1) walking, (2) climbing, and (3) collision avoidance. We primarily focus on the performance and reliability of the stitched motion plans. Furthermore, we describe how QTOS handles stitched trajectories in order to determine the maximum time that QTOS can reasonably track. We aim to address the following questions using the experimental data: (1) How well does QTOS perform across a diverse set of navigation tasks using stitched motion plans? (2) What kind of behavior can we expect from QTOS when generating long-distance gait plans? ### _Navigation Experiments_ In our navigation experiments, we define success as the robot's ability to navigate from the starting point to the specified goal without falling over. Failure occurs if the robot topples over or if it leaves the map boundary. To assess the consistency of the plans, we calculate the average distance traveled for each task. Additionally, we conduct a performance analysis by computing the average tracking error per second between the reference center of mass \(x_{\text{ref}}\) and the realized center of mass \(x_{\text{real}}\) using \(n\) data points at a tracking rate \(f_{\text{track}}\) of 1000 Hz. \[\text{Tracking Error Rate}:=\frac{\|x_{\text{ref}}-x_{\text{real}}\|_{2}}{n}\cdot f_{\text{track}}\] With the built-in Pybullet simulator, QTOS is benchmarked over 20 simulated runs for each of the following navigation tasks; the environment is randomly generated for every task except the walking task. 1. **Walking:** The robot is tasked with walking in a straight line, covering a distance of 2 meters while maintaining stability and balance. 2. **Avoidance:** The robot is tasked with navigating around two obstructive walls while covering a distance of 2 meters. 3. **Climbing:** The robot climbs over elevated terrain in its path while ensuring stability, covering a distance of 2 meters.
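As an illustration of how the Tracking Error Rate defined above can be computed from logged data, the following sketch (ours, not part of QTOS; array names are assumptions) evaluates the metric from reference and realized center-of-mass trajectories sampled at 1000 Hz, implementing the formula literally.

```python
import numpy as np

def tracking_error_rate(x_ref, x_real, f_track=1000.0):
    # x_ref, x_real: (n, 3) arrays of reference / realized COM positions sampled at f_track Hz.
    n = x_ref.shape[0]
    return np.linalg.norm(x_ref - x_real) / n * f_track
```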
However, there is a noticeable performance gap between the Avoidance task and the Walking task, indicating a challenge in controller tracking. In the Avoidance task, the trajectory stitching within the global planner remains unaffected by sudden changes in direction, unlike the body controller. Consequently, the global planner tends to advance ahead of the current controller plan, resulting in larger-than-normal tracking errors. Nevertheless, the gait plans generated by QTOS exhibit sufficient stability to enable the robot's successful completion of the task. It is worth noting that, in the Avoidance task, most failures occurred because the controller lacked the agility to reactively push the robot away from the map's boundary, or because the gap between the wall and the edge of the map was too small for the robot to fit through. In the Climbing task, it was evident that the failures typically occurred when one of the robot's legs made unmodeled contact with the terrain, leading to its worst performance in terms of the average distance traversed. This highlights a significant flaw in TOWR's constraint formulation, which could potentially result in non-recoverable drift. However, since QTOS requires long-distance plans to be incrementally solved with stitched gait plans, the likelihood of terrain clipping along the trajectory plan is significantly reduced. \begin{table} \begin{tabular}{c c c c} \hline \hline Task & Distance (m) & Success (\%) & Tracking Error (m/s) \\ \hline Walking & 2.0 & 100 & 76.4 \\ Avoidance & 1.85 & 80 & 96.2 \\ Climbing & 1.37 & 95 & 70.6 \\ \hline \hline \end{tabular} \end{table} TABLE I: The results obtained from 20 simulated runs, with each row entry indicating the following metrics: average distance traversed, success rate percentage, and tracking error, respectively. Fig. 5: An example height map file being converted to a virtual terrain profile in the simulator. The user-defined inputs are highlighted with grey shading. ### _Stitched Trajectory Tracking Experiment_ In this section, we evaluate the reliability of stitched trajectory gait planning over a 10-minute time period for a walking task. We track both the reference center of mass and the robot's center of mass positions to assess the degree of system drift and the performance of the controller. This experiment focuses on addressing two crucial questions: (1) Can the optimizer effectively handle continuously fed stitched trajectory plans? (2) Can the controller reasonably track trajectories over an extended period of time and distance without encountering significant drift? ### _Quantitative Evaluation of Stitched Trajectory Tracking_ We notice that the tracking error maintains a constant rate of change over the distance traveled by the robot. This suggests that the controller can handle a long-distance gait plan without significant loss in performance. Similarly, in Figure 7, the robot drifted to the left by approximately 0.1 meters over the 10-minute period, while the planned trajectory advanced about 0.2 meters ahead of the center of mass along the x-axis. Overall, the drift in the system can be compensated for by the controller, especially between time steps 0 and 20k, where the joint-velocity controller corrects the slight leftward drift. However, around time step 40k, the robot starts to exhibit significant leftward drift, indicating model alignment errors between the optimizer and the robot's true model.
From these observations, QTOS verifies that continuously fed stitched trajectory plans produced by TOWR are feasible and perform well in walking terrain. ## V Conclusion In this paper, we introduced QTOS, an end-to-end software stack for quadruped locomotion. QTOS fills a crucial gap in the open-source quadruped community by integrating motion planners, solvers, control strategies, and a robot interface. This integration results in a user-friendly framework capable of generating stitched motion plans in simulation. The simulation experiments conducted in this study highlight QTOS's proficiency in performing walking, avoidance, and climbing tasks. Additionally, the stitched trajectory tracking experiments demonstrate QTOS's effectiveness in generating long-distance gait plans. In future iterations of QTOS, we plan to conduct extensive testing of generated gait plans with a physical SOLO12 robot. We also have intentions to enhance the framework's capabilities, either in the domain of reinforcement learning [24][27] or nonlinear optimization [2], with the objective of producing more agile gait plans through improvements in the local planner module. Fig. 6: Timelapse of deploying QTOS in simulation. The three tasks: (1) walking, (2) avoidance, and (3) climbing can be seen from top to bottom respectively. Fig. 7: The reference and the robot’s center of mass along the planned trajectory. The middle plot illustrates how the joint-velocity controller effectively corrects minor perturbations along the trajectory. The three plots were generated from QTOS.
2301.00138
Chaos and Entanglement in Non-Markovian Optomechanical Systems
We study the chaotic motion of an optomechanical system coupled to a non-Markovian environment. We show that the environmental memory time can significantly affect chaos in an enhancing way. In addition to classical chaotic motion, the quantum entanglement in the presence of chaos is investigated. It is found that both the environmental memory and chaos can lift up bipartite entanglement in a non-linear optomechanical system. These observations may help expand our understanding of the transition from classical to quantum dynamics.
Pengju Chen, Nan Yang, Austen Couvertier, Quanzhen Ding, Rupak Chatterjee, Ting Yu
2022-12-31T06:49:12Z
http://arxiv.org/abs/2301.00138v1
# Chaos and Entanglement in Non-Markovian Optomechanical Systems ###### Abstract We study the chaotic motion of an optomechanical system coupled to a non-Markovian environment. We show that the environmental memory time can significantly affect chaos in an enhancing way. In addition to classical chaotic motion, the quantum entanglement in the presence of chaos is investigated. It is found that both the environmental memory and chaos can lift up bipartite entanglement in a non-linear optomechanical system. These observations may help expand our understanding of the transition from classical to quantum dynamics. ## I Introduction Optomechanics has provided a powerful tool to study both classical and quantum dynamics [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15]. By using non-Markovian style "baths" interacting with such systems of interest, one is able to fine tune a memory time parameter found in these non-Markovian reservoirs and observe their effects on the system in question. In particular, we focus on the effects of this memory time parameter on the signature of classical chaos and quantum entanglement within our optomechanical system. While many have studied Markovian open systems using the widely used ansatz of the Born-Markov approximation, which assumes a memory-less environment and a weak coupling between system and environment [16; 17; 18; 19; 20], the theory of non-Markovian quantum open systems is less widely used but more realistic for many true experimental systems. With the development of non-Markovian quantum-state diffusion (NMQSD) [21; 22; 23; 24; 25; 26; 27; 28; 29], we are able to study the non-Markovian effects of open optomechanical systems where the bath of the system has a 'memory' such that it may feed information back to the system in a non-trivial way. Classical chaos has exhibited fascinating features in connection with various topics in math and physics, such as fractal dimension [30], dynamical systems [31], and of course, quantum systems [32], while showing numerous applications [33; 34]. Many attempts have been made to study chaotic dynamics of quantum systems, including semi-classical orbit methods [35; 36; 37; 38] and statistical and spectral chaos using random matrix methods [39; 40]. The route to chaos generated by an optomechanical system within a Markov environment has been studied in [6]. This naturally leads to the question of what would happen to such a system embedded in non-Markovian environments. This is the first part of our investigation below. As entangled states may be generated between these coupled optomechanical quantum systems, it is expected that the qualitative nature of the dynamics can affect entanglement [41]. This will be the second part of our study, investigating how signatures of classical chaos can affect quantum entanglement. Different from a linearized system, our system has a non-linear Hamiltonian resulting in non-Gaussian states. Recent methods [42; 43] are employed to determine the negativity of our entangled state, and a time-averaged entanglement measure is used to provide a comparison with the prototypical quantitative measure of classical chaos, the maximal Lyapunov exponent (\(\mathbf{LE}\)). Additionally, it has been shown that a non-Markovian environment plays a pivotal role in entanglement dynamics [44; 45; 46], which has proven useful in preserving and enhancing entanglement [47; 48] in optomechanical systems. It would be interesting to see whether this is still the case in the presence of classical chaos.
The paper is organized as follows. Section II will give a detailed description of our optomechanical system coupled to a non-Markovian environment. Sections III and IV will provide the results of our numerical simulations, with Section III focusing on the non-Markovian reservoir's influence on classical chaos. Section IV explores the relation between entanglement and chaos within the optomechanical non-Markovian open system. Finally, a conclusion of our results is given. ## II Optomechanical model coupled to non-Markovian environment In this section, we introduce the optomechanical system and the non-Markovian environment that will be used to investigate chaos and entanglement. The optomechanical system is one in which a mechanical mode couples with an optical mode, as exhibited in Figure 1. Assuming \(\hbar=1\), the system is described by the Hamiltonian [1] given by \[H_{s}=\big{[}-\Delta+g_{0}(b+b^{\dagger})\big{]}a^{\dagger}a+\Omega b^{\dagger}b+\alpha_{L}(a^{\dagger}+a), \tag{1}\] where \(a\) and \(b\) are the annihilation operators of the optical mode and mechanical mode which satisfy the commutation relations \([a,a^{\dagger}]=1\) and \([b,b^{\dagger}]=1\). \(\Delta\) is the detuning parameter and \(\Omega\) is the cantilever frequency. The detuning is defined as \(\Delta\equiv\omega_{L}-\omega_{\text{cav}}\), where \(\omega_{L}\) is the pumping laser frequency and \(\omega_{\text{cav}}\) is the cavity resonance frequency. The non-Markovian master equation governing the system, derived in Appendix A, reads \[\dot{\rho}=-i[H_{s},\rho]+\Gamma D\big{[}b,\rho\big{]}+\bigg{\{}f_{0}(t)[a,\rho a^{\dagger}]+if_{1}(t)[a^{\dagger},[H_{s},a]\rho]+f_{2}(t)[a^{\dagger},[a^{\dagger},a]a\rho]+H.C.\bigg{\}}, \tag{2}\] where \(H.C.\) stands for Hermitian conjugate. The Lindblad term \(\Gamma D\big{[}b,\rho\big{]}=\Gamma\big{\{}b\rho b^{\dagger}-\frac{1}{2}(b^{\dagger}b\rho+\rho b^{\dagger}b)\big{\}}\) represents the Markov mechanical bath. \(f_{0}\), \(f_{1}\) and \(f_{2}\) are time-dependent coefficients which are given by \[\begin{split}& f_{0}(t)=\int_{0}^{t}\alpha(t,s)\;ds,\\ & f_{1}(t)=\int_{0}^{t}\alpha(t,s)(t-s)\;ds,\\ & f_{2}(t)=\int_{0}^{t}\int_{0}^{s}\alpha(t,s)\alpha(s,u)(t-s)\;du\;ds.\end{split} \tag{3}\] \(\alpha(t,s)\) is the Ornstein-Uhlenbeck (O-U) correlation function \[\alpha(t,s)=\frac{\kappa\gamma}{2}e^{-\gamma|t-s|}, \tag{4}\] where \(1/\gamma\) is the memory time, and \(\kappa/\Omega=1\) is the optical damping rate. We express the system parameters in units of \(\Omega\) and introduce the dimensionless time parameter \(\tau=\Omega t\). Additionally, we introduce two dimensionless variables [50; 51; 6] \[\sigma=g_{0}/\kappa,\;P=\frac{8\alpha_{L}^{2}g_{0}^{2}}{\Omega^{4}}. \tag{5}\] The pumping parameter \(P\) gives the strength of the laser input of the cavity. The quantum-classical scaling parameter \(\sigma\) is set to be \(\sigma=x_{\rm zpt}/x_{\rm res}=g_{0}/\kappa=0.1\), where \(x_{\rm zpt}=\sqrt{\hbar/(2m\Omega)}\) are the zero-point fluctuations of the cantilever (with mass \(m\)). Furthermore, \(x_{\rm res}\) is the resonance width of the cavity, and \(g_{0}\) is the optomechanical single-photon coupling strength [51; 6]. Since \(x_{\rm res}\) is a classical quantity, \(\sigma\) shows how close the quantum dynamics of the optomechanical system is to the classical limit, with \(\sigma\to 0\) corresponding to the classical limit. We use the re-scaled creation and annihilation operators \(\langle a_{1}\rangle=[\Omega/(2\alpha_{L})]\langle a\rangle\), \(\langle b_{1}\rangle=(g_{0}/\Omega)\langle b\rangle\).
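As a concrete illustration, the memory coefficients \(f_{0}\), \(f_{1}\) and \(f_{2}\) of Eq. (3) can be evaluated numerically for the O-U kernel of Eq. (4). The Python sketch below uses plain quadrature; the parameter values are purely illustrative.

```python
import numpy as np
from scipy.integrate import quad

kappa, gamma = 1.0, 1.0  # optical damping rate and inverse memory time (illustrative units)

def alpha(t, s):
    # Ornstein-Uhlenbeck correlation function, Eq. (4)
    return 0.5 * kappa * gamma * np.exp(-gamma * abs(t - s))

def f0(t):
    return quad(lambda s: alpha(t, s), 0.0, t)[0]

def f1(t):
    return quad(lambda s: alpha(t, s) * (t - s), 0.0, t)[0]

def f2(t):
    # f2(t) = int_0^t alpha(t,s) (t-s) [int_0^s alpha(s,u) du] ds;
    # the inner integral over u equals f0 evaluated at s.
    return quad(lambda s: alpha(t, s) * (t - s) * f0(s), 0.0, t)[0]

for t in (0.5, 2.0, 10.0):
    print(f"t = {t}: f0 = {f0(t):.4f}, f1 = {f1(t):.4f}, f2 = {f2(t):.4f}")
```

In the Markov limit (\(\gamma\to\infty\) at fixed \(\kappa\)), \(f_{0}\to\kappa/2\) while \(f_{1}\) and \(f_{2}\) vanish, consistent with the remark in Appendix A that the zeroth-order term recovers the standard Markov QSD.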
Applying the trace expectation \(\langle\hat{A}\rangle=\mathrm{tr}(\hat{A}\hat{\rho})\), the complete set of dynamical equations can be seen in Appendix B, Eq. (B1); here we give the first two equations \[\begin{split}\frac{d}{d\tau}\langle a_{1}\rangle=&-i(1+f_{1})\bigg{\{}\langle a_{1}b_{1}\rangle+\langle a_{1}b_{1}^{\dagger}\rangle-\frac{\Delta}{\Omega}\langle a_{1}\rangle+\frac{1}{2}\bigg{\}}-\frac{f_{0}^{*}+f_{2}}{\Omega}\langle a_{1}\rangle,\\ \frac{d}{d\tau}\langle b_{1}\rangle=&-i\bigg{\{}\frac{P}{2}\langle a_{1}^{\dagger}a_{1}\rangle+\langle b_{1}\rangle\bigg{\}}-\frac{\Gamma}{2\Omega}\langle b_{1}\rangle.\end{split} \tag{6}\] Eq. (B1) forms a set of closed dynamical equations which will be used to investigate both chaos and entanglement of our system. Figure 1: Schematic diagram of the optomechanical system, where the light field is considered to be in a non-Markovian environment. The mechanical bath is set to be Markov to account for mechanical loss. A semi-classical (SC) approximation is made since the coupling \(g_{0}\) is weak. Since the Hamiltonian is non-linear, an exact set of equations would have too many terms. Therefore, we use an SC approximation such that terms higher than second order are split into products of lower-order moments. ## III Chaos and non-Markovian environments The mechanism of how a non-Markovian environment affects a chaotic system is the focus of this section. In general, chaotic systems tend to be very sensitive to parameter changes. The simulation methods used below allow one to manually adjust such a parameter, the non-Markovian memory time, which can have a delayed effect on the dissipative process of our open system. Such an effect may significantly influence chaos generation and therefore, we examine various memory times and their effects on chaos generation. Our simulation is mainly based on observing the optomechanical system while changing the memory time of the optical bath, represented by the parameter \(\gamma\) (the inverse of the memory time). Furthermore, we vary the system parameters \(\Delta\) and \(P\) to get a comprehensive picture of the chaos distribution of our system. The initial states are set to be the vacuum states. We use the maximal Lyapunov exponent (\(\mathbf{LE}\)) as the indicator of chaos, which is calculated using Wolf's method of phase reconstruction [52]. If the \(\mathbf{LE}\) is positive, the system is considered to be chaotic. By taking the real and imaginary parts of the time series of \(\langle a_{1}\rangle\) and \(\langle b_{1}\rangle\) from Eq. (B1), we can generate the four-dimensional phase diagrams based on different memory times (the inverse of \(\gamma\)). Figure (2) below illustrates the system dynamics under different memory times for a fixed parameter point (\(P=1.25\) and \(\Delta=-0.70\)). The decrease of \(\gamma\), i.e., the increase of memory time, results in the system changing from regular to chaotic, which serves as a key example of memory-induced chaos. Next, we plot the bifurcation diagrams which mark the stable points of the cantilever oscillation. We fix \(P=1.37\). At \(\gamma=10\), the period-doubling bifurcation takes place, as is exhibited in Fig. (3a), where the chaotic region is bounded within a small segment \(\Delta\in[-1.03,-0.92]\). For the \(\gamma=2\) case, Fig. (3b) shows the chaotic region expanding to \(\Delta\in[-1.13,-0.99]\) while a new chaotic region at \(\Delta\in[-0.83,-0.52]\) emerges, with some inter-adjacent regular regions inside. The comparison between Fig.
(3a) and (3b) clearly indicates that a longer memory time can not only expand chaotic regions but also induce chaos in previously non-chaotic areas. Note that \(\gamma=2\) is not far away from the Markov limit, but it still causes a significant change to the chaotic dynamics, further indicating that the chaos is very sensitive to parameter changes, including the memory time. To summarize the appearances of chaos, the \(\mathbf{LE}\) of every data point is plotted to form global chaos landscapes, as is shown in Fig. (4a) for \(\gamma=10\) and (4b) for \(\gamma=1\). The parameter ranges are set to be \(P\in[0.8,1.6]\) and \(\Delta\in[-1.4,-0.4]\). As the pumping increases, we see more chaotic motion [50; 6; 51]. In both cases, the chaotic regions have some fine structures of inter-adjacent regular regions. For our model, we have shown that the non-Markovian condition expands the chaotic regions while lowering the pumping bar for chaos generation. For the case of \(\gamma=10\) (Fig. 4a), chaos emerges at a pumping level of \(P=1.37\), while for \(\gamma=1\) (Fig. 4b), the level is lowered to \(P=1.08\). Comparing Fig. (4a) and (4b), we have seen that the increase of memory time can expand chaotic areas in the parameter plane and decrease the pumping threshold for chaos generation. While the amplitude damping is the major dissipating channel, the environmental memory can slow down the dissipation of energy. The dissipative dynamics will undergo temporal revivals due to the memory effect. This results in the emergence of chaos that requires less pumping energy, thus lowering the pumping threshold while expanding chaotic regions across the map. A similar phenomenon has been discussed in [48], where the memory-enhanced entanglement generation was also due to the back-flow in the dissipation channel in the form of information. Since energy carries information, our model can be understood in a similar fashion. ## IV Entanglement and chaos We now turn to the quantum aspects of our optomechanical system to investigate entanglement in the presence of chaos and memory. The optomechanical system we are studying is a weakly coupled bipartite system. There have been several studies focusing on chaos and its effects on entanglement [53; 54; 55; 56; 41; 57]. One previous study looks at the effects of environment memory on entanglement [48]. Since the non-linear Hamiltonian of our system (1) causes our physical states to be non-Gaussian, the conventional ways of calculating inseparability [58; 59; 60] would yield imaginary terms that cannot be ignored. Therefore, a non-conventional method is needed to calculate the entanglement of the two modes of our system. Here, we will use a hierarchy of necessary and sufficient conditions for the negativity of the partial transposition (NPT) in terms of observable moments [42]. Though this leads only to sufficient conditions for entanglement, it can be applied to a variety of quantum states (including non-Gaussian states). ### Entanglement Measurement A brief introduction of Shchukin and Vogel's (SV) criterion is provided here [42]. It is known that a bipartite quantum state is entangled if NPT is achieved [61; 43]. We start with a separable bipartite state \(\hat{\rho}\).
The condition for its separability is that its partial transposed density operator \(\hat{\rho}^{\rm PT}\) can be written in the following form (assuming that we are transposing the second Hilbert space state, which won't affect our result) \[\hat{\rho}^{\rm PT}=\sum_{n=0}^{\infty}p_{n}\hat{\rho}_{1}^{(n)}\otimes\hat{\rho}_{2}^{(n)\rm T}. \tag{7}\] Based on this property of separable states, we can employ NPT as a sufficient condition for inseparability, which is referred to as the Peres-Horodecki condition [61; 62; 63]. The separability is determined through a matrix of moments of the partial transposition, which is given by \[M_{pqrs,nmkl}=\langle a^{\dagger q}a^{p}a^{\dagger n}a^{m}b^{\dagger s}b^{r}b^{\dagger k}b^{l}\rangle^{\rm PT}. \tag{8}\] Shchukin and Vogel [42] introduced a way of ordering the moments \(M_{pqrs,nmkl}\) as follows \[\begin{split} 1,\ \langle a\rangle,\ \langle a^{\dagger}\rangle,\ \langle b\rangle,\ \langle b^{\dagger}\rangle,\ \langle a^{2}\rangle,\ \langle a^{\dagger}a\rangle,\ \langle a^{\dagger 2}\rangle,\ \langle ab\rangle,\\ \langle a^{\dagger}b\rangle,\ \langle b^{2}\rangle,\ \langle ab^{\dagger}\rangle,\ \langle a^{\dagger}b^{\dagger}\rangle,\ \langle b^{\dagger}b\rangle,\ \langle b^{\dagger 2}\rangle,\ \ldots\end{split} \tag{9}\] The moments \(M_{pqrs,nmkl}\) in Eq. (8) can be expressed in terms of the moments of the original state \[\langle a^{\dagger q}a^{p}a^{\dagger n}a^{m}b^{\dagger s}b^{r}b^{\dagger k}b^{l}\rangle^{\rm PT}=\langle a^{\dagger q}a^{p}a^{\dagger n}a^{m}b^{\dagger l}b^{k}b^{\dagger r}b^{s}\rangle. \tag{10}\] Figure 3: Bifurcation diagrams and corresponding Lyapunov exponents (\(\mathbf{LE}\)), with the laser pumping parameter \(P=1.37\) and detuning varying from -1.2 to -0.4. In Fig. 3(a), the chaotic region is bounded within a small segment \(\Delta\in[-1.03,-0.92]\). In Fig. 3(b), the chaotic region expands to \(\Delta\in[-1.13,-0.99]\) and a new chaotic region \(\Delta\in[-0.83,-0.52]\) emerges, with some inter-adjacent regular regions inside. Figure 2: Phase diagrams with different correlation frequencies \(\gamma\) at \(\Delta=-0.70\), \(P=1.25\). The coordinates \((Z_{1},Z_{2},Z_{3},Z_{4})\) are the real and imaginary parts of \(\langle a_{1}\rangle\) and \(\langle b_{1}\rangle\). The 4th coordinate \(Z_{4}\) is represented by scaled colours. As memory time increases (\(\gamma\) decreases), the dynamics goes from regular to chaotic. Fig. 2(c) exhibits the emergence of chaos as \(\gamma\) is lowered to 0.8. For our quantum state, it is convenient to use a higher-order test involving a small number of minors, which was introduced in [43]. The matrix determinant is given as follows (using the re-scaled \(a_{1}\) and \(b_{1}\)) \[D_{HO}=\left|\begin{matrix}1&\langle a_{1}b_{1}^{\dagger}\rangle\\ \langle a_{1}^{\dagger}b_{1}\rangle&\langle a_{1}^{\dagger}a_{1}b_{1}^{\dagger}b_{1}\rangle\end{matrix}\right|. \tag{11}\] If there exists a negative determinant \[D_{HO}<0, \tag{12}\] then the NPT has been demonstrated, which provides a sufficient condition for entanglement. Conveniently, the required mean values are already provided by the semiclassical equations of motion (B1), and the SC approximation is used such that \(\langle a_{1}^{\dagger}a_{1}b_{1}^{\dagger}b_{1}\rangle=\langle a_{1}^{\dagger}a_{1}\rangle\langle b_{1}^{\dagger}b_{1}\rangle\). We define \(\mathcal{N}\) to denote the negativity of the partial transposition (NPT) \[\mathcal{N}=\max[0,-D_{HO}].
\tag{13}\] A positive value for \(\mathcal{N}\) indicates that the states are entangled. We then define \(\mathbf{En}\) as the long-time average of \(\mathcal{N}(t)\) [56]. It provides an overview of the entanglement intensity, as given by \[\mathbf{En}=\frac{1}{T}\int_{0}^{T}\mathcal{N}(t)dt, \tag{14}\] where \(T\rightarrow\infty\). Since \(\mathbf{En}\) is only determined by the parameters \(P\), \(\Delta\) and \(\gamma\), we can calculate the average NPT (\(\mathbf{En}\)) of every data point, analogous to the \(\mathbf{LE}\) for chaos. Figure 4: Pictures of chaotic regions of the optomechanical system with different memory times plotted in the \(P\)-\(\Delta\) plane. The colour scale shows the value of the maximal Lyapunov exponent (\(\mathbf{LE}\)) of every data point. Comparing Fig. 4(a) and 4(b), the chaotic area expands as the memory time gets longer (\(\gamma\) decreases). Figure 5: The entanglement strength \(\mathbf{En}\) on the \(P\)-\(\Delta\) plane. The colour scale measures the value of \(\mathbf{En}\). The entanglement shows similar fine structures as the chaos. Figure 6: Bifurcation diagram, maximal Lyapunov exponent (\(\mathbf{LE}\)), comparing with average negativity of the partial transposition (\(\mathbf{En}\)), when \(\gamma=10\). Figure 7: Phase diagram and the corresponding long-term dynamical evolution of the NPT \(\mathcal{N}\), with \(P=1.4\). The first row shows \(\Delta=-1.1\) (non-chaotic) and the second row \(\Delta=-1.0\) (chaotic). The dynamical evolution of \(\mathcal{N}\) appears solid simply because it is oscillating very quickly. ### Simulation Results The following simulations show the quantum entanglement under the influence of chaotic motion and the non-Markovian environment. We consider two memory parameters, namely a close-to-Markov case and a non-Markovian case. First, we focus on the close-to-Markov case, with \(\gamma=10\). Overall, the map of the average NPT (\(\mathbf{En}\)) on the parameter plane (Fig. 5(a)) reflects almost the same fine structure as the chaos map (Fig. 4(a)). It is worth noting that very small \(\mathbf{LE}\) spikes coincide with sharp \(\mathbf{En}\) spikes. To provide a closer look, we show the bifurcation, \(\mathbf{LE}\), and \(\mathbf{En}\) diagrams in Fig. 6, fixing \(\gamma=10\). With \(P=1.3\), Fig. 6(a) shows two very small \(\mathbf{LE}\) spikes coinciding with two huge \(\mathbf{En}\) spikes. Fig. 6(b) demonstrates that the chaotic area \(\Delta\in[-1.066,-0.94]\) corresponds to an area of increased entanglement. As shown in Fig. (7), chaos can enhance the bipartite entanglement for the non-linear optomechanical system; our results are in tune with the observations for a quantum kicked top system [56]. In summary, the bipartite entanglement generation can be increased by classical chaotic dynamics. However, it should be noted that the \(\mathbf{En}\) level in chaotic areas (as shown in Fig. 5(a)) can be lower than in some non-chaotic areas on the parameter plane, meaning that the enhancement of entanglement by chaos is localized (on the parameter plane). Furthermore, in the non-Markovian \(\gamma=1\) case (Fig. 5(b)), we can still see some fine structures related to the chaos graph (Fig. 4(b)), where the \(\mathbf{En}\) level around chaotic areas is larger than in the \(\gamma=10\) case (Fig. 5(a)). In Fig. 8, we can observe the slight enhancement of entanglement by chaos. Yet, that effect dies down at \(\Delta>-0.78\), whereas in
Fig. 6 it terminates at \(\Delta>-0.74\). Overall, the \(\mathbf{En}\) level is higher than in the \(\gamma=10\) case, and the distinction of \(\mathbf{En}\) between chaotic and non-chaotic areas is much smaller. This is similar to the results demonstrated for a linear optomechanical system [48], where the explanation is that the memory effect creates information back-flow such that the dissipative dynamics experience temporal revivals. The dissipation and the back-flow from the environment may reach a new balance point, so that the steady-state entanglement (\(t\rightarrow\infty\)) may depend on the memory. The result of this mechanism is the enhancement of entanglement by the environmental memory time in our non-linear optomechanical system. This effect is still localized, however, as not all areas in Fig. 5(b) exhibit higher \(\mathbf{En}\) than in Fig. 5(a). In summary, both the environmental memory and the chaotic dynamics can significantly affect entanglement generation. ## V Conclusion We have analyzed classical chaos in a non-Markovian optomechanical system and measured the corresponding quantum entanglement. We have shown that environmental memory can increase chaos generation in the optomechanical system. For the non-linear optomechanical system, it is shown that the memory effect causes energy back-flow, resulting in the chaotic regions expanding and the pumping threshold for chaos generation being lowered. As for the entanglement, we observed two effects. The first one is that bipartite entanglement is sensitive to chaos and can be enhanced by it. The second effect is that the environmental memory time can increase bipartite entanglement generation. In conclusion, in the non-linear optomechanical system, non-Markovian memory time can be an enhancing factor for classical chaos generation, while chaos and environmental memory can both increase entanglement generation. Figure 8: Bifurcation diagram, maximal Lyapunov exponent (\(\mathbf{LE}\)), comparing with average negativity of the partial transposition (\(\mathbf{En}\)), when \(\gamma=1\). ## Acknowledgement This work is partially supported by the ART020-Quantum Technologies Project. We thank Dr. Mengdi Sun, Kenneth Mui, and Yifan Shi for helpful discussion on optomechanics. ## Appendix A The non-Markovian quantum state diffusion (NMQSD) equation The following section provides a detailed derivation of the non-Markovian master equation. Both environments are set to be at zero temperature and separate from each other. Since the Markov environment is well understood and its Lindblad contribution can be added to the master equation easily, we can focus on the optical bath and use the non-Markovian quantum state diffusion (QSD) equation to derive the master equation. The non-Markovian bath for the optical mode can be described by a set of harmonic oscillators [48] \[H_{B}=\sum_{j}\omega_{j}c_{j}^{\dagger}c_{j}, \tag{10}\] where \(c_{j}\) and \(c_{j}^{\dagger}\) are annihilation and creation operators satisfying the commutation relation \([c_{i},c_{j}^{\dagger}]=\delta_{ij}\). In general, the interaction Hamiltonian between the system and its bosonic bath is described by \[H_{I}=\sum_{j}g_{j}(Lc_{j}^{\dagger}+L^{\dagger}c_{j}), \tag{11}\] where \(L=a\) is the Lindblad operator representing the optical damping and \(g_{j}\) are the system-bath coupling intensities. The interaction Hamiltonian is in the form of a rotating wave approximation, which assumes the condition of weak coupling strength (\(g_{j}\ll\Omega\)).
Assuming the optomechanical system and the environment are not correlated initially, the state of the optomechanical system connected to a non-Markovian bath can be represented by the non-Markovian quantum state diffusion (NMQSD) equation [21; 23] governing the time evolution of a stochastic pure state \(|\psi_{t}(z^{*})\rangle\), which is given by \[\partial_{t}|\psi_{t}(z^{*})\rangle=\bigg{[}-iH_{s}+az_{t}^{*}-a^{\dagger}\overline{O}(t,z^{*})\bigg{]}|\psi_{t}(z^{*})\rangle, \tag{12}\] where \(O(t,s,z^{*})\psi_{t}\equiv\frac{\delta\psi_{t}}{\delta z_{s}}\) and \(\overline{O}(t,z^{*})\equiv\int_{0}^{t}ds\,\alpha(t,s)O(t,s,z^{*})\) with the initial condition \(O(t,s=t,z^{*})=a\). \(\alpha(t,s)=\frac{\kappa\gamma}{2}e^{-\gamma|t-s|}\) is the Ornstein-Uhlenbeck (O-U) correlation function. \(z_{t}^{*}=-i\sum_{j}g_{j}z_{j}^{*}e^{i\omega_{j}t}\) is a colored complex Gaussian process satisfying \[M[z_{t}^{*}z_{s}]=\alpha(t,s),\quad M[z_{t}z_{s}]=0, \tag{13}\] where \(M[\cdot]\equiv\int\frac{d^{2}z}{\pi}e^{-|z|^{2}}[\cdot]\) denotes the ensemble average over the noise \(z_{t}\). To solve (12), one needs to find the operator \(O(t,s,z^{*})\). Under the condition that the memory time is not too long, we consider the expansion of \(O(t,s,z^{*})\) in powers of \((t-s)\) [22] \[O(t,s,z^{*})=O(s,s,z^{*})+\frac{\partial O(t,s,z^{*})}{\partial t}\bigg{|}_{t=s}(t-s)+\cdots, \tag{14}\] which is a systematic expansion of the non-Markovian QSD. The zeroth-order term corresponds to the standard Markov QSD when \(1/\gamma\to 0\). We choose the first-order approximation of the operator \(O(t,s,z^{*})\) since the first-order term is the most important correction to Markovian dynamics, given that the memory time is assumed to be short. At the time point \(t=s\), the expression of the operator \(O(t,s,z^{*})\) is given by [22] \[O(s,s,z^{*}) =a, \tag{15}\] \[\left.\frac{\partial O(t,s,z^{*})}{\partial t}\right|_{t=s} =-i[H_{s},a]-\int_{0}^{s}\alpha(s,u)du[a^{\dagger},a]a,\] where \(H_{s}\) is the system Hamiltonian and \(a\) is the Lindblad operator. At this time point, the operator \(\overline{O}\) becomes \[\overline{O}=f_{0}(t)a-f_{1}(t)i[H_{s},a]-f_{2}(t)[a^{\dagger},a]a, \tag{16}\] where \[f_{0}(t) =\int_{0}^{t}\alpha(t,s)\ ds, \tag{17}\] \[f_{1}(t) =\int_{0}^{t}\alpha(t,s)(t-s)\ ds,\] \[f_{2}(t) =\int_{0}^{t}\int_{0}^{s}\alpha(t,s)\alpha(s,u)(t-s)\ du\ ds.\] One can turn (12) into a master equation by taking the ensemble mean over the noise \(z_{t}\) and introducing the reduced density matrix \(\rho_{t}=M[|\psi_{t}(z^{*})\rangle\langle\psi_{t}(z^{*})|]\). When the environment is not far away from Markov, the dependence of the operator \(\overline{O}(t,z^{*})\) on the noise \(z_{t}\) is negligible. Under this approximation, the master equation [22] takes the form \[\frac{d}{dt}\rho_{t}=-i[H_{s},\rho_{t}]+[a,\rho_{t}\overline{O}(t)^{\dagger}]-[a^{\dagger},\overline{O}(t)\rho_{t}]. \tag{18}\] For further details about the derivation of the master equation, see [24; 28]. Since the mechanical bath is separate and Markov, it can be added to the master equation by simply including the appropriate Lindblad term, which is given by \[\Gamma D\big{[}b,\rho\big{]}=\Gamma\big{\{}b\rho b^{\dagger}-\frac{1}{2}(b^{\dagger}b\rho+\rho b^{\dagger}b)\big{\}}, \tag{19}\] where \(\Gamma/\Omega=10^{-3}\) is the mechanical damping.
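As a side note, the colored noise \(z_{t}\) entering Eq. (12) can be sampled numerically. The following is a minimal Python sketch, assuming an exact discrete-time update for a complex Ornstein-Uhlenbeck process whose correlations match Eq. (13); it is not part of the derivation above.

```python
import numpy as np

def ou_noise(kappa, gamma, dt, n_steps, rng=None):
    # Sample z_t with M[z_t* z_s] = (kappa*gamma/2) exp(-gamma |t-s|) and M[z_t z_s] = 0,
    # using the exact one-step update of a complex Ornstein-Uhlenbeck process.
    rng = np.random.default_rng() if rng is None else rng
    var = 0.5 * kappa * gamma                 # stationary variance M[|z_t|^2]
    decay = np.exp(-gamma * dt)
    kick = np.sqrt(var * (1.0 - decay**2))
    z = np.empty(n_steps, dtype=complex)
    # start from the stationary distribution
    z[0] = np.sqrt(var / 2.0) * (rng.standard_normal() + 1j * rng.standard_normal())
    for n in range(1, n_steps):
        w = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2.0)
        z[n] = decay * z[n - 1] + kick * w
    return z

# Quick sanity check of the equal-time correlation against kappa*gamma/2 = 0.5.
z = ou_noise(kappa=1.0, gamma=1.0, dt=0.01, n_steps=200_000)
print(np.mean(np.abs(z) ** 2))
```

Such trajectories are what the ensemble average \(M[\cdot]\) runs over when the NMQSD equation is simulated directly rather than through the master equation.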
Using the first order approximation of \(\overline{O}(t)\) (16), the master equation takes the final form \[\dot{\rho}=-i[H_{s},\rho]+\Gamma D\big{[}b,\rho\big{]}+\bigg{\{}f_{0}(t)[a,\rho a ^{\dagger}]+if_{1}(t)[a^{\dagger},[H_{s},a]\rho]+f_{2}(t)[a^{\dagger},[a^{\dagger },a]a\rho]+H.C.\bigg{\}}, \tag{10}\] where \(H.C.\) stands for Hermitian conjugate. ## Appendix B The complete set of dynamical equations \[\frac{d}{d\tau}\langle a_{1}\rangle= -i(1+f_{1})\bigg{\{}\langle a_{1}b_{1}\rangle+\langle a_{1}b_{1}^ {\dagger}\rangle-\frac{\Delta}{\Omega}\langle a_{1}\rangle+\frac{1}{2}\bigg{\}} -\frac{f_{0}^{*}+f_{2}}{\Omega}\langle a_{1}\rangle,\] \[\frac{d}{d\tau}\langle b_{1}\rangle= -i\bigg{\{}\frac{P}{2}\langle a_{1}^{\dagger}a_{1}\rangle+ \langle b_{1}\rangle\bigg{\}}-\frac{\Gamma}{2\Omega}\langle b_{1}\rangle,\] \[\frac{d}{d\tau}\langle a_{1}^{\dagger}a_{1}\rangle= -\frac{i}{2}(\langle a_{1}^{\dagger}\rangle-\langle a_{1} \rangle)-\frac{1}{\Omega}(f_{0}+f_{0}^{*}+f_{2}+f_{2}^{*})\langle a_{1}^{ \dagger}a_{1}\rangle\] \[+i(f_{1}^{*}-f_{1})(-\frac{\Delta}{\Omega}+\langle b_{1}^{ \dagger}\rangle+\langle b_{1}\rangle)\langle a_{1}^{\dagger}a_{1}\rangle+ \frac{i}{2}(f_{1}^{*}\langle a_{1}\rangle-f_{1}\langle a_{1}^{\dagger} \rangle),\] \[\frac{d}{d\tau}\langle b_{1}^{\dagger}b_{1}\rangle= -i\frac{P}{2}\langle a_{1}^{\dagger}a_{1}\rangle(\langle b_{1}^ {\dagger}\rangle-\langle b_{1}\rangle)-\frac{\Gamma}{\Omega}\langle b_{1}^{ \dagger}b_{1}\rangle,\] \[\frac{d}{d\tau}\langle a_{1}^{2}\rangle= -2i(1+f_{1})\bigg{\{}(-\frac{\Delta}{\Omega}+\langle b_{1}^{ \dagger}\rangle+\langle b_{1}\rangle)\langle a_{1}^{2}\rangle+\frac{\langle a _{1}\rangle}{2}\bigg{\}}-\frac{2}{\Omega}(f_{0}^{*}+f_{2})\langle a_{1}^{2}\rangle, \tag{11}\] \[\frac{d}{d\tau}\langle b_{1}^{2}\rangle= -2i\bigg{\{}\frac{P}{2}\langle a_{1}^{\dagger}a_{1}\rangle\langle b _{1}\rangle+\langle b_{1}^{2}\rangle\bigg{\}}-\frac{\Gamma}{\Omega}\langle b _{1}^{2}\rangle,\] \[\frac{d}{d\tau}\langle a_{1}b_{1}\rangle= -i(1+f_{1})\bigg{\{}(\langle b_{1}^{2}\rangle+\langle b_{1}^{ \dagger}b_{1}\rangle)a_{1}-\frac{\Delta}{\Omega}\langle a_{1}b_{1}\rangle+ \frac{1}{2}\langle b_{1}\rangle\bigg{\}}\] \[-i\bigg{\{}\langle a_{1}b_{1}\rangle+\frac{P}{2}\langle a_{1}^{ \dagger}a_{1}\rangle\langle a_{1}\rangle+(\frac{g_{0}}{\Omega})^{2}\langle a_ {1}\rangle\bigg{\}}-\frac{1}{\Omega}(f_{0}^{*}+f_{2})\langle a_{1}b_{1}\rangle,\] \[\frac{d}{d\tau}\langle a_{1}b_{1}^{\dagger}\rangle= -i(1+f_{1})\bigg{\{}(\langle b_{1}^{\dagger 2}\rangle+\langle b_{1}^{ \dagger}b_{1}\rangle)a_{1}+(\frac{g_{0}}{\Omega})^{2}\langle a_{1}\rangle- \frac{\Delta}{\Omega}\langle a_{1}b_{1}^{\dagger}\rangle+\frac{1}{2}\langle b _{1}^{\dagger}\rangle\bigg{\}}\] \[+i\bigg{\{}\langle a_{1}b_{1}^{\dagger}\rangle+\frac{P}{2}\langle a _{1}^{\dagger}a_{1}\rangle\langle a_{1}\rangle+(\frac{g_{0}}{\Omega})^{2} \langle a_{1}\rangle\bigg{\}}-\frac{1}{\Omega}(f_{0}^{*}+f_{2})\langle a_{1}b_ {1}^{\dagger}\rangle,\] where we have used the relations (in terms of \(a\) and \(b\)) \[\begin{array}{l}\langle a^{\dagger}\rangle=\langle a\rangle^{*},\ \langle b^{\dagger} \rangle=\langle b\rangle^{*},\ \langle aa^{\dagger}\rangle=\langle a^{\dagger}a\rangle+1,\\ \langle bb^{\dagger}\rangle=\langle b^{\dagger}b\rangle+1,\ \langle a^{\dagger 2} \rangle=\langle a^{2}\rangle^{*},\ \langle b^{\dagger 2}\rangle=\langle b^{2} \rangle^{*},\\ \langle a^{\dagger}b^{\dagger}\rangle=\langle ab\rangle^{*},\ \langle a^{\dagger}b \rangle=\langle ab^{\dagger}\rangle^{*}.\end{array} \tag{12}\]
2309.05104
Strategic Deployment of Swarm of UAVs for Secure IoT Networks
Security provisioning for low-complex and constrained devices in the Internet of Things (IoT) is exacerbating the concerns for the design of future wireless networks. To unveil the full potential of the sixth generation (6G), it is becoming even more evident that security measurements should be considered at all layers of the network. This work aims to contribute in this direction by investigating the employment of unmanned aerial vehicles (UAVs) for providing secure transmissions in ground IoT networks. Toward this purpose, it is considered that a set of UAVs acting as aerial base stations provide secure connectivity between the network and multiple ground nodes. Then, the association of IoT nodes, the 3D positioning of the UAVs and the power allocation of the UAVs are obtained by leveraging game theoretic and convex optimization-based tools with the goal of improving the secrecy of the system. It is shown that the proposed framework obtains better and more efficient secrecy performance over an IoT network than state-of-the-art greedy algorithms for positioning and association.
Xavier Alejandro Flores Cabezas, Diana Pamela Moya Osorio
2023-09-10T18:22:54Z
http://arxiv.org/abs/2309.05104v1
# Strategic Deployment of Swarm of UAVs for Secure IoT Networks ###### Abstract Security provisioning for low-complex and constrained devices in the Internet of Things (IoT) is exacerbating the concerns for the design of future wireless networks. To unveil the full potential of the sixth generation (6G), it is becoming even more evident that security measurements should be considered at all layers of the network. This work aims to contribute in this direction by investigating the employment of unmanned aerial vehicles (UAVs) for providing secure transmissions in ground IoT networks. Toward this purpose, it is considered that a set of UAVs acting as aerial base stations provide secure connectivity between the network and multiple ground nodes. Then, the association of IoT nodes, the 3D positioning of the UAVs and the power allocation of the UAVs are obtained by leveraging game theoretic and convex optimization-based tools with the goal of improving the secrecy of the system. It is shown that the proposed framework obtains better and more efficient secrecy performance over an IoT network than state-of-the-art greedy algorithms for positioning and association. 3D position control, IoT, node association, physical layer security, unmanned aerial vehicle. ## I Introduction The fifth generation of wireless networks (5G) is envisioned to bring about ubiquitous connectivity. Looking forward, beyond 5G, great advancements have been envisioned for the sixth generation of wireless networks (6G), which promises ubiquitous intelligence [1]. Toward that, many low-complexity wireless devices would be part of densely populated, decentralized networks, where absolutely everything is connected in massive deployments of Internet of Things (IoT) networks, with applications in very different sectors, such as industry, defense, healthcare, and intelligent transportation systems, to name a few [2]. In such dense, heterogeneous networks, very sensitive information is transmitted over a shared medium, thus security and privacy issues become critical, and they cannot be handled independently of other parameters, e.g., energy consumption or latency [1]. While traditional cryptographic approaches have developed into trusted solutions for preserving security in communications, the limitations and constraints of IoT devices and sensors, and the advancements in quantum computing, render these approaches unfeasible or unreliable [2]. On the other hand, physical layer security (PLS) techniques, which explore the inherent properties of the noisy and random wireless channels to provide security to communications, have emerged as a promising and attractive security solution. Some well-known PLS techniques include artificial noise injection through friendly jamming, spatial diversity, beamforming design and relaying [1, 3, 4]. These techniques aim at designing the physical layer to provide an advantage to the legitimate link over the eavesdropping link with no assumption on the computing power of the attacker, thus providing information-theoretic security guarantees. From another perspective, it is recognized that unmanned aerial vehicles (UAVs) will play an important role in IoT applications, especially to provide connectivity in remote areas, disaster zones, and harsh environments [5, 6].
Thanks to their flexible deployment, capability of providing strong line-of-sight (LoS) connectivity, and ease of maneuverability, UAVs open a range of novel opportunities for wireless networks but, at the same time, novel threat vectors should also be considered [4]. Noting these advantageous properties, UAVs can also be exploited for the design of PLS techniques to safeguard UAV-assisted communications. For instance, the challenges and opportunities for preventing passive and active attacks in wireless networks have been recently discussed in [3]. Particularly, the introduction of UAV nodes acting as friendly jammers in order to improve the secrecy performance of wireless networks has recently raised special attention [7]. All in all, the integration of UAVs into the provisioning of security through PLS techniques provides novel opportunities for safeguarding 6G networks. Importantly, the use of learning methods would allow the UAVs not only to remain autonomous, but also to adapt to the complexity of PLS security provisioning under dynamic channels and complex IoT scenarios, which is the main focus of this work. ### _Related Work_ Recently, the flexibility of UAVs has attracted attention for secure transmissions in wireless networks [8, 9, 10, 11, 12, 13, 14]. In particular, UAVs have been employed as friendly jammers to assist a legitimate transmission by introducing artificial noise in order to prevent leakage of information to possible eavesdroppers in the network [7, 15, 16, 17, 18, 19, 20, 21, 22, 23]. In [7], the optimal three-dimensional (3D) deployment and jamming power of a UAV-based jammer are investigated to improve the secrecy performance of a wireless network in terms of the outage probability and the intercept probability, by defining area-based metrics that ensure a given intercept probability threshold within a certain area. In [15], a UAV friendly jammer scheme is introduced to enhance the secrecy rate of a wireless system, where the problem of trajectory optimization is investigated. In [16], a joint jamming scheme between the legitimate UAVs serving as multi-access edge computing (MEC) servers and the ground nodes is proposed to safeguard the legitimate transmission against malicious UAVs. Therein, the minimum secrecy capacity among system users is maximized by jointly optimizing the position, jamming power, and the computing capacity of the legitimate UAV, as well as the offloading rate of the users to the UAV, the transmit power of the users, and the offloading user association. Therein, it was demonstrated that the max-min secrecy capacity is improved over the benchmarks, especially for low offloading requirements, although a trade-off between security and latency exists. In [17], the secrecy outage probability (SOP) of a UAV-based millimeter wave (mmWave) relay network in the presence of multiple eavesdroppers is investigated, where the scenarios with and without cooperative jamming were contrasted. In [18], the existence of an optimal UAV jammer location in a network with multiple eavesdroppers was proven, and the impact of the density of eavesdroppers, the transmission power of the UAV jammer, and the density of UAV jammers on the optimal location was investigated. In [19], two area-based secrecy metrics, the jamming coverage (JC) and the jamming efficiency (JE), were proposed to evaluate the impact of jamming for secure wireless communications based on the SOP over an area, without knowledge of the position of the eavesdropper.
Later, in [20], this idea was extended by introducing a hybrid secrecy metric, the so-called weighted secrecy coverage (WSC), that considers both coverage and efficiency of friendly jamming, simultaneously, in the context of UAV-based friendly jamming. Therein, the positioning of the UAV jammers to maximize the WSC is tackled. Further, in [21], a null-space precoding scheme is employed to eliminate the interference at the legitimate receiver. Under that scheme, a better performance was obtained in terms of the WSC. Further, in [22] and [23], the previous scenario was extended to include the 3D movement of the UAVs and the movement of the legitimate ground user, respectively. These works consider the formulation of the problem of adaptive position control of the UAVs as a multi-armed bandit, and the results presented significant improvements in the secrecy of the system in terms of the WSC. In [24], a system is considered where a UAV serves a group of ground users via non-orthogonal multiple access (NOMA), while sending artificial noise to disrupt a passive eavesdropper in the system. The total jamming power and the rate at each user are maximized by optimizing the UAV trajectory, the power allocation, and the user scheduling. Such a scheme was proven to outperform orthogonal multiple access schemes as well as non-jamming schemes in terms of the system sum-rate and of the eavesdropper data-rate. In recent years, the use of machine learning techniques has been increasingly considered to optimize the deployment of UAVs in wireless networks [25, 26, 27, 28, 29]. For instance, a novel federated learning-based framework for the distributed joint power allocation and scheduling of a swarm of UAVs was proposed in [25]. The proposed framework significantly improves the convergence time over two baseline methods, namely optimized power-randomized scheduling and randomized power-optimized scheduling. In [26], an actor-critic deep reinforcement learning (RL) approach is proposed to find the optimal trajectory design and power allocation in UAV-assisted cellular networks, which achieves better network performance in terms of the average sum-rate of the system. In [27], game theory and RL are used to enhance the data offloading from UAVs to MEC servers in an IoT scenario. Therein, it was proven that the proposed methods converge to a Nash equilibrium of average offloaded data, whereas the RL approach ensures the convergence without exchange of information between UAVs. In [28], a deep Q-Learning-based scheduling approach is used to minimize the packet loss of IoT nodes in UAV-assisted wireless powered-IoT networks. The deep Q-Learning algorithm performs IoT node and modulation scheme selection for IoT nodes that wish to send information and wirelessly receive power from the UAVs. It was shown that the deep Q-Learning approach obtains much lower packet loss than greedy or random scheduling approaches. In [29], the binary log-learning (BLLL) and greedy algorithms are proposed to maximize the total sum rate of the users throughout the network by optimizing the user-UAV association and UAV position control in a UAV-assisted network. Therein, it was shown that greedy algorithms for UAV position control and user-UAV association are sub-optimal and obtain a lower sum-rate than BLLL. However, BLLL presents an exponential convergence time; thus, the greedy algorithms are preferable in this respect.
Also, a deep Q-Network-based power allocation strategy was proposed in [30] to improve the secrecy rate of a legitimate communication between a UAV and a mobile user in the presence of a malicious mobile user and UAV. Therein, it is assumed that the attackers can choose between eavesdropping, spoofing and jamming attacks, and the results proved to overcome benchmarks based on Q-Learning and a win-or-learn-fast policy hill-climbing (WoLF-PHC) approach. More recently, the optimization of the sum secrecy rate of a system with a single UAV acting as an aerial base station (ABS), which serves a group of ground nodes in the presence of UAVs acting as adaptive eavesdroppers or jammers, was proposed in [31]. Therein, a Stackelberg game was formulated considering two strategies, the ABS positioning to increase the sum secrecy rate of the system as the leader, and the cooperative attack of the adaptive eavesdroppers as the follower. Then, a spatial adaptive play learning algorithm is utilized to reach the equilibrium, which is shown to obtain a better sum secrecy rate than a random or ring deployment of the ABS. ### _Main Contributions_ To contribute to the state-of-the-art, this work considers the association, power allocation, and position control of UAVs serving as ABSs to a set of ground IoT nodes through frequency division multiple access (FDMA), focusing on the secrecy performance of the system. Different from the approach in [29], in this work the sum secrecy rate of the network is considered as the utility function, and the power allocation per node is also investigated. Moreover, different from the works in [16, 30, 31], inactive nodes in the system are treated as potential eavesdroppers, thus presenting a relatively high density of eavesdroppers in the system. For the user-UAV association and UAV positioning, the synchronous log-linear learning (SLLL) formulation is considered, which offers faster convergence. All in all, the main contributions of this paper are three-fold: 1. A three-stage block-coordinate ascent (BCA) framework is proposed where node association, UAV 3D position control, and power allocation are the blocks that are optimized iteratively while keeping the other blocks fixed, in order to increase the sum secrecy rate and the number of nodes with positive secrecy in the proposed network. 2. Game-theoretic algorithms are proposed for node association and UAV position control to improve the secrecy capacity of the system. 3. A convex optimization-based power allocation technique is developed to increase the minimum secrecy rate of IoT nodes that can achieve secrecy, while guaranteeing a level of service to all IoT nodes. ## II System Model Consider the system illustrated in Fig. 1, which consists of a set of \(N\) IoT devices that are distributed following a uniform binomial point process over a rectangular region of dimensions \(\Delta x=x_{\max}-x_{\min}\) and \(\Delta y=y_{\max}-y_{\min}\), with the two-dimensional position of the \(n\)-th IoT device (which can be a legitimate node or an eavesdropper) denoted by \(\mathbf{x}_{n}=(x_{n},y_{n})\). To provide connectivity to the IoT devices, a swarm of \(M\) single-antenna UAVs, acting as ABSs, is deployed over the region of interest. These UAVs can move in three dimensions over the rectangular region, within an altitude range \(\Delta z=z_{\max}-z_{\min}\).
In this system, it is considered that, for a certain transmission process, only a fraction of the IoT devices (randomly and independently selected according to a Bernoulli distribution of parameter \(q\)) are set in receiving mode (legitimate nodes), while the rest are overhearing the channel, thus being considered as potential eavesdroppers. In this system, downlink transmissions from the UAVs to the IoT devices are based on frequency division multiple access (FDMA). Assuming that all UAVs have the same limited amount of bandwidth \(BW\), each one divides its total bandwidth into \(C\) orthogonal sub-channels of bandwidth \(B=BW/C\). Additionally, let \(\mathcal{N}\) be the set of ground nodes, while \(\mathcal{L}\) and \(\mathcal{E}\) are the sets of legitimate nodes and eavesdroppers, such that \(|\mathcal{N}|=N\), \(|\mathcal{L}|=L\) and \(|\mathcal{E}|=E\), respectively. Additionally, \(\mathcal{M}\) is the set of UAVs, such that \(|\mathcal{M}|=M\), and \(\mathcal{C}\) is the set of sub-channels available at each UAV, with \(|\mathcal{C}|=C\). For simplicity purposes, the described sets are treated as their respective sets of indices as well. Accordingly, each UAV can associate with up to \(C\) ground nodes, with the power allocated by UAV \(m\in\mathcal{M}\) to the sub-channel \(c\in\mathcal{C}\) denoted as \(p_{m}^{c}\), and the total power budget at each UAV is \(P\). Then, the power allocation vector at UAV \(m\) is given by \(\mathbf{p}_{m}=(p_{m}^{1},...,p_{m}^{C})^{T}\) and the power allocation matrix of the whole system is given by \(\mathbf{P}\in\mathbb{R}^{C\times M}\) with \(\mathbf{P}=[\mathbf{p}_{1},...,\mathbf{p}_{M}]\). Let \(\mathbf{A}\in\mathbb{R}^{L\times M\times C}\) be the association array with elements \(a_{l,m,c}\in\{0,1\}\), where \(a_{l,m,c}=1\) if node \(l\) is associated to UAV \(m\) through sub-channel \(c\), and \(0\) otherwise. Given that, at any time, a certain sub-channel is either available or assigned to a single node, and that each legitimate node is associated with at most one sub-channel, it holds that \[\sum\limits_{l\in\mathcal{L}}a_{l,m,c}\leq 1\quad\forall m\in\mathcal{M},\forall c\in\mathcal{C}, \tag{1}\] \[\sum\limits_{m\in\mathcal{M}}\sum\limits_{c\in\mathcal{C}}a_{l,m,c}\leq 1\quad\forall l\in\mathcal{L}. \tag{2}\] The air-to-ground (A2G) channel between UAV \(m\), at altitude \(z_{m}\), and a ground node \(l\) is modeled as in [7], with the probabilities of LoS and NLoS connection, \(P_{\text{LoS}}\) and \(P_{\text{NLoS}}\), being respectively given by [7] \[P_{\text{LoS}}=\frac{1}{1+\psi\exp\left(-\omega\left[\frac{180}{\pi}\tan^{-1}\left(\frac{z_{m}}{r_{m,l}}\right)-\psi\right]\right)} \tag{3}\] and \(P_{\text{NLoS}}=1-P_{\text{LoS}}\), with \(\psi\) and \(\omega\) being environmental constants [32, 33], and \(r_{m,l}\) is the distance from node \(l\) to the projection of UAV \(m\) on the ground. Then, the average pathloss of the links is given by \[L_{m,l}=\left(z_{m}^{2}+r_{m,l}^{2}\right)^{\frac{\alpha_{J}}{2}}\left(P_{\text{LoS}}\eta_{\text{LoS}}+P_{\text{NLoS}}\eta_{\text{NLoS}}\right), \tag{4}\] where \(\alpha_{J}\) is the pathloss exponent for the A2G links, and \(\eta_{\text{LoS}}\) and \(\eta_{\text{NLoS}}\) are the attenuation factors for the LoS and the NLoS links, respectively. Also, the A2G channel response \(h_{m,l}\) and channel gain \(g_{m,l}\) are given by \(h_{m,l}=(\sqrt{L_{m,l}})^{-1}\) and \(g_{m,l}=|h_{m,l}|^{2}\), respectively.
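For illustration, the A2G model of Eqs. (3)-(4) can be evaluated as in the short Python sketch below. The environmental constants and attenuation factors used here are placeholder values, not the ones adopted later in the paper.

```python
import numpy as np

def a2g_channel(z_m, r_ml, psi=9.61, omega=0.16,
                alpha_j=2.0, eta_los=1.0, eta_nlos=20.0):
    # Average path loss L_{m,l} (Eq. 4) and channel gain g_{m,l} = |h_{m,l}|^2.
    # z_m: UAV altitude [m]; r_ml: ground distance from node l to the UAV projection [m].
    elevation_deg = np.degrees(np.arctan2(z_m, r_ml))
    p_los = 1.0 / (1.0 + psi * np.exp(-omega * (elevation_deg - psi)))   # Eq. (3)
    p_nlos = 1.0 - p_los
    path_loss = (z_m**2 + r_ml**2) ** (alpha_j / 2.0) * (p_los * eta_los + p_nlos * eta_nlos)  # Eq. (4)
    gain = 1.0 / path_loss   # g_{m,l} = |h_{m,l}|^2 with h_{m,l} = 1 / sqrt(L_{m,l})
    return path_loss, gain

L, g = a2g_channel(z_m=60.0, r_ml=100.0)
print(f"path loss = {L:.1f}, channel gain = {g:.3e}")
```

Raising the UAV altitude increases the LoS probability but also the distance-dependent term, which hints at the trade-off that the 3D positioning of the UAVs must balance.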
Let \(s_{m}^{c}\) be the unit-power symbol sent by UAV \(m\) to node \(l\) through its sub-channel \(c\) with power \(p_{m}^{c}\). Then, the received signal \(y_{l}^{c}\) at node \(l\) is given by \[y_{l}^{c}=h_{m,l}\sqrt{p_{m}^{c}}s_{m}^{c}+\sum\limits_{\begin{subarray}{c}k\in\mathcal{M}\\ k\neq m\end{subarray}}h_{k,l}\sqrt{p_{k}^{c}}s_{k}^{c}+w, \tag{5}\] where \(w\) is the additive white Gaussian noise (AWGN) of power \(N_{0}\). Then, the received signal-to-interference-plus-noise ratio (SINR) at node \(l\) from UAV \(m\) through channel \(c\) is given by \[\gamma_{m,l}^{c}=\frac{a_{l,m,c}\gamma_{m}^{c}g_{m,l}}{\sum\limits_{\begin{subarray}{c}k\in\mathcal{M}\\ k\neq m\end{subarray}}\gamma_{k}^{c}g_{k,l}+1}, \tag{6}\] where \(\gamma_{m}^{c}=\frac{p_{m}^{c}}{N_{0}}\) is the transmit SNR at UAV \(m\) in sub-channel \(c\). Fig. 1: System model. Furthermore, no cooperation is considered among eavesdroppers, i.e., they are non-colluding, thus the eavesdropping risk is dominated by the eavesdropper with the strongest received SINR, given by \[\gamma_{m,e*}^{c} =\frac{\gamma_{m}^{c}g_{m,e*}}{\sum\limits_{\begin{subarray}{c}k\in\mathcal{M}\\ k\neq m\end{subarray}}\gamma_{k}^{c}g_{k,e*}+1}, \tag{7}\] \[e* =\operatorname*{argmax}_{e\in\mathcal{E}}\left\{\frac{\gamma_{m}^{c}g_{m,e}}{\sum\limits_{\begin{subarray}{c}k\in\mathcal{M}\\ k\neq m\end{subarray}}\gamma_{k}^{c}g_{k,e}+1}\right\}. \tag{8}\] For ease of notation, \(\gamma_{m,l}^{c}\) will be written as \(\gamma_{l}\) when \(a_{l,m,c}=1\), and its corresponding \(\gamma_{m,e*}^{c}\) will be written as \(\gamma_{e*}\). The secrecy capacity \(C_{S}\) of the wiretap channel [34], which is the maximum achievable secrecy rate for a wiretap channel, is defined as \(C_{S}=\left[C_{\mathrm{M}}-C_{\mathrm{W}}\right]^{+}\) [35] with \([X]^{+}=\max[X,0]\). Here \(C_{\mathrm{M}}\) is the main channel capacity between the legitimate receiver and the legitimate transmitter, and \(C_{\mathrm{W}}\) is the wiretap channel capacity between the eavesdropper and the legitimate transmitter. Then, the secrecy capacity for the downlink communication of the corresponding UAV to node \(l\), considering Gaussian channels, is given as \[C_{S}=\left[\log_{2}\left(\frac{1+\gamma_{l}}{1+\gamma_{e*}}\right)\right]^{+}. \tag{9}\] ## III Sum Secrecy Rate Maximization In this section, the optimal node association, the 3D deployment of the UAVs, and the power allocation are obtained to maximize the downlink sum secrecy rate of the ground IoT nodes. Considering that the achievable secrecy rate for node \(l\) is given by (9), the optimization problem can be formulated as \[\textbf{P}:\quad\max_{\mathbf{A},\,\{(x_{m},y_{m},z_{m})\},\,\mathbf{P}}\quad\sum_{l\in\mathcal{L}}\log_{2}\left(\frac{1+\gamma_{l}}{1+\gamma_{e*}}\right)\] (10a) s.t.
## III Sum Secrecy Rate Maximization In this section, the optimal node association, the 3D deployment of the UAVs, and the power allocation are obtained to maximize the downlink sum secrecy rate of the ground IoT nodes. Considering that the achievable secrecy rate for node \(l\) is given by (9), the optimization problem can be formulated as \[\textbf{P}: \max_{\textbf{A},\{\textbf{x}_{m}\}_{m\in\mathcal{M}},\textbf{P}} \sum_{l\in\mathcal{L}}\log_{2}\left(\frac{1+\gamma_{l}}{1+\gamma_{e*}}\right)\] (10a) s.t. the association constraints (1) and (2), the placement constraints restricting each UAV's horizontal position to the region of interest and its altitude to the allowed altitude range (referred to as (10c)-(10e) in the following), and the per-UAV power budget \(\sum_{c\in\mathcal{C}}p_{m}^{c}\leq P\), \(\forall m\in\mathcal{M}\) (referred to as (10f)). Problem **P** is non-convex and couples the binary association variables with the continuous position and power variables, so it is addressed in three stages following a block coordinate ascent (BCA) approach: the association of the legitimate nodes, the 3D positioning of the UAVs, and the power allocation at each UAV. ### _Node Association_ In the first stage, the association of the legitimate nodes to the available UAV sub-channels is formulated as a game in which the players are the legitimate nodes \(l\in\mathcal{L}\), the actions are the available resources \(r_{l}=(m,c)\in\mathcal{M}\times\mathcal{C}\), and the payoff \(f_{l}(r_{l})=\phi_{l}(r_{l})\) is the secrecy metric achieved by node \(l\) on its chosen resource, where \(I_{m,l}^{c}=\sum_{k\in\mathcal{M},k\neq m}\gamma_{k}^{c}g_{k,l}\) denotes the aggregate interference term appearing in (6) (and \(I_{m,e}^{c}\) its counterpart at eavesdropper \(e\)). The overall utility of the system is \(F(\mathbf{r})=\sum_{n\in\mathcal{L}}f_{n}(r_{n})\), and the payoff of each node depends only on its own choice of resource;
thus \(f_{n}(r_{n})\) remains constant under a change of strategy of node \(l\neq n\), and then \[F(r_{l}^{\prime},\mathbf{r}_{-l})-F(r_{l},\mathbf{r}_{-l})\] \[=\left(\sum_{\begin{subarray}{c}n\in\mathcal{L}\\ n\neq l\end{subarray}}f_{n}(r_{n})+f_{l}(r_{l}^{\prime})\right)-\left(\sum_{ \begin{subarray}{c}n\in\mathcal{L}\\ n\neq l\end{subarray}}f_{n}(r_{n})+f_{l}(r_{l})\right)\] \[=f_{l}(r_{l}^{\prime})-f_{l}(r_{l}). \tag{15}\] This indicates that this is a **potential game** with the potential function being the overall utility of the system \(F(\cdot)\). Therefore, the best response dynamics can be used to reach a pure Nash equilibrium. Furthermore, given that every node can be considered an independent entity, the overall game is a simultaneous move game, where every node chooses its next strategy independently. Under these considerations, two conflicts may arise. Particularly, it is possible for more than one node to choose the same resource at a certain moment, and it is also possible for a node to choose an already occupied resource at a certain moment. To address these conflicts, a protocol to be followed by each UAV is proposed. For the first conflict, UAVs will be programmed to allocate the resource to the contending node with the highest \(\phi_{l}\), and if there are two or more nodes with the same value of \(\phi_{l}\), the UAV will associate with one of them arbitrarily. To address the second conflict, nodes are only allowed to choose resources that are not currently occupied. It can be seen as the UAVs advertising only their available sub-channels to the legitimate nodes. Apart from best response dynamics, a potential game is guaranteed to reach a pure Nash equilibrium under a synchronous log-linear learning (SLLL) algorithm [36], which is described next. #### Iii-A2 Synchronous Log-Linear Learning In this algorithm, the payoff of an action is measured as the gain with respect to the current action (marginal payoff), which is given by \[f_{l}(r_{l}^{\prime})=\phi_{l}(r_{l}^{\prime})-\phi_{l}(r_{l}). \tag{16}\] Therefore, the gain in payoff obtained by remaining in the current strategy is 0 and the potential game modeling holds. The SLLL algorithm is considered for the potential game with (16) as the payoff function. Under the SLLL algorithm, a legitimate node chooses an action from its available actions following the smooth best response (SBR) mixed strategy [37] given by \[\pi_{l}(r_{l})=\frac{e^{f_{l}(r_{l})}}{\sum\limits_{x_{l}\in\mathcal{A}_{l}}e ^{f_{l}(x_{l})}}. \tag{17}\] After each legitimate node has chosen an action, if two or more nodes choose the same resource, UAVs apply the protocol to solve conflicts, then all the legitimate nodes choose their next strategy.
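The sketch below illustrates one synchronous round of the association game just described: every node evaluates its marginal payoffs (16) over the resources advertised as free, samples an action with the smooth best response distribution (17), and contested resources are granted to the contending node with the largest payoff, as in the conflict-resolution protocol. The payoff values \(\phi_{l}\) are random placeholders; this is a conceptual sketch rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, num_uavs, num_channels = 6, 2, 3
phi = rng.uniform(0.0, 3.0, size=(num_nodes, num_uavs, num_channels))  # toy secrecy metrics phi_l(m, c)
assignment = {l: None for l in range(num_nodes)}   # current strategy r_l of each node
current_payoff = np.zeros(num_nodes)               # phi_l(r_l); 0 while unassociated

def slll_round():
    occupied = {r for r in assignment.values() if r is not None}
    proposals = {}
    for l in range(num_nodes):
        free = [(m, c) for m in range(num_uavs) for c in range(num_channels)
                if (m, c) not in occupied]
        gains = np.array([phi[l, m, c] - current_payoff[l] for m, c in free])  # Eq. (16)
        positive = gains > 0
        if not np.any(positive):
            continue                                    # no profitable move for this node
        options = [r for r, keep in zip(free, positive) if keep]
        probs = np.exp(gains[positive])
        probs /= probs.sum()                            # smooth best response, Eq. (17)
        choice = options[rng.choice(len(options), p=probs)]
        proposals.setdefault(choice, []).append(l)
    for (m, c), contenders in proposals.items():        # conflict-resolution protocol
        winner = max(contenders, key=lambda l: phi[l, m, c])
        assignment[winner] = (m, c)
        current_payoff[winner] = phi[winner, m, c]

for _ in range(10):
    slll_round()
print(assignment)
```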
This process goes on until no legitimate node has an available strategy left, i.e., until no node has an incentive to change its strategy (each node is already playing its best response), which constitutes a pure Nash equilibrium. Algorithm 1 describes this procedure. ``` 1 counter \(\leftarrow\) 0; 2 while counter \(<n\_iter\) do 3 conv\({}_{\text{flag}}\leftarrow\) 1 ; 4 \(\text{x}\leftarrow(-1)\cdot\mathbf{1}_{L}\); 5 for \(l\in\mathcal{L}\) do 6 \(\mathcal{A}_{l}\leftarrow\{r=(m,c),\ s.t.\ (m,c)\in\mathcal{M}\times\mathcal{C}\}\); 7 \(\mathcal{A}_{l}\leftarrow\mathcal{A}_{l}\backslash\{r=(m,c),\ s.t.\ \sum_{n\in\mathcal{L}}a_{n,m,c}>0\}\); 8 \(f_{l}(r)\leftarrow\) compute as in (16) \(\forall r\in\mathcal{A}_{l}\); 9 \(\mathcal{A}_{l}\leftarrow\mathcal{A}_{l}\backslash\{r=(m,c)\ s.t.\ f_{l}(r)\leq 0\}\) ; 10 if \(\mathcal{A}_{l}\neq\emptyset\) then 11 \(\Pr\left[X_{l}=r\right]\leftarrow\) compute as in (17) \(\forall r\in\mathcal{A}_{l}\); 12 \(x_{l}\leftarrow\) choose from \(r\in\mathcal{A}_{l}\) according to \(\Pr\left[X_{l}=r\right]\); 13 \(\text{x}[l]\leftarrow x_{l}\); 14 conv\({}_{\text{flag}}\leftarrow\) 0 ; 15 end if 16 end for 17 if conv\({}_{\text{flag}}==1\) then 18 Stop the association process; 19 end if 20 for \(m\in\mathcal{M}\) do 21 for \(c\in\{c\in\mathcal{C}\ s.t.\ \sum_{n\in\mathcal{L}}a_{n,m,c}=0\}\) do 22 \(\mathcal{N}_{m,c}\leftarrow\{l\in\mathcal{L}\ s.t.\ \text{x}[l]=(m,c)\}\); 23 if \(\left|\mathcal{N}_{m,c}\right|>0\) then 24 \(f_{l}(m,c)\leftarrow\) compute as in (16) \(\forall l\in\mathcal{N}_{m,c}\); 25 \(f_{l,\max}\leftarrow\max_{l\in\mathcal{N}_{m,c}}f_{l}(m,c)\) ; 26 \(\mathcal{N}_{m,c,\max}\leftarrow\{l\in\mathcal{N}_{m,c}\ s.t.\ f_{l}(m,c)=f_{l,\max}\}\); 27 \(l^{*}\leftarrow\) choose from \(l\in\mathcal{N}_{m,c,\max}\) randomly; 28 \((m_{\text{prev}},c_{\text{prev}})\leftarrow(m,c)\ s.t.\ a_{l^{*},m,c}=1\); 29 \(a_{l^{*},m_{\text{prev}},c_{\text{prev}}}\leftarrow 0\); 30 \(a_{l^{*},m,c}\gets 1\); 31 end if 32 end for 33 end for 34 counter \(\leftarrow\) counter + 1; 35 end while ``` **Algorithm 1** SLLL for node association algorithm ### _UAV Position Control_ The second stage in the framework consists of the 3D positioning of the UAVs within region \(S\) based on the sum secrecy rate obtained by each UAV, where \(\mathcal{L}_{m}\) denotes the set of legitimate nodes associated with UAV \(m\). For the UAV positioning, the following optimization sub-problem is formulated \[\mathbf{P2}: \max_{\mathbf{A},\{\mathbf{x}_{m}\}_{m\in\mathcal{M}}} \Phi=\sum_{l\in\mathcal{L}}\phi_{l}\] (18a) s.t. (10c), (10d), (10e). The positioning of the UAVs, unlike the association of the nodes, is performed over a continuous domain which is the entire region, with a continuous altitude range, for all of the UAVs. Heuristic methods have been shown to work well over continuous spaces, such as particle swarm optimization [38] and genetic algorithms [39]. However, these methods require increased complexity, continuous coordination between the agents, and longer convergence time. While the outcomes from these continuous-domain algorithms are close to optimum values, discrete-domain algorithms may provide simpler and satisfactory solutions, which is beneficial when considering resource-limited IoT nodes.
Thus, a two-stage positioning protocol is proposed, where a global 2D \(M\)-centroid clustering is solved as the first stage, then an individual altitude selection is performed over the altitude range \(\Delta z\) discretized over \(N_{z}\) altitude levels. The set of discretized altitude levels is denoted as \(\mathcal{Z}\), with \(|\mathcal{Z}|=N_{z}\). The two stages of this protocol are described in the following. #### Iii-B1 2D Clustering For the 2D positioning, we aim at finding the 2D points with the highest concentration of legitimate nodes, or barycenters of the concentrations of nodes, which will favor the best secrecy coverage. For this purpose, the unsupervised learning algorithm k-means clustering [40] is applied, which returns the centroids of the clusters (points in the area) and the members of each cluster. A diagram of this algorithm can be seen in Fig. 2. The k-means algorithm requires the knowledge of the positions of the legitimate nodes of the system. Then, the algorithm is run at some central unit (one of the UAVs) only once for the real positions of the nodes. #### Iii-B2 Best Response Dynamics Once the UAV 2D positioning is solved, the UAV altitude selection problem can be formulated as a game consisting of * **Players:** UAVs \(m\in\mathcal{M}\). * **Actions:** discrete altitude levels \(r_{m}=z_{m}\in\mathcal{Z}\). * **Payoffs:** the sum secrecy metric obtained by their associated nodes \(f_{m}(r_{m})=\Phi_{m}(r_{m})=\sum_{l\in\mathcal{L}_{m}}\phi_{l}\). We utilize a best response algorithm to solve the positioning problem with the payoff modified into the marginal gain of UAV \(m\) for choosing altitude \(r^{\prime}_{m}\): \[f_{m}(r^{\prime}_{m})=\Phi_{m}(r^{\prime}_{m})-\Phi_{m}(r_{m}), \tag{19}\] where \(r_{m}\) is the current position of UAV \(m\). Then this algorithm considers the simple action selection per UAV, i.e., \(r_{m}=\operatorname*{argmax}_{z_{m}\in\mathcal{Z}}f_{m}(z_{m})\), which is performed simultaneously and independently at each UAV. This algorithm is described in Algorithm 2. ``` 1 counter \(\leftarrow\) 0; 2 while counter \(<n\_iter\) do 3 conv\({}_{\text{flag}}\leftarrow\) 1 ; 4 for \(m\in\mathcal{M}\) do 5 \(\mathcal{A}_{m}\leftarrow\mathcal{Z}\); 6 \(f_{m}(r)\leftarrow\) compute as in (19) \(\forall r\in\mathcal{A}_{m}\); 7 \(\mathcal{A}_{m}\leftarrow\mathcal{A}_{m}\setminus\{r\in\mathcal{A}_{m}\;s.t.\;f_{m}(r)\leq 0\}\) ; 8 if \(\mathcal{A}_{m}\neq\emptyset\) then 9 \(f_{m,\max}\leftarrow\max_{r\in\mathcal{A}_{m}}f_{m}(r)\) ; 10 \(\mathcal{A}_{m,\max}\leftarrow\{r\in\mathcal{A}_{m}\;s.t.\;f_{m}(r)=f_{m,\max}\}\); 11 \(z_{m}\leftarrow\) choose from \(r\in\mathcal{A}_{m,\max}\) randomly; 12 conv\({}_{\text{flag}}\leftarrow\) 0 ; 13 Make UAV \(m\) assume altitude \(z_{m}\); 14 end if 15 end for 16 if conv\({}_{\text{flag}}==1\) then 17 Stop the positioning process ; 18 end if 19 counter \(\leftarrow\) counter + 1; 20 end while ``` **Algorithm 2** Best response for UAV altitude positioning algorithm The information required for Algorithm 2 is local to each UAV, disregarding the strategy taken by other UAVs or their exact positions. This algorithm is fast compared to exhaustive search, and it usually converges within two or three iterations.
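A minimal Python sketch of the two-stage positioning described above is given next: a plain Lloyd's-iteration k-means places the \(M\) UAV projections at the barycenters of the legitimate nodes, and each UAV then performs an independent discrete search over the altitude grid, mimicking the best response of Algorithm 2. The payoff function used in the altitude step is a stand-in for the sum secrecy metric \(\Phi_{m}\) of Eq. (19); all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans_2d(points, M, n_iter=50):
    """Plain k-means (Lloyd's iterations) over 2D node positions."""
    centroids = points[rng.choice(len(points), M, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(np.linalg.norm(points[:, None] - centroids[None], axis=2), axis=1)
        for m in range(M):
            if np.any(labels == m):                     # guard against empty clusters
                centroids[m] = points[labels == m].mean(axis=0)
    return centroids, labels

def best_altitude(payoff_fn, levels):
    """Discrete best response of one UAV over the altitude grid Z."""
    values = np.array([payoff_fn(z) for z in levels])
    return levels[int(np.argmax(values))]

if __name__ == "__main__":
    nodes = rng.uniform(0, 1000, size=(80, 2))          # legitimate node positions (m)
    centroids, labels = kmeans_2d(nodes, M=4)
    z_grid = np.linspace(50, 300, 11)                    # discretized altitude range
    # Placeholder payoff: a smooth single-peaked function standing in for Phi_m(z).
    altitudes = [best_altitude(lambda z: -(z - 120.0) ** 2, z_grid) for _ in range(4)]
    print(centroids, altitudes)
```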
### _Secure Power Allocation_ In the third and final stage, each UAV allocates its available power to the nodes associated with it. To this end, the following optimization problem is addressed \[\textbf{P3}: \max_{\textbf{P}} \sum_{l\in\mathcal{L}}\log_{2}\left(\frac{1+\gamma_{l}}{1+\gamma_{e*}}\right)\] (20a) s.t. (10f). In **P3**, the objective (20a) is non-convex on **P**, so this problem cannot be directly solved. Moreover, the condition for secrecy for a user is given by \[\frac{g_{m,l}}{I^{c}_{m,l}+1}>\frac{g_{m,e*}}{I^{c}_{m,e*}+1}, \tag{21}\] which cannot be guaranteed for all nodes. In that case, the power optimization formulation as expressed in **P3** would allocate all the power budget only to the nodes that can achieve secrecy, leaving those that cannot without power, which is not desirable. Alternatively, a modification of the original problem is considered in order to guarantee a minimum SINR requirement to every node in the system. To that purpose, the set \(\mathcal{L}^{S}_{m}\) is introduced as the set of nodes associated to UAV \(m\) that can be guaranteed secrecy, that is to say, for which (21) holds. Afterwards, the proposed optimization problem is a max-min secrecy rate problem for the nodes in \(\mathcal{L}^{S}_{m}\), performed locally at each UAV, expressed as \[\max_{\textbf{p}_{m}}\min_{l\in\mathcal{L}^{S}_{m}} \log_{2}\left(\frac{1+\gamma_{l}}{1+\gamma_{e*}}\right)\] (22a) s.t. \[\gamma_{l}>\gamma_{0} \forall l\in\mathcal{L}_{m} \tag{22b}\] \[\sum_{c\in\mathcal{C}}p^{c}_{m}\leq P. \tag{22c}\] Fig. 2: K-means algorithm flowchart. An equivalent optimization problem can be formulated as \[\textbf{P3'}: \max_{\textbf{p}_{m}} R_{S}\] (23a) s.t. \[\gamma_{l}>\gamma_{0} \forall l\in\mathcal{L}_{m} \tag{23b}\] \[\log_{2}\left(\frac{1+\gamma_{l}}{1+\gamma_{e*}}\right)>R_{S} \forall l\in\mathcal{L}_{m}^{S}\] (23c) \[\sum_{c\in\mathcal{C}}p_{m}^{c}\leq P. \tag{23d}\] In this formulation, the interference perceived at each node is assumed constant over the optimization process, and an iterative optimization scheme can be applied. Thus, the interference at its associated nodes is computed at each UAV, and problem **P3'** is solved in parallel at all UAVs. Then the updated interference terms are computed, and the process is repeated until convergence or for a fixed number of iterations. Since **P3'** is now convex, it can be split into two subproblems, **P3'a** and **P3'b**, as \[\textbf{P3'a}: \min_{\textbf{p}_{m}^{(a)}} P_{NS}\] (24a) s.t. \[\gamma_{l}>\gamma_{0} \forall l\in\mathcal{L}_{m}. \tag{24b}\] \[\textbf{P3'b}: \max_{\textbf{p}_{m}^{(b)}} R_{S}\] (25a) s.t. \[\log_{2}\left(\frac{1+\gamma_{l}}{1+\gamma_{e*}}\right)>R_{S} \forall l\in\mathcal{L}_{m}^{S} \tag{25b}\] \[\sum_{c\in\mathcal{C}}p_{m}^{c,(b)}\leq P_{S}, \tag{25c}\] where \(\textbf{p}_{m}^{(a)}\) is the power profile for the minimum SINR requirement, and \(\textbf{p}_{m}^{(b)}\) is the power profile for the max-min secrecy rate optimization, such that \(\textbf{p}_{m}=\textbf{p}_{m}^{(a)}+\textbf{p}_{m}^{(b)}\), \(P_{NS}\) is the power used to meet the minimum SINR requirement, and \(P_{S}=\left[P-P_{NS}\right]^{+}\) is the power available for max-min secrecy rate optimization. First, problem **P3'a** is solved for the power profile \(\textbf{p}_{m}^{(a)}\) and the power \(P_{NS}\) is found, which is the power required to guarantee the minimum SINR \(\gamma_{0}\) for all associated nodes. If \(P_{NS}\geq P\), there is not enough power to meet the SINR constraint; in that case, the overall local power profile is taken as \(\textbf{p}_{m}=\textbf{p}_{m}^{(a)}(P/P_{NS})\), and the local power allocation process ends.
If \(P_{NS}<P\), the available power for the max-min secrecy rate problem is taken as \(P_{S}=P-P_{NS}\), problem **P3'b** is solved to obtain the power profile \(\textbf{p}_{m}^{(b)}\), and the overall local power profile is given as \(\textbf{p}_{m}=\textbf{p}_{m}^{(a)}+\textbf{p}_{m}^{(b)}\). The closed-form solution for problem **P3'a** is given as \[p_{m}^{c,(a)}=\gamma_{0}\left(\frac{I_{m,l}^{c}+1}{g_{m,l}}\right)\qquad\forall l\in\mathcal{L}_{m}. \tag{26}\] Problem **P3'b** can be solved by bisection over the following minimum power optimization problem \[\textbf{P3'b'}: \min_{\textbf{p}_{m}^{(b)}} P_{S}\] (27a) s.t. \[\frac{1+\gamma_{l}}{1+\gamma_{e*}}>\gamma_{S} \forall l\in\mathcal{L}_{m}^{S}, \tag{27b}\] where \(\gamma_{S}=2^{R_{S}}\). This problem has the following closed-form solution \[p_{m}^{c,(b)}=\left[\frac{\gamma_{S}-1}{\frac{g_{m,l}}{I_{m,l}^{c}+1}-\gamma_{S}\left(\frac{g_{m,e*}}{I_{m,e*}^{c}+1}\right)}\right]^{+}\qquad\forall l\in\mathcal{L}_{m}^{S}. \tag{28}\] Considering that this problem is solved for nodes that can achieve secrecy, and requiring that \(p_{m}^{c,(b)}\) is non-zero, the bounds for \(\gamma_{S}\) are \[1<\gamma_{S}<\min_{l\in\mathcal{L}_{m}^{S}}\left\{\frac{\frac{g_{m,l}}{I_{m,l}^{c}+1}}{\frac{g_{m,e*}}{I_{m,e*}^{c}+1}}\right\}. \tag{29}\] All in all, to solve problem **P3'b**, bisection is performed on problem **P3'b'** with the closed-form solution (28), over \(\gamma_{S}\), whose initial minimum and maximum values are given by the bounds in (29). The power allocation algorithm is described in Algorithm 3. ``` 1 while counter < n\_prev\_pow do 2 for \(m\in\mathcal{M}\) do 3 for \(l\in\mathcal{L}_{m}\) do 4 \(I_{m,l}\leftarrow\) compute as in (13); 5 \(I_{m,e}\leftarrow\) compute as in (13); 6 end for 7 end for 8 for \(m\in\mathcal{M}\) do 9 \(\mathcal{L}_{m}^{S}\leftarrow\{\}\); 10 for \(l\in\mathcal{L}_{m}\) do 11 if (21) holds then \(\mathcal{L}_{m}^{S}\leftarrow\mathcal{L}_{m}^{S}\cup\{l\}\); 12 end for 13 for \(l\in\mathcal{L}_{m}\) do 14 \(p_{m}^{c,(a)}\leftarrow\) compute as in (26); 15 end for 16 \(P_{NS}\leftarrow\sum_{l\in\mathcal{L}_{m}}p_{m}^{c,(a)}\); 17 if \(P_{NS}\geq P\) OR \(\mathcal{L}_{m}^{S}\) is empty then 18 for \(l\in\mathcal{L}_{m}\) do 19 \(p_{m}^{c}\gets p_{m}^{c,(a)}(P/P_{NS})\) ; 20 end for 21 else 22 solve **P3'b** by bisection over \(\gamma_{S}\) using (28)-(29) and set \(p_{m}^{c}\gets p_{m}^{c,(a)}+p_{m}^{c,(b)}\); 23 end if 24 end for 25 counter \(\leftarrow\) counter + 1; 26 end while ``` **Algorithm 3** Secure power allocation algorithm ## IV Numerical Results In this section, the performance of the proposed framework is evaluated through numerical simulations. For this purpose, unless otherwise stated, the adopted simulation parameters are presented in Table I. Therein, \(\gamma_{P}=P/N_{0}\) is the total transmit SNR of each UAV, and \(N_{it}\) is the number of association-positioning iterations for a given realization of the system. The number of UAVs \(M\) to be deployed is chosen such that \((M-1)C<L\leq MC\). Unless otherwise stated, for each realization the following steps are taken: 1. The \(N\) nodes are distributed over the region following a binomial point process. 2. Legitimate nodes are selected following a Bernoulli distribution of parameter \(q\). 3. The association and positioning processes are performed successively for \(N_{it}\) iterations. ### _Association and Positioning Benchmarks_ Three association and positioning benchmarks are presented for the sake of comparison: 1.
**Best Response Association:** Framework with a best response algorithm for the association phase. Similar to Algorithm 1, but on line 11, \(\Pr\left[X_{l}=r\right]=1\) for \(r=\operatorname*{argmax}_{r_{l}\in\mathcal{A}_{l}}f_{l}(r_{l})\) and zero for the rest of the available actions \(r_{l}\in\mathcal{A}_{l}\setminus\{r\}\). 2. **Greedy Association:** Framework with the greedy association algorithm from [29]. This approach iteratively associates the best node-UAV pair through the system in terms of the secrecy rate, until all nodes are associated. 3. **Adapted Greedy:** Framework with the adapted greedy algorithm for association and positioning from [29]. This approach positions each UAV one by one, and associates to it the nodes that present the best secrecy rate, until all UAVs are positioned, and all nodes associated. Fig. 3 shows the sum secrecy rate of the system versus \(\gamma_{0}\) for the proposed secure power allocation scheme, and results are compared to the benchmarks described above. It can be seen that, for smaller \(\gamma_{0}\) values, where more power is allocated for the max-min secrecy rate subproblem, the proposed framework and the one with best response association perform better than the greedy benchmarks. On the other hand, for larger \(\gamma_{0}\) values, where the power allocation tends to a max-min SINR power allocation, the proposed solution performs better than the one with best response association, as well as the adapted greedy benchmark, but worse than the greedy association benchmark. Fig. 3: Average sum secrecy rate vs. minimum SINR constraint \(\gamma_{0}\) obtained by different frameworks. Fig. 4 shows the percentage of legitimate nodes that achieve positive secrecy rate versus the number of nodes in the system \(N\), with \(\gamma_{0}=-10\)dB and \(M\) chosen such that \((M-1)C<L\leq MC\). Note that there is an initial drop in the percentage of users with positive secrecy for small \(N\) values due to the added interference of an increasing number of UAVs. However, after a certain value of \(N\), the percentage of users with positive secrecy in the system remains steady, where the proposed framework performs better than the adapted greedy and best response association benchmarks, but worse than the greedy association benchmark. Fig. 4: Average percentage of legitimate nodes with positive secrecy rate vs. number of IoT nodes in the system \(N\), obtained by different frameworks. While the greedy association benchmark outperforms the proposed framework, the greedy association is more complex and presents slow convergence. The best response association benchmark exhibits a complexity similar to that of the proposed association solution, while the greedy association, the adapted greedy association, and the positioning benchmarks have an increased complexity in their execution, require more coordination, and take longer to converge. Let \(T_{\mathrm{ass}}\) and \(T_{\mathrm{pos}}\) be the times for a round of association iterations and of positioning iterations, respectively, and let \(T_{\mathrm{pow}}\) be the total time of the power allocation. It can be observed that, with \(N=80\), for the proposed framework and the framework with best response association, the node association finds a Nash equilibrium in less than 10 iterations, the UAV positioning finds a Nash equilibrium in less than 3 iterations, and the overall framework converges in less than 5 iterations. The overall convergence time of the frameworks is presented in Table II, for \(N=80\).
Therefore, the proposed framework presents much smaller convergence times than the greedy algorithms presented in [29], while approaching the greedy association benchmark results. ### _Power Allocation Benchmarks_ To compare the proposed secure power allocation strategy, the following power allocation benchmarks are considered: 1. **Max. Min SINR:** An iterative local max-min SINR power allocation per UAV. It solves the following optimization problem \[\max_{\mathbf{p}_{m}}\min_{l\in\mathcal{L}_{m}} \gamma_{l}\] (30a) s.t. \[\gamma_{l}>\gamma_{0} \forall l\in\mathcal{L}_{m}\] (30b) \[\sum_{c\in\mathcal{C}}p_{m}^{c}\leq P.\] (30c) This power allocation scheme aims to guarantee the same SINR to all the nodes served by a given UAV. 2. **Max. Sum Rate:** An iterative local sum-rate maximization power allocation per UAV. It solves the following optimization problem \[\max_{\mathbf{p}_{m}} \sum_{l\in\mathcal{L}_{m}}\log_{2}\left(1+\gamma_{l}\right)\] (31a) s.t. \[\sum_{c\in\mathcal{C}}p_{m}^{c}\leq P.\] (31b) This power allocation scheme seeks to maximize the sum rate across all of the nodes served by a UAV. By doing so, it may cause some nodes to have no power allocated to them. The proposed power allocation strategy as well as the power allocation benchmarks are performed with the secure association and positioning phases proposed. Fig. 5 shows the sum secrecy rate of the system versus \(\gamma_{T}\) for the proposed secure power allocation scheme compared to the benchmarks with \(\gamma_{0}=-10\)dB. It can be seen that for smaller transmit SNR values, the proposed power allocation scheme matches the max-min benchmark. This behavior occurs because, at these ranges of \(\gamma_{T}\), there is not enough transmit SNR to satisfy the minimum SINR requirement, so no power is allocated for \(P_{S}\). At higher \(\gamma_{T}\) values, the proposed scheme outperforms the max-min benchmark, as power is allocated for secrecy improvement after fulfilling the minimum SINR requirements for all nodes. On the other hand, the max. sum rate benchmark outperforms the proposed secure power allocation scheme in terms of sum secrecy rate. However, the max. sum rate scheme allocates all the power of a given UAV to the nodes with the strongest channel to it. This causes the nodes with weaker channels to their serving UAV to receive no power from it, effectively disconnecting a large number of nodes from the network, as can be seen in the next figure. Fig. 5: Average sum secrecy rate vs. transmit SNR available to UAVs, obtained by the different power allocation schemes. Fig. 6 shows the percentage of legitimate nodes that are able to achieve positive secrecy rate versus \(\gamma_{T}\), for the proposed secure power allocation scheme compared to the benchmarks and \(\gamma_{0}=-10\)dB. Note that the proposed secure power allocation scheme presents a similar behavior compared to the max-min SINR benchmark as in the previous figure. However, it can be seen that the max-sum-rate benchmark presents a significantly smaller number of users that can achieve secrecy in the system due to all the power being allocated only to the users with the strongest channels. Even for high \(\gamma_{T}\) values, the performance of the max-sum-rate benchmark is still worse than the proposed secure power allocation in terms of users that achieve positive secrecy rates in the system. Fig. 6: Average percentage of legitimate nodes with positive secrecy rate vs. transmit SNR available to UAVs, obtained by the different power allocation schemes. ## V Conclusions In this work, an IoT scenario was investigated, where a swarm of UAVs, acting as ABSs, provide coverage to a group
of ground nodes, while considering all nodes that do not participate in the communication process as eavesdroppers. In this scenario, the maximization of the sum secrecy rate of the system is addressed by proposing a BCA secure framework consisting of the association of the ground nodes, the 3D positioning of the UAVs, and the power allocation for the associated nodes. Different approaches based on game theory and optimization techniques were employed. Extensive simulations were performed, in which the proposed framework achieved enhanced secrecy performance while maintaining low complexity compared to greedy association and positioning benchmarks.
2309.12478
Emergence of fractal cosmic space from fractional quantum gravity
Based on Padmanabhan's theory, the spatial expansion of the Universe can be explained by the emergence of space as cosmic time progresses. To further explore this idea, we have developed fractional-fractal Friedmann and Raychaudhuri equations for an isotropic and homogeneous universe. Our analysis has also delved into how Padmanabhan's concept fits into the framework of fractional quantum gravity. Our research shows that a fractal horizon model strongly supports the validity of the emerging Universe paradigm and its connection to horizon thermodynamics. This study indicates early how the emergent gravity perspective might manifest in quantum gravity. By utilizing the fractional-fractal Friedmann and Raychaudhuri equations, we have established that the mainstream cosmology model can be justified without a dark matter component. As a result, the standard $\Lambda$CDM model has been reduced to $\Lambda$-Cold Baryonic Matter, which has significant implications for our understanding of the Universe.
P. F. da Silva Junior, E. W. de Oliveira Costa, S. Jalalzadeh
2023-09-21T20:49:56Z
http://arxiv.org/abs/2309.12478v1
# Emergence of fractal cosmic space from fractional quantum gravity ###### Abstract Based on Padmanabhan's theory, the spatial expansion of the Universe can be explained by the emergence of space as cosmic time progresses. To further explore this idea, we have developed fractional-fractal Friedmann and Raychaudhuri equations for an isotropic and homogeneous universe. Our analysis has also delved into how Padmanabhan's concept fits into the framework of fractional quantum gravity. Our research shows that a fractal horizon model strongly supports the validity of the emerging Universe paradigm and its connection to horizon thermodynamics. This study indicates early how the emergent gravity perspective might manifest in quantum gravity. By utilizing the fractional-fractal Friedmann and Raychaudhuri equations, we have established that the mainstream cosmology model can be justified without a dark matter component. As a result, the standard \(\Lambda\)CDM model has been reduced to \(\Lambda\)-Cold Baryonic Matter, which has significant implications for our understanding of the Universe. ## 1 Introduction Quantum gravity constitutes a vast research area that envelops frameworks that consistently integrate gravitational force and quantum physics. On the one hand, specific approaches quantize gravity as a fundamental force or incorporate it in a unified theory of elementary interactions. In this scenario, new physics will likely surface at short ultraviolet (UV) scales on the order of the Planck length. On the other hand, quantum gravity may constitute an effective field theory with infrared (IR) corrections visible at large scales. In this scenario, the emphasis does not necessarily lie in incorporating these models in a complete fundamental theory; rather, it lies in phenomenology and IR deviations from general relativity (GR) that may be visible in near-future cosmological observations [1]. This alternate technique proves particularly effective in resolving the issues of quantum gravity, a subject of ongoing research in theoretical physics. Therefore, to gain a complete understanding of quantum gravity, it's imperative to take into account both the UV and IR regimes. In order to address the shortcomings of quantum mechanics and GR, researchers have proposed various approaches, including string theory, loop quantum gravity, and asymptotic safety. These theories aim to provide a consistent framework for understanding the universe's behavior at both the most minor and largest scales. The ultimate objective of quantum gravity research is to merge the fundamental forces of nature and establish a quantum theory of gravity that can explain all observed phenomena. Quantum field theory is commonly regarded as non-local due to non-polynomial momentum functions in loop corrections to the bare propagator. The modifications to the quantum effective action are typically non-local in nature, which can leave an imprint in the IR region, particularly in the case of gravity. This IR non-locality defines a specific regime of a theory that is essentially local. On the other hand, UV non-locality pertains to fundamentally non-local theories at the classical level. In contrast to IR non-locality, UV non-locality is not a result of quantum corrections but rather an intrinsic property of the theory itself. Therefore, it is important to distinguish between these two types of non-locality in order to understand the behavior of quantum field theories better. 
In GR, fundamental non-locality is utilized to overcome the classical singularities, improve its renormalizability, and maintain unitarity (the absence of ghosts and probability conservation). String theory [2], non-local quantum gravity with exponential or asymptotically polynomial form factors [3, 4, 5], fractional quantum gravity [6, 7, 8, 9, 10], and multi-fractional field theories with fractional operators [11, 12, 13] are examples of UV non-local theories, whereas IR non-locality is realized by models with inverse powers of the d'Alembertian [1, 14]. Numerous applications of fractional calculus in the fields of gravity and cosmology have been identified, and current research in this area is actively ongoing. Fractional calculus has proven to be a valuable tool in addressing a range of issues related to gravitational forces and cosmological models [15, 16, 17, 18, 19, 20, 11, 21, 12, 23, 24, 25, 26]. In addition, the stochastic gravitational-wave background in quantum gravity [27], gravitational-wave luminosity distance [28], inflation and cosmic microwave background (CMB) spectrum [9, 29, 30], fractional action cosmology [31, 32, 33], fractional geodesic equation, discrete gravity [34], non-minimal coupling and chaotic inflation [35], phantom cosmology with conformal coupling [36], Ornstein-Uhlenbeck like fractional differential equation in cosmology [37], fractional action cosmology with a variable order parameter [38], wormholes in fractional action cosmology [39] are other examples in the application of fractional calculus in GR and cosmology. New metrics were considered [40], as well as some dark energy models in emergent, logamediate, and intermediate scenarios of the Universe [41, 42]. For instance, [20, 21] found \(\alpha=0.926\) (where \(\alpha\) is the order of the Riemann-Liouville fractional integral). In [16, 17, 18] were obtained several exact solutions for cosmological models, which, since space-time is fractal, deviates sharply from the standard model [43, 44]. The Ref. [10] explore the interval \(1\leq\alpha<2\) using Riesz's fractional derivative to obtain the non-boundary and tunneling wave functions for a closed de Sitter geometry. In Ref. [9], the pre-inflation epoch is studied in the context of fractional quantum cosmology. The thermodynamics of fractional BHs has been studied in Ref. [8]. Another example is the fractional calculus modification of the Friedmann and Raychaudhuri equations to study the dynamics of the Universe without the presence of cold dark matter (CDM) and dark energy [45]. A further approach involves utilizing fractional calculus to determine the value of the cosmological constant, which must be restructured owing to the well-known ultraviolet divergence in traditional quantum field theory [46, 47, 48]. In [49, 50] explore modified Newtonian dynamics (MOND) and quantum cosmology in this fractional approach [45]. Finally, notice that there are several definitions of fractional derivatives and fractional integrals, such as those of Riemann-Liouville, Caputo, Riesz, Hadamard, Marchand, and Grinwald-Letnikov, among other more recent ones (see [51], and [52] and references therein). Even though these operators are already well studied, some of the usual features related to function differentiation fail, such as Leibniz's rule, the chain rule, and the semi-group property [52, 51]. According to a recent study conducted by the authors of Ref. 
[8], the surface area of a Schwarzschild black hole (BH) can be considered a random fractal due to a modification made to the horizon area using the fractional Wheeler-Dewitt equation. This new development changes how the entropy of BHs is calculated, as it no longer follows the traditional Bekenstein-Hawking area law and may increase due to the quantum gravitational effects. The formula used to calculate the entropy of the BH is now expressed as \[S_{\text{fractal-BH}}=S_{\text{BH}}^{\frac{d}{2}}\] where \(S_{\text{BH}}=4\pi GM^{2}\) represents the Bekenstein-Hawking entropy of a Schwarzschild BH and \(d\) is the fractal dimension of the BH surface. The aforementioned expression for fractionally deformed entropy may bear some resemblance to Tsallis [53], and Barrow [54] entropies, yet it is important to note that its correction, motivation, and physical principles are altogether separate and unique. The main objective of this study is to evaluate the efficiency of Padmanabhan's emerging cosmic space model [55] using fractional entropy developed within the framework of fractional quantum cosmology (FQC). FQC has been introduced in different references, including [9, 10, 8, 7, 56], and provides a unique framework for examining the structure of the Universe. By employing this strategy, we seek to acquire insight into the Universe's matter composition and improve our grasp of its fundamental features. The birth of Padmanabhan's emergent cosmic space paradigm lies in the connection between thermodynamics and gravity. In the context of BHs, the term "surface gravity" refers to the temperature where the temperature points are on the BH's horizon. However, the BH's entropy is also related to this horizon's region. With that, a solid relationship between thermodynamics and gravity was not discovered until Jacobson's investigations. Jacobson's work began with the Clausius equation, \(dQ=TdS\), from which, using the equivalence principle, Einstein's field equations were constructed, demonstrating that they constitute the equation of state for space-time [57]. Padmanabhan postulated [55] that the variation of the cosmic volume, \(dV\), in an infinitesimal interval of cosmic time, \(dt\), is given by \[\frac{dV}{dt}=L_{\rm P}^{2}(N_{\rm sur}-N_{\rm bulk}),\] where \(N_{\rm sur}\) and \(N_{\rm bulk}\) are the surface and bulk degrees of freedoms, respectively. While the surface degree of freedom is proportional to the Universe's horizon surface, the bulk degree of freedom is proportional to the Komar energy and inversely to the horizon's temperature. By combining Padmanabhan's equation with the continuity equation for the matter field, which is often assumed to be a perfect fluid, we can obtain Friedmann and Raychaudhuri's equations. The model proposed by Padmanabhan has sparked a great deal of interest among scholars who have conducted further investigations into its potential generalizations. For instance, in the study by Cai in 2012 [58], the original model was extended by modifying the volume increase and the number of degrees of freedom on the holographic surface from the entropy formulas of black holes in Gauss-Bonnet gravity and more general Lovelock gravity. Similarly, a paper by Tu and colleagues in 2013 [59] discovered a general relationship between the horizon entropy and the number of degrees of freedom on the surface, which can be applied to quantum gravity. 
The corresponding dynamic equations were obtained using the idea of the emergence of spaces in the \(f(R)\) theory and deformed Horava-Lifshitz theory. In their 2013 publication [60], Hashemi and colleagues demonstrated that the apparent horizon is the only horizon to which all thermodynamic laws apply. They also provided a set of cosmology equations for information and thermodynamical parameters in their 2015 paper [61], including a generalized form for the Bekenstein-Hawking entropy for both the holographic principle and the asymptotic holographic principle. In 2013, Yuan [62] investigated the cases where logarithmic and power-law entropic corrections are present, respectively. Additionally, Ali [63] derived a modified Friedmann equation in Gauss-Bonnet gravity by considering a generic form of entropy as a function of the area in their 2013 study. Moradpour's 2016 paper used Tsallis entropy to bridge Verlinde's and Padmanabhan's proposals [64]. Padmanabhan's proposal was also exploited to study the Rastall theory by extending the Komar energy and the general entropy of the apparent horizon in Yuan's 2016 publication [65]. In 2016, Komatsu and colleagues applied a modified Renyi entropy to Padmanabhan's holographic equipartition law by regarding the Bekenstein-Hawking entropy as a non-extensive Tsallis entropy and using a logarithmic formula of the original Renyi entropy [66]. Finally, in studies by Sheykhi and Chen in 2018 and 2022 [67, 68], respectively, Padmanabhan's emergence scenario of the modified Friedmann equations was employed utilizing Tsallis entropy. Barrow entropy was also employed in studies by Sheykhi and Luciano in 2021, and 2023 [69, 70], respectively, to extend Padmanabhan's proposals. A common and effective method utilized in all these articles is applying a generalized definition of entropy, originating from BH entropy, to redefine the effective area of the apparent horizon, the number of surface degrees of freedom, and the increase in the effective volume of the Universe. This is followed by applying Padmanabhan's emergence of gravity and the first law of thermodynamics to arrive at the modified Friedmann equation. In this article, we take a comparable approach to the references mentioned earlier. Our objective is to broaden the fractional entropy of the black hole and encompass the apparent horizon of a homogeneous and isotropic universe. To achieve this goal, we must redefine an effective form for all the constituents that are associated with Padmanabhan's emergence equation and the first law of thermodynamics. Eventually, this procedure leads us to the fractional-fractal 1 Friedmann and Raychaudhuri equations. Footnote 1: It’s worth noting that a fractal can be described as an object or process with a fractal dimension. When fractional calculus is applied to such objects, it can alter their fractal dimension [71]. This article explores this relationship and how the fractal structure emerges through the process of applying fractional calculus. To emphasize both qualities, we use the term ”fractal-fractional.” Our present comprehension of the Universe is founded on the postulate that General Relativity (GR) and the standard model of particle physics accurately portray its underlying physics. It is believed that the large-scale geometry of the Universe is flat, while its material composition is intricate, comprising of baryonic matter, CDM, and dark energy. 
This standard cosmological model has been substantiated by measurements of temperature and polarization fluctuations in the CMB from both space and ground-based observatories. Remarkably, a model requiring just six independent variables provides an exhaustive fit to all statistical characteristics of recent CMB measurements and corroborates the distribution of galaxies, the Hubble constant, and supernova distance measurements. This triumph comes with a price: dark energy constitutes the majority of the Universe's energy density, while CDM accounts for less than 5% of its ordinary matter content but governs the galaxy mass. Evidence from rotation curves, gravitational lensing, and hot gas in clusters suggests that 95% of the mass of galaxies and clusters is formed of CDM. The gravitational clustering of CDM serves as the cornerstone for the current paradigm of structure formation, as baryonic matter alone is insufficient to produce structures compatible with galaxy clustering. Meanwhile, as indicated above, all astronomical arguments for dark matter presuppose that GR holds true on galaxy scales. Alternative theories of gravity, including MOND [72, 73, 74, 75], eliminate the necessity for CDM by altering the nature of gravity. The notion of using anomalous fractional dimensions to account for galaxy data attributed to CDM has recently been revived, implicitly in Newtonian dynamics with a fractional Laplacian [49, 76] and explicitly in Newtonian fractional-dimension gravity [77, 78, 79, 80]. One intriguing concept to consider is expanding the fractal-fractional gravity models explained in the above paragraph to the cosmological level. Could these adjustments eliminate the necessity for CDM at this level if fractional-fractal dynamics can account for the flattening of galaxy rotation curves? We embrace Padmanabhan's paradigm and deduce the fractional-fractal Friedmann and Raychaudhuri equations to delve into this inquiry. By utilizing these equations, we recognize novel density parameters for dark energy and cold matter (or dust) that are altered by the fractal dimension of the cosmic event horizon. Our study demonstrates that the density parameter for fractal cold matter can substitute for the need for a CDM component. This parameter is created solely by baryonic matter in a fractal distribution of the Universe. The following is an outline of this article. The essential ingredient of Padmanabhan's paradigm is the cosmic horizon. Therefore, the following section briefly reviews the fractional-fractal quantum BH obtained in Ref. [8]. Thereafter, in section 3, we review some basics of fractals. This section is included to help readers quickly pursue our definitions in the next section. Utilizing the definitions of section 3, we demonstrate how the Schwarzschild BH's event horizon is a fractal structure in section 4. We show that the BH's horizon is a fractal surface with dimension \(2\leq d<3\). The effective Schwarzschild radius, the BH's mass, area, temperature, and entropy are obtained in this section. After briefly reviewing emergent cosmology in section 5, we apply the fractal-fractional quantities developed in section 4 to the cosmic space emergence scenario to get the modified Raychaudhuri and Friedmann equations in section 6. After reviewing the basics of the standard \(\Lambda\)CDM cosmology in section 7, we analyze its fractal-fractional extension in section 8 and demonstrate that the \(\Lambda\)CDM cosmology reduces to \(\Lambda\)-Cold Baryonic Matter. In the final section, we draw our conclusions.
We will refer to natural units in this paper as \(\hbar=c=k_{B}=1\), for convenience. ## 2 fractional quantum gravity of Schwarzschild black hole In this section, we will provide a brief summary of the findings presented in [8]. This information will be used in upcoming sections. To achieve the desired fractional mass spectrum of a Schwarzschild BH, let us first briefly explain the fractional Wheeler-DeWitt (WDW) in a \(v\)-dimensional minisuperspace with coordinates \(q^{\alpha}\), (\(\alpha=0,...,v\)). In this case, the minisuperspace WDW equation is \[\left\{\frac{1}{2}\Box+U(q^{\sigma})\right\}\Psi(q^{\eta})=0. \tag{1}\] In this equation, \(\Box=\frac{1}{\sqrt{-\sigma}}\partial_{\alpha}(\sqrt{-f}f^{\alpha\beta} \partial_{\beta})\) is the d'Alembertian operator in minisuperspace, \(U(q^{\nu})\) is potential, and \(f_{\alpha\beta}\) stands for the corresponding minisuperspace metric with signature \((-,+,+,...,+)\). Regarding obtaining the fractional counterpart of the WDW equation (1), the usual d'Alembertian operator should be replaced by the fractional Riesz-d'Alembertian operator [81, 82] \[(-\Box)^{\frac{\alpha}{2}}\,\Psi(q^{\alpha})=\mathcal{F}^{-1}\Big{(}|\mathbf{p}| ^{\alpha}\mathcal{F}\Psi(\mathbf{p})\Big{)}, \tag{2}\] where \(|\mathbf{p}|=\sqrt{-p_{0}^{2}+p_{i}p^{i}}\), \(i=1,2,...,v\), and \(\mathcal{F}\) denotes Fourier transformation. Hence, the fractional counterpart of the WDW equation (1) will be [6, 56, 7, 8, 10] \[\left\{\frac{M_{\rm P}^{2-\alpha}}{2}(-\Box)^{\frac{\alpha}{2}}-U(q^{\nu}) \right\}\Psi(q^{\nu})=0, \tag{3}\] and the Levy's fractional parameter, \(\alpha\), is set to \(1<\alpha\leq 2\)[83]. Bekenstein postulated that a BH's entropy is proportional to its horizon area in the early 1970s of the previous century [84]. Subsequently, Hawking proposed in 1975 the evaporation of BHs [85], which recent appraisal that tested this property through observation produced encouraging findings [86]. After that, theoretical physicists discovered a close link between thermodynamic temperature, quantum physics, and geometrical horizons. It was also suggested that the BH horizon area, \(A\) might be quantized, and the associated eigenvalues would be [87] provided by \[A_{n}=\gamma\,L_{\rm P}^{2}\,n,\ \ \ \ n=1,2,3,...\, \tag{4}\] where \(\gamma\) is a dimensionless constant of order one and \(L_{\rm P}=\sqrt{G}\) is the Planck length. Since its inception, the realm of literature has been imbued with a plethora of contributions, all of which have served to reinforce the veracity of the area spectrum (4). These contributions, which are both multifaceted and diverse, encompass a broad range of considerations, including but not limited to information-theoretic considerations (as set forth in [88, 89]), string theory arguments [90], and the periodicity of time [91, 92, 93]. Furthermore, these contributions extend to a Hamiltonian quantization of a dust collapse [94, 95], among other things. It is well-known that the above spectrum, among the various methods, can be obtained by the canonical quantization of the Schwarzschild BH [96]. To review this process, let us start with the Hamiltonian of the Schwarzschild BH given by [96] \[H=\frac{M_{\rm P}^{2}}{2}\left(\frac{p^{2}}{M_{\rm P}^{4}\,x}+x\right)=M, \tag{5}\] where \(M\) is the BH's mass, \(M_{\rm P}=1/L_{\rm P}\) denotes the Planck mass, and \(p\) is conjugate momenta of \(x\) constrained to condition \(x\geq 0\). 
The Wheeler-DeWitt equation corresponding to the Schwarzschild BH [8] is given by \[-\frac{1}{2M_{\rm P}}\frac{d^{2}\psi(x)}{dx^{2}}+\frac{M_{\rm P}^{3}}{2}\left( x-\frac{M}{M_{\rm P}^{2}}\right)^{2}\psi(x)=\frac{M^{2}}{2M_{\rm P}}\psi(x), \tag{6}\] which provides us with the following mass spectrum in the semi-classical limit \[M_{n}=M_{\rm P}\sqrt{2n},\ \ \ \ n=\{\mbox{large positive integer}\}. \tag{7}\] Assume that a huge BH emits Hawking radiation when it spontaneously transitions from state \(n+1\) to the nearest lower state level, i.e., \(n\). Suppose the frequency of the emitted thermal radiation is \(\omega_{0}\). Then \[\omega_{0}=M_{n+1}-M_{n}=\frac{M_{\rm P}}{\sqrt{2n}}=\frac{M_{\rm P}^{2}}{M}. \tag{8}\] The BH entropy can be expressed in terms of the following adiabatic invariant \[S_{\rm BH}=8\pi\int_{M_{\rm P}}^{M}\frac{dM}{\omega_{0}}=\frac{A}{4G}=4\pi M^ {2}G, \tag{9}\] where \(A=4\pi R_{\rm S}^{2}\) is the BH horizon area, and \(R_{\rm S}=2MG\) is the Schwarzschild radius. In quantum mechanics, the adiabatic invariance principle related to the Hamiltonian with a discrete spectrum can be explained with the following reasoning. During an adiabatic transformation, any external perturbations applied to the system should have much lower frequencies than the characteristic frequency \(\omega_{0}\). To understand how the adiabatic invariance concept is applied in BH spectroscopy, kindly refer to Refs. [97, 98, 99]. This results in a significant decrease in transitions between the states with successive quantum numbers. Because this spectrum is equally spaced, the feasible values for the area of a massive BH are equally spaced as well. One can obtain the fractional extension of the Wheeler-DeWitt equation (6) by replacing the ordinary derivative by the Riesz fractional derivative (2) [8] \[-\frac{d^{2}}{dz^{2}}\rightarrow\frac{1}{M_{\rm P}^{\alpha-2}}\left(-\frac{d^{ 2}}{dz^{2}}\right)^{\frac{\alpha}{2}}, \tag{10}\] where a new minisuperspace coordinate, \(z\), is defined by \(z=x-M/M_{\rm P}^{2}\). To obtain the semi-classical solution of this fractional Wheeler-DeWitt equation, we use the alternative equivalent representation of the Riesz fractional derivative (or fractional Laplacian), given by \[-\left(-\frac{d^{2}}{dz^{2}}\right)^{\frac{\alpha}{2}}\psi(z)=c_{1,\alpha}\int _{0}^{\infty}\frac{\psi(z-v)-2\psi(z)+\psi(z+v)}{v^{\alpha+1}}dv, \tag{11}\] where \(c_{1,\alpha}=\frac{\alpha 2^{\alpha-1}}{\sqrt{\pi}}\frac{\Gamma((1+\alpha)/2)}{ \Gamma((2-\alpha)/2)}\)[100]. It is a relatively uncomplicated task to show that the aforementioned fractional operator is indeed non-local in nature. In order to achieve this objective, we shall initiate by scrutinizing the definition of a particle that is localized in Newtonian mechanics. As per the foundational principles of Newtonian mechanics, the magnitude of the alteration in the position of a singular point mass, \(x(t)\), with respect to the variable of time, \(t\), is commensurate with its velocity, which is epitomized by the first derivative. Following the works of Weierstrass, this first derivative can be defined as \[v(t)=\frac{dx(t)}{dt}=\begin{cases}\lim_{h\to 0}\frac{x(t)-x(t-h)}{h},&\text{ left, causal,}\\ \lim_{h\to 0}\frac{x(t+h)-x(t-h)}{2h},&\text{symmetric,}\\ \lim_{h\to 0}\frac{x(t+h)-x(t)}{h},&\text{right, anti- causal,}\end{cases} \tag{12}\] where \(h\geq 0\) and \(h<\epsilon\) for an arbitrary small positive real-valued \(\epsilon\).
As long as the function \(x(t)\) is analytic and smooth with respect to \(t\), the three definitions provided appear interchangeable. If \(t\) is considered a space-like coordinate, the definitions suggest that positional data is collected only from the left or right or concurrently from both directions. If \(t\) is regarded as a time-like coordinate, the initial definition in (12) adheres to the principle of causality, which broadly posits that the current state of a physical object can only be affected by past events. The final definition (12) conspicuously contravenes the principle of causality in the context of classical mechanics. Notwithstanding, one could embrace Stuckelberg's and Feynman's perspectives in quantum mechanics and recollect their notion that anti-particles travel in reverse time. Consequently, this definition proves fitting to delineate the velocity of an anti-particle. The second definition of a derivative in Eq. (12), which combines causal and anti-causal propagation, ultimately suggests that relying solely on individual particle interpretation may pose challenges. Hence, examining whether this definition could be used to depict a particle-antiparticle pair's velocity is crucial. Although this discussion may sound quite sophisticated and artificial when it comes to ordinary derivatives, it is important to note that regardless of the definition used, a violation of the causality principle is only of order \(\epsilon\). However, for non-local quantities, these considerations become crucial. As a next step, we shall proceed to redefine the conventional derivative in the context of an integral formulation: \[v(t)=\frac{d}{dt}\Big{|}_{\text{non local}}x(t)=\begin{cases}\frac{1}{c}\int _{0}^{\infty}W(\sigma,y)v(t-y)dy,&\text{left, causal,}\\ \frac{1}{c}\int_{0}^{\infty}W(\sigma,y)\frac{v(t+y)+v(t-y)}{2}dy,&\text{ symmetric,}\\ \frac{1}{c}\int_{0}^{\infty}W(\sigma,y)v(t+y)dy,&\text{ right, anti- causal,}\end{cases} \tag{13}\] where \(\sigma>0\) is a parameter and \[\lim_{\sigma\to 0}W(\sigma,y)=\delta(y). \tag{14}\] One can consider the relations as extensions from the local operator \(d/dt\) to a non-local version of the same operator. A potential extension of the above method for any local operator \(O_{\text{local}}(x)\) to its corresponding non-local representation \(O_{\text{nonlocal}}(x)\) is given by [101] \[O_{\text{nonlocal}}(x)f(x)=\frac{1}{C}\int_{0}^{\infty}\Omega(\sigma,y)U(y)O_ {\text{local}}(x)f(x)dy, \tag{15}\] where \(U(y)\) is a shift operator. Choosing \(W(\sigma,y)=y^{\sigma-1}\) (allowing for a weakly singular weight function), \(C=\Gamma(\sigma)\), and the shift operator by \(U(y)f(x)=f(x+y)-f(x-y)\) we obtain the Riesz fractional derivative (11) [101]. Therefore, the Riesz fractional derivative is a non-local operator as a result of antisymmetric shift operation. Non-locality is a phenomenon that can be observed in all fractional derivatives and integrals. Temporal processes with time-fractional derivatives are usually called memory. In contrast, spatial-fractional derivatives are associated with large quantum jumps [101]. Note that the external time coordinate disappears in the Wheeler-DeWitt equation, and the space-fractional derivative may play a crucial role. It is worth noting that the presence of Planck mass in the preceding formulation of the Riesz fractional derivative indicates the reality of its quantum gravity roots. 
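As a quick numerical check of the integral representation (11), the sketch below applies it to the plane wave \(\psi(z)=\cos(kz)\), for which the fractional Laplacian acts as multiplication by \(|k|^{\alpha}\), so the right-hand side of (11) should return \(-|k|^{\alpha}\cos(kz)\). This is an illustrative verification of the constant \(c_{1,\alpha}\), not part of the paper's derivation; the integration cutoff is an assumption chosen so that the truncated tail is negligible.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def riesz_of_cos(z, k, alpha):
    """Evaluate Eq. (11) for psi(z) = cos(k z) by direct numerical integration."""
    c = alpha * 2**(alpha - 1) / np.sqrt(np.pi) * gamma((1 + alpha) / 2) / gamma((2 - alpha) / 2)
    integrand = lambda v: (np.cos(k * (z - v)) - 2 * np.cos(k * z) + np.cos(k * (z + v))) / v**(1 + alpha)
    # Split the half line: an integrable ~v^(1-alpha) singularity near 0 and an
    # oscillatory tail that decays like v^(-1-alpha); beyond v=400 it is negligible.
    head, _ = quad(integrand, 0.0, 1.0, limit=200)
    tail, _ = quad(integrand, 1.0, 400.0, limit=2000)
    return c * (head + tail)

alpha, k, z = 1.5, 2.0, 0.3
numeric = riesz_of_cos(z, k, alpha)
exact = -abs(k)**alpha * np.cos(k * z)
print(f"integral representation: {numeric:.6f}, expected -|k|^alpha cos(kz): {exact:.6f}")
```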
This gives us the following fractional Wheeler-DeWitt equation \[\frac{1}{2M_{\rm P}^{\alpha-1}}\left(-\frac{d^{2}}{dz^{2}}\right)^{\frac{\alpha }{2}}\psi(z)+\frac{M_{\rm P}^{3}}{2}z^{2}\psi(z)=\frac{M^{2}}{2M_{\rm P}}\psi(z). \tag{16}\] Note that in the particular case \(\alpha=2\), equation (16) reduces to its standard counterpart (6). The minisuperspace of the aforementioned model is one-dimensional, implying that there is only one degree of freedom. The fractional extension of the corresponding model can be attained by substituting the ordinary derivative with its fractional extension. This concept is explained thoroughly in Ref. [8]. For a thorough understanding of the general formalism of fractional classical and quantum gravity, the interested reader may find more information in Refs. [19, 23, 11, 12, 6]. For the above fractional Wheeler-DeWitt equation, there is currently no general solution that explicitly bears a dependence on \(\alpha\). As a result, the Bohr-Sommerfeld quantization rule may be used. By inserting the exponential form of the wavefunction, \(\psi(z)=\exp(-iS)\), into the fractional Laplacian (11) and expanding \(\psi(z\pm v)\) in a Taylor series centered at \(z\), we can use the semi-classical approximation to obtain \[-\left(-\frac{d^{2}}{dz^{2}}\right)^{\frac{\alpha}{2}}\psi(z)=c_{1,\alpha}e^{ -iS}\int_{0}^{\infty}\frac{\sin^{2}(\frac{v S^{\prime}}{2})}{v^{1+\alpha}}dv=|S^{\prime}|^{\alpha}\psi(z), \tag{17}\] where \(S^{\prime}=dS/dz\). Using this relation, and \(S^{\prime}=p\) in Eq. (16), gives us \[|p|^{\alpha}+M_{\rm P}^{\alpha+2}z^{2}=M^{2}M_{\rm P}^{\alpha-2}, \tag{18}\] where \(p\) is the canonical momentum corresponding to \(z\). This equation is the fractional extension of the BH Hamiltonian (5). The classical turning points, i.e., \(|p|=0\), are \(z=\pm M/M_{\rm P}^{2}\). Therefore, the Bohr-Sommerfeld quantization rule reads \[2\pi n\;=\;\oint pdz\;=\;4\left(\frac{M}{M_{\rm P}}\right)^{\frac{2}{\alpha}+ 1}\int_{0}^{1}(1-y^{2})^{\frac{1}{\alpha}}dy\;=\;\frac{2\sqrt{\pi}\Gamma(\frac {d+1}{2})}{\Gamma(\frac{d+2}{2})}\left(\frac{M}{M_{\rm P}}\right)^{d}\;=\;2 \frac{\Omega_{d}}{\Omega_{d-1}}\left(\frac{M}{M_{\rm P}}\right)^{d}, \tag{19}\] where \(\Omega_{d}\) is the volume of a \(d\)-dimensional unit sphere, and we defined \[d=\frac{2}{\alpha}+1. \tag{20}\] Applying the conventional Bohr-Sommerfeld quantization procedure then yields the semi-classical mass spectrum \[M=\left(\frac{\Omega_{d-1}}{\Omega_{d}}n\pi\right)^{\frac{1}{d}}M_{\rm P},\;\; \;\;\;n=\{\mbox{large positive integer}\}. \tag{21}\] The frequency of the emitted thermal radiation, the fractional extension of (8), will be \[\omega_{0}(d)=M_{n+1}-M_{n}=\frac{\pi\Omega_{d-1}}{d\Omega_{d}}\left(\frac{M}{ M_{\rm P}}\right)^{1-d}M_{\rm P}. \tag{22}\] As shown in Ref. [8], the above fractional spectrum of the BH leads to the following modified entropy \[S_{\rm fractal-BH}=S_{\rm BH}^{\frac{d}{2}}, \tag{23}\] where \(S_{\rm BH}=4\pi GM^{2}\) is the entropy of the ordinary BH obtained in Eq. (9). One may derive this entropy from the adiabatic invariant (9); in this case the frequency of the emitted thermal radiation is given by (22), which yields \[\int_{M_{\rm P}}^{M}\frac{dM}{\omega_{0}(d)}=\frac{\Omega_{d}}{\pi\Omega_{d-1}(4 \pi)^{\frac{d}{2}}}S_{\rm fractal\text{-BH}}.
\tag{24}\] In addition, the Hawking temperature \(T_{\rm H}\) of the black hole can be obtained from the differential form of the above adiabatic invariant. To derive \(T_{\rm H}\), we simply rearrange the equation above as \[dM=\frac{\Omega_{d}\omega_{0}(d)}{\pi\Omega_{d-1}(4\pi)^{\frac{d}{2}}}dS_{\rm fractal \text{-BH}}. \tag{25}\] Comparing this relation with the first law of BH thermodynamics, \(dM=T_{\rm H}dS\), gives us \[T_{\rm H}=\frac{1}{d(4\pi)^{\frac{d}{2}}}\left(\frac{M}{M_{\rm P}}\right)^{1- d}M_{\rm P}, \tag{26}\] which is the same as the temperature obtained in Ref. [8]. It is widely acknowledged that fractional calculus can modify the fractal dimension [71]. In Refs. [10, 8], the authors demonstrate that Lévy's fractional parameter \(\alpha\) determines the fractal dimension of the BH horizon and of the cosmological horizon. Additionally, the authors of Ref. [9] indicate that, due to big jumps in minisuperspace, the initial value of the emergent classical scale factor depends on the universe's global geometry (or topology). Moreover, they show that the acceleration of the universe and the e-folding of the inflationary epoch are direct outcomes of its topology. This implies that the early inflationary epoch is a direct consequence of fractional quantum cosmology. In the following section, we provide a brief overview of the fundamental concepts and definitions of fractal geometry, in order to exhibit the impact of the fractional Wheeler-DeWitt equation on the entropy of the BH and to elucidate the significance of Eqs. (20) and (23). In section 4, we employ the notions discussed in section 3 to show that the fractal dimension of the fractal-fractional BH horizon surface is equal to \(d=2/\alpha+1\). ## 3 Fractals and fractal dimension Before we discuss how fractional quantum gravity transforms the smooth structure of a Schwarzschild BH's horizon into a fractal structure, let us review some fractal basics. Since the surface-to-volume ratio is inversely proportional to the linear size of the system, which is set by a large number of relevant units, the surface-to-volume ratio for typical macroscopic objects (sphere, cube, etc.) is small. Objects that nevertheless have a high surface-to-volume ratio are therefore porous or hairy. For instance, the lung's high surface-to-volume ratio can be explained by the necessity for fast gas exchange. The respiratory surface of the human lungs (measured with a resolution of 100 \(\mu\)m) is the size of a tennis court, yet the volume it encloses is just a few liters. B. Mandelbrot [102] realized the general significance of such systems. He also coined the term "fractal" and devised a new type of geometry to describe them mathematically. Suppose \(D\) is the Euclidean dimension of the geometric entity containing the set of interest. For a given grid of \(D\)-dimensional cubes of size \(l\), the observed volume \(\Omega(l)\) is the total volume of the boxes necessary to cover the object (please see Fig. 1), that is, of the boxes containing a portion of the set. A fractal object is one whose observed volume displays power-law behavior with a nontrivial exponent as the resolution (grid size) changes over several orders of magnitude. It is natural to regard a hairy surface as an object with dimension greater than 2, with larger dimensions assigned to more ramified surfaces. The idea of the fractal dimension provides a quantitative formulation of this idea.
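Before formalizing the box-counting definition, it may help to see the procedure in action. The following sketch is an added illustration (not part of the original paper); it assumes Python with NumPy. It generates a random realization of the Sierpinski triangle by the chaos game, counts the occupied boxes at several grid sizes, and fits the resulting power law, recovering a dimension close to \(\log 3/\log 2\approx 1.585\).

```python
# Added illustration (not from the original paper): estimate the box-counting
# dimension of a fractal set (the Sierpinski triangle via the chaos game).
# The fitted slope of log N(eps) vs log(1/eps) approximates d ~ log 3 / log 2.
import numpy as np

rng = np.random.default_rng(0)
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])

# Chaos game: repeatedly jump halfway toward a randomly chosen vertex.
points = np.empty((200_000, 2))
p = np.array([0.1, 0.1])
for i in range(points.shape[0]):
    p = (p + vertices[rng.integers(3)]) / 2.0
    points[i] = p

def box_count(pts: np.ndarray, eps: float) -> int:
    """Number of eps-sized boxes that contain at least one point of the set."""
    return len({tuple(idx) for idx in np.floor(pts / eps).astype(int)})

eps_values = np.array([1/8, 1/16, 1/32, 1/64, 1/128])
counts = np.array([box_count(points, e) for e in eps_values])

# Fit N(eps) ~ k * eps^(-d), i.e. log N = d * log(1/eps) + log k.
slope, intercept = np.polyfit(np.log(1 / eps_values), np.log(counts), 1)
print(f"estimated box-counting dimension d ~ {slope:.3f}")   # close to 1.585
```

The formal definition of the box-counting (fractal) dimension used in this paper is given next.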
For the sake of this inquiry, assume \(L\) to be the set's characteristic linear size. The number of boxes necessary to completely cover the set is denoted by \(N(l,L)\) using the previously mentioned box-size \(l\) grid (please see Fig. 1). Figure 1: (a) a set (of stars) and a grid of size \(l\). \(L\) stands for the set's diameter. (b) The black boxes are required to enclose the set. This number can only depend on a dimensionless quantity, which must be \[\varepsilon=\frac{l}{L}. \tag{27}\] This means that the box size is expressed in units of \(L\). As a result, \(N(l,L)=N(\varepsilon)\). With decreasing box size, the number of nonempty cubes grows. Then, the following relation \[N(\varepsilon)=k\varepsilon^{-d},\ \ \ \ \varepsilon\ll 1, \tag{28}\] specifies the so-called fractal dimension, \(d\), as a positive number. Note that the proportionality constant, \(k\), is independent of the resolution. Moreover, \(d\leq D\) holds since \(N(\varepsilon)\) cannot be greater than the number of cubes required to fill the space completely. Fractals are objects that are self-similar or have a common appearance over a broad range of scales in the region defined by Eq. (28). This corresponds to the following scaling form \[N(\lambda\varepsilon)=k\lambda^{-d}\varepsilon^{-d}=\lambda^{-d}N(\varepsilon ),\ \ \ \ \lambda>0. \tag{29}\] Fractals are classified into two types: "deterministic fractals" and "random fractals". The first class (for example, the Koch curve or snowflake fractal) is constructed by deterministic rules. Because the second class is formed by nondeterministic rules, we refer to them as random fractals. The horizon of a BH cannot have a deterministic fractal structure due to the quantum randomness of quantum space-time [103]. Several authors [54, 104], influenced by the geometrical structure of the COVID-19 virus, suggested that the horizon of a BH should have a Koch snowflake fractal shape. This notion clearly contradicts quantum randomness in quantum gravity. Thus, we are interested in random fractals in this study. We assume that the fractal geometry of the medium \(\mathcal{B}\) under investigation is random. In other words, \(\mathcal{B}\) is considered to be the collection of all realizations \(B(\sigma)\) parametrized by elementary events, \(\sigma\), of the space \(\mathcal{V}\) \[\mathcal{B}=\{B(\sigma)|\ \sigma\in\mathcal{V}\}. \tag{30}\] Any realization \(B(\sigma)\) complies with the laws of quantum physics. ## 4 Fractal structure of fractional Schwarzschild BH Let us examine how the preceding section may be applied to the fractional entropy of a BH obtained in Eq. (23) of section 2. According to Bekenstein and Hawking, the entropy of a BH is equal to the surface area of the BH's horizon in units of the Planck area, \(A_{\rm P}=4L_{\rm P}^{2}\). Thus, one can rewrite Eq. (23) as \[S_{\rm fractal-BH}=\frac{A_{\rm fractal}}{A_{\rm P}}. \tag{31}\] In this equation, we defined \[A_{\rm fractal}=A_{\rm P}\left(\frac{A_{\rm S}}{A_{\rm P}}\right)^{\frac{d}{2}}=4\pi^{\frac{d}{2}}L_{\rm P}^{2-d}R_{\rm S}^{d}, \tag{32}\] where \(A_{\rm S}=16\pi G^{2}M^{2}\) is the Schwarzschild area of the BH, with corresponding radius \(R_{\rm S}=2MG\). According to definition (28), the number of boxes (squares) necessary to cover the surface area of the fractional BH is \[N=\frac{A_{\rm fractal}}{A_{\rm P}}=(4\pi)^{\frac{d}{2}}\varepsilon^{-d}, \tag{33}\] where \(\varepsilon=M_{\rm P}/M\ll 1\).
This shows that the entropy is equal to the number of squares to cover the surface area of the horizon, \(S_{\rm fractal-BH}=N\) completely. In addition, regarding the definition (28), the horizon of fractional BH is a fractal surface with a dimension equal to \(d=2/\alpha+1\), as defined in Eq.(20). Regarding Levy's fractional parameter, \(\alpha\) is restricted to the interval \(1<\alpha\leq 2\) (please see, for example, [83]), the fractal dimension of fractal-fractional BH is given by the following interval \[2\leq d<3. \tag{34}\] Employing the fractal BH entropy (31), one can define effective Schwarzschild radius, \(R_{\rm eff}\) and mass, \(M_{\rm eff}\) as \[\begin{split} R_{\rm eff}&=(4\pi)^{\frac{d-2}{4}} \left(\frac{R_{\rm S}}{L_{\rm P}}\right)^{\frac{d}{2}}L_{\rm P}.\\ M_{\rm eff}&=\frac{R_{\rm eff}}{2G}=(4\pi)^{\frac {d-2}{4}}\left(\frac{M}{M_{\rm P}}\right)^{\frac{d}{2}}M_{\rm P}.\end{split} \tag{35}\] As a result, one can rewrite the fractal surface area and fractal entropy of the BH as \[\begin{split} A_{\rm fractal}&=4\pi R_{\rm eff} ^{2},\\ S_{\rm fractal-BH}&=\frac{A_{\rm fractal}}{4G}= \frac{\pi}{G}R_{\rm eff}^{2}.\end{split} \tag{36}\] Now, the generic expression for the fractional-fractal Bekenstein-Hawking temperature may be deduced using the first law of thermodynamics \[\frac{1}{T_{\rm eff}}=\frac{dS_{\rm fractal-BH}}{dM_{\rm eff}},\ \ \ \ \ T_{\rm eff}=\frac{1}{2\pi R_{\rm eff}}. \tag{37}\] Note that we use units in which \(\hbar=c=k_{B}=1\). It is worth noting that when \(d=2\), or equivalently \(\alpha=2\), all fractional-fractal quantities acquired in the preceding will be reduced to their original values. ## 5 Emergent Cosmology According to Padmanabhan's proposal [55], classical gravity is an emerging phenomenon, and cosmic space emerged as cosmic time advanced. He claimed that the difference in the number of degrees of freedom between the holographic surface and the emerging bulk is proportional to the change in the cosmic volume. In this sense, he could effectively extrapolate the Friedmann equation describing the Universe's evolution with zero spatial curvature. Furthermore, claiming that space emerges around finite gravitational systems, such as the Sun-Earth system, is not the wisest course of action. Nevertheless, Padmanabhan intriguingly demonstrated that this issue is moot, at least in the context of cosmology, by picking any chosen time interval, say cosmic time. This is how Padmanabhan came to make the claim that the Universe will keep expanding until holographic equipartition takes place. He postulated that the variation of the cosmic volume, \(dV\), in an infinitesimal interval of cosmic time, \(dt\), is given by \[\frac{dV}{dt}=L_{\rm P}^{2}(N_{\rm sur}-N_{\rm bulk}), \tag{38}\] where \(N_{\rm sur}\) and \(N_{\rm bulk}\) are the surface and bulk degrees of freedoms, respectively. 
For a flat (\(k=0\)) FLRW Universe, as Padmanabhan assumed [55], the cosmic volume, the surface degree of freedom, and the bulk degree of freedom are \[V=\frac{4\pi}{3}R_{H}^{3},\ \ \ \ N_{\rm sur}=\frac{4\pi R_{\rm H}^{2}}{L_{ \rm P}^{2}},\ \ \ \ N_{\rm bulk}=\frac{2|E_{\rm Komar}|}{T}, \tag{39}\] where \(R_{\rm H}\) defined by \[R_{\rm H}=\frac{1}{H},\ \ \ \ (H=\frac{\dot{a}}{a}), \tag{40}\] is the radius of the Hubble horizon, \(V\) is its volume, \[T=\frac{1}{2\pi R_{\rm H}}=\frac{H}{2\pi}, \tag{41}\] is the temperature of the horizon, and \[|E_{\rm Komar}|=\epsilon(\rho+3p)V,\ \ \ \epsilon=\begin{cases}+1,&\text{if}\ \ \frac{p}{\rho}<-\frac{1}{3},\\ -1,&\text{if}\ \ \frac{p}{\rho}>-\frac{1}{3},\end{cases} \tag{42}\] is the proper Komar energy of a perfect fluid (\(\rho\) is the energy density, and \(p\) is the pressure of the fluid) contained inside the Hubble volume. Substituting Eqs. (39) into (38) gives the Raychaudhuri equation \[\frac{\ddot{a}}{a}=-\frac{4\pi G}{3}(\rho+3p). \tag{43}\] Note that to obtain the Friedmann equation, one also needs the continuity equation \(\dot{\rho}=-3H(\rho+p)\). It is well known the continuity equation is equivalent to \[\frac{dM_{\rm MSH}}{dt}=-p\frac{dV_{\rm PV}}{dt}, \tag{44}\] where \(M_{\rm MSH}\) is the Misner-Sharp-Hernandez mass, \[M_{\rm MSH}=\rho V_{\rm PV}. \tag{45}\] In these equations, \(V_{\rm PV}\) is the areal (proper) volume \[V_{\rm PV}=\frac{4\pi}{3}R_{\rm PR}^{3} \tag{46}\] and \(R_{\rm PR}=ra(t)\) is the proper, or areal, radius. ## 6 Modified emergent Cosmology from fractional-fractal entropy This paper aims to derive the modified Friedmann and Raychaudhuri equations from the BH's fractional-fractal entropy of cosmic space. Inspired by the expression of fractal entropy (36), assume that the effective area of the apparent horizon, which serves as our holographic screen, is a random fractal surface similar to the fractal BH obtained in the preceding section. The conclusions found in Reference [10] for a fractional quantum gravity of de Sitter space provide validity for this assumption. Their findings reveal that the effective area of the de Sitter space's apparent classical horizon has the same random fractal structure as the equation (36). As a result, extending the fractional-fractal surface of the apparent horizon is natural to a general homogeneous and isotropic Universe. Thus, let us define the effective random fractal radius, \(R_{\rm eff}\), the surface area \(A_{\rm fractal-H}\), and the corresponding volume, \(V_{\rm fractal-H}\) of the Hubble horizon by \[\begin{split}& R_{\rm eff}=(4\pi)^{\frac{d-2}{4}}\left(\frac{R_{ \rm H}}{L_{\rm P}}\right)^{\frac{d}{2}}L_{\rm P},\\ & A_{\rm fractal-H}=4\pi R_{\rm eff}^{2},\\ & V_{\rm fractal-H}=\frac{4\pi}{3}R_{\rm eff}^{3}.\end{split} \tag{47}\] In addition, we use the modified version of (38) given by \[\frac{dV_{\rm fractal-H}}{dt}=\frac{d}{2}L_{\rm P}^{2}R_{\rm eff}H(N_{\rm sur }-N_{\rm bulk}), \tag{48}\] proposed in Ref. [60]. This extension can produce the Friedmann and Raychaudhuri equations in higher-order gravity theories, such as Gauss-Bonnet and Lovelock gravities, with any spatial curvature. Using relations (47) in Eqs. 
(42) and (39) one can obtain the fractal extensions of \(N_{\rm sur}\) and \(N_{\rm bulk}\) \[\begin{split}& N_{\rm sur}=\frac{4\pi R_{\rm eff}^{2}}{L_{\rm P}^{2}},\ \ \ \ N_{\rm bulk}=\frac{2|E_{\rm Komar}|}{T},\\ & T=\frac{1}{2\pi R_{\rm eff}},\ \ \ |E_{\rm Komar}|=\epsilon\sum_{i}(\rho_{i}+3p_{i})V_{\rm fractal-H},\end{split} \tag{49}\] where in the Komar energy we consider a mixture of perfect fluids with \(p_{i}=\omega_{i}\rho_{i}\). Substituting the resulting relations into (48) gives us \[3\frac{\dot{R}_{\rm eff}}{R_{\rm eff}}=\frac{3dH}{2}-2\pi dL_{\rm P}^{2}\epsilon\sum_{i}(\rho_{i}+3p_{i})HR_{\rm eff}^{2}, \tag{50}\] where an overdot means a time derivative. Using the definition of \(R_{\rm eff}\) in Eq. (47) one can simplify the above equation into \[\dot{H}+H^{2}=-\frac{4\pi L_{\rm P}^{2}}{3}\sum_{i}(\rho_{i}+3p_{i})H^{2}R_{\rm eff}^{2}, \tag{51}\] which is the fractional-fractal extension of the Raychaudhuri equation. To obtain the corresponding Friedmann equation, we first have to obtain the fractional-fractal extension of the continuity equation (44). To this end, we define the fractional-fractal extension of the proper distance and proper volume (45) and (46) \[\begin{split}& R_{\rm eff\text{-}PD}=(4\pi)^{\frac{d-2}{4}}\left(\frac{R_{\rm PD}}{L_{\rm P}}\right)^{\frac{d}{2}}L_{\rm P},\\ & V_{\rm fractal\text{-}PV}=\frac{4\pi}{3}R_{\rm eff\text{-}PD}^{3}.\end{split} \tag{52}\] As a result, the effective Misner-Sharp-Hernandez mass will be \[M_{\rm MSH}^{(\rm eff)}=\sum_{i}\rho_{i}V_{\rm fractal\text{-}PV}. \tag{53}\] The continuity equation (44) is modified to \[dM_{\rm MSH}^{(\rm eff)}=-\sum_{i}p_{i}dV_{\rm fractal\text{-}PV}. \tag{54}\] This equation is equivalent to \[\dot{\rho}_{i}=-\frac{3d}{2}(\rho_{i}+p_{i})H. \tag{55}\] In addition, note that the above extension of the proper distance gives us the fractional extension of the redshift-scale factor relation. The scale factor is related to the observed redshift \(z\) of the light emitted at the time \(t_{\rm em}\) by \[\left(\frac{a_{0}}{a(t_{\rm em})}\right)^{\frac{d}{2}}=1+z. \tag{56}\] Using the modified continuity equation (55), one can show that \[(\rho_{i}+3p_{i})H=-\frac{2}{da^{d}}\frac{d}{dt}(a^{d}\rho_{i}). \tag{57}\] Now, using the definition of the effective radius, \(R_{\rm eff}\), in (47), and the above equation, one can simplify Eq. (50) into \[\frac{d}{dt}\left(a^{d}H^{d}\right)=\frac{2}{3}(4\pi)^{\frac{d}{2}}L_{\rm P}^{4-d}\frac{d}{dt}\sum_{i}(a^{d}\rho_{i}). \tag{58}\] This gives us the fractional-fractal extension of the Friedmann equation for flat space \[H^{d}=\frac{2}{3}(4\pi)^{\frac{d}{2}}L_{\rm P}^{4-d}\sum_{i}\rho_{i}. \tag{59}\] One can rewrite the above equation as \[H^{2}=\frac{8\pi G}{3}\left(\frac{2}{3\rho_{\rm P}}\sum_{j}\rho_{j}\right)^{\frac{2}{d}-1}\sum_{i}\rho_{i}, \tag{60}\] where \(\rho_{\rm P}=1/L_{\rm P}^{4}=5.1550\times 10^{96}\) kg/m\({}^{3}\) is the Planck energy density. Again, using the definition of \(R_{\rm eff}\), the relation \(\dot{H}=\ddot{a}/a-H^{2}\) and (59) in (51), we obtain the fractional-fractal Raychaudhuri equation \[\frac{\ddot{a}}{a}=-\frac{4\pi G}{3}\left(\frac{2}{3\rho_{\rm P}}\sum_{j}\rho_{j}\right)^{\frac{2}{d}-1}\sum_{i}(\rho_{i}+3p_{i}). \tag{61}\] It is worth noting that when \(d=2\) (or \(\alpha=2\)), Eqs. (60) and (61) reduce to the original Friedmann and Raychaudhuri equations.
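As a quick consistency check of this section, one can verify numerically that the rewritten Friedmann equation (60) agrees with (59) for arbitrary \(d\), and that both reduce to the standard \(H^{2}=\frac{8\pi G}{3}\sum_{i}\rho_{i}\) at \(d=2\). The sketch below is an added illustration (not part of the original paper); it assumes Python with NumPy and works in Planck units \(c=\hbar=1\), \(L_{\rm P}=1\), so that \(G=L_{\rm P}^{2}=1\) and \(\rho_{\rm P}=1\).

```python
# Added numerical sanity check (not from the original paper): compare the
# fractional-fractal Friedmann equation (59) with its rewritten form (60),
# and confirm the d = 2 limit reproduces H^2 = (8 pi G / 3) rho.
# Units: c = hbar = 1 and L_P = 1, so G = L_P^2 = 1 and rho_P = 1/L_P^4 = 1.
import numpy as np

L_P = 1.0
G = L_P**2
rho_P = 1.0 / L_P**4

def H_from_eq59(rho: float, d: float) -> float:
    """Hubble rate from H^d = (2/3) (4 pi)^(d/2) L_P^(4-d) * rho, Eq. (59)."""
    return ((2.0 / 3.0) * (4.0 * np.pi)**(d / 2) * L_P**(4 - d) * rho)**(1.0 / d)

def H2_from_eq60(rho: float, d: float) -> float:
    """H^2 from Eq. (60): (8 pi G / 3) * (2 rho / (3 rho_P))^(2/d - 1) * rho."""
    return (8.0 * np.pi * G / 3.0) * (2.0 * rho / (3.0 * rho_P))**(2.0 / d - 1.0) * rho

rho = 1.0e-6          # a sub-Planckian energy density, in Planck units
for d in (2.0, 2.01314, 2.5, 2.9):
    print(f"d = {d:7.5f}:  H^2 (59) = {H_from_eq59(rho, d)**2:.6e},"
          f"  H^2 (60) = {H2_from_eq60(rho, d):.6e}")

# At d = 2 both expressions equal the standard (8 pi G / 3) * rho.
assert np.isclose(H_from_eq59(rho, 2.0)**2, 8.0 * np.pi * G / 3.0 * rho)
```

The same kind of check can be repeated for the Raychaudhuri equation (61).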
## 7 Lambda-CDM cosmological model In this part, we briefly review some basic aspects of the standard model of cosmology in preparation for the modification and extension of the \(\Lambda\)CDM model in the following section. The \(\Lambda\)CDM model is a formulation of the relativistic cosmology, according to which the Universe is made up of three main components: ordinary matter, the postulated cold dark matter (CDM), and a cosmological constant, \(\Lambda\). Due to its simplicity and ability to fairly explain the cosmic microwave background's existence and structure the large-scale structure of the Universe, the observed abundances of lithium, helium, and hydrogen (including deuterium), and the late time acceleration of the Universe, it is commonly referred to as the standard model of cosmology. In this model, which has been used in observational cosmology since the 1960s, the Friedmann equation gives the time evolution of the scale factor as a function of the Hubble parameter at the present time, \(H_{0}\), and three independent density parameters. The Friedmann equation for a spatially flat Universe of the standard model of cosmology is \[H^{2}=H_{0}^{2}\left\{\Omega_{0}^{\rm(rad)}\left(\frac{a_{0}}{a}\right)^{4}+ \Omega_{0}^{\rm(cm)}\left(\frac{a_{0}}{a}\right)^{3}+\Omega^{\rm(\Lambda)} \right\}, \tag{62}\] where \(\Omega_{0}^{\rm(i)}=\frac{8\pi G\rho_{i}(t_{0})}{3H_{0}^{2}}\) are density parameters of radiation, \(i=\)"rad", cold matter, \(i=\)"cm" and the cosmological constant, \(i=\Lambda\), at the present time, \(t=t_{0}\), respectively. The Hubble parameter \(H_{0}\) is usually written as \[H_{0}=100h\ {\rm Km\ sec^{-1}\ Mpc^{-1}}=2.1332h\times 10^{-42}\ {\rm GeV}, \tag{63}\] where \(h\) represents the uncertainty on the value \(H_{0}\). The observations of the Planck 2018 collaboration [105] constrain this value to be \(h=0.674\pm 0.005\) based on _Planck_ TT, TE, EE+lowE+lensing CMB data. By Planck's 2018 collaboration, the current dark energy density parameter is \[\Omega_{0}^{\rm(A)}=0.685\pm 0.007. \tag{64}\] The density parameter \(\Omega_{0}^{\rm(cm)}\) is equal to the total of the baryon \(\Omega_{0}^{\rm(b)}\) and CDM \(\Omega_{0}^{\rm(CDM)}\) contributions, i.e. \(\Omega^{\rm(cm)}=\Omega_{0}^{\rm(CDM)}+\Omega_{0}^{\rm(b)}\). Following Planck 2018 collaboration [105], for \(\Omega_{0}^{\rm(b)}\) and \(\Omega_{0}^{\rm(CDM)}\), respectively we have \[\Omega_{0}^{\rm(b)}h^{2} =0.02237\pm 0.00015,\] \[\Omega_{0}^{\rm(CDM)}h^{2} =0.1200\pm 0.0012. \tag{65}\] Assuming \(h=0.674\), then the density parameters are \(\Omega_{0}^{\rm(b)}=0.04924319\), and \(\Omega_{0}^{\rm(CDM)}=0.2641566\), for the central value. Thus, \[\Omega_{0}^{\rm(cm)}=0.315. \tag{66}\] Results from the Planck 2018 collaboration [105] for TT, TE, EE+lowE+lensing added with BAO data also suggests that our Universe is accurately spatially flat with a curvature density \(\Omega_{0}^{\rm(k)}=0.0007\pm 0.0019\). The expression (62) gives us the age of the Universe \[t_{0}=\frac{1}{H_{0}}\int_{0}^{1}\frac{dx}{x\left[\Omega_{0}^{\rm(rad)}x^{-4}+ \Omega_{0}^{\rm(cm)}x^{-3}+\Omega^{\rm(\Lambda)}\right]^{\frac{1}{2}}}, \tag{67}\] where \(x=a/a_{0}\). Since the current energy density parameter of the radiation is of the order of \(10^{-5}-10^{-4}\), radiation becomes important only for high redshifts, i.g., \(z\simeq 1000\). Hence, it is a reasonable approximation to neglect the contribution from radiation in the above integral. The cosmic age of the Universe is constrained to be \(t_{0}=13.797\pm 0.023\) Gyr [105]. 
Under this bound, we find that the density parameter of the non-relativistic matter is constrained to satisfy (66). As a result, CDM is an essential component of the standard model of cosmology in order to explain the age of the Universe. At the present epoch, the deceleration parameter, \(q\), is connected to the combination of \(\Omega_{0}^{\rm(cm)}\) and \(\Omega^{\rm(\Lambda)}\) by \(q=-\Omega^{\rm(\Lambda)}+\Omega_{0}^{\rm(cm)}/2\). ## 8 Lambda-Cold Baryonic Matter cosmological model In a similar fashion to the standard model of cosmology, we assume the admixture of three species for the matter content of the model Universe: relativistic radiation, with the equation of state parameter \(\omega=1/3\), cold matter, with \(\omega=0\), and the cosmological constant, \(\Lambda\), with \(\omega=-1\). Using the continuity equation (55) gives the energy density of these species as \[\rho_{\rm rad}(t) =\rho_{\rm rad}(t_{0})\left(\frac{a}{a_{0}}\right)^{-2d}, \tag{68}\] \[\rho_{\rm cm}(t) =\rho_{\rm cm}(t_{0})\left(\frac{a}{a_{0}}\right)^{-\frac{3d}{2}},\] \[\rho_{\Lambda}(t) =\rho_{\Lambda}(t_{0}),\] where \(t_{0}\) stands for the present epoch, and "rad" and "cm" denote relativistic radiation and cold matter, respectively. Plugging these relations into the Friedmann equation (60) gives \[H^{2}=H_{0}^{2}\Big{\{}\Omega_{0}^{\rm(rad,fractal)}x^{-2d}+\Omega_{0}^{\rm(cm,fractal)}x^{-\frac{3d}{2}}+\Omega^{\rm(\Lambda,fractal)}\Big{\}}^{\frac{2}{d}}, \tag{69}\] where \(x=a/a_{0}\), \(H_{0}\) is the Hubble parameter at the present time, and \[\Omega_{0}^{\rm(i,fractal)}=\frac{8\pi G\rho_{i}(t_{0})}{3H_{0}^{2}}\left(\frac{L_{\rm P}H_{0}}{2\sqrt{\pi}}\right)^{2-d}=\Omega_{0}^{\rm(i)}\left(\frac{L_{\rm P}H_{0}}{2\sqrt{\pi}}\right)^{2-d}, \tag{70}\] denote the density parameters of the three species, where the subscript \(i\) is one of \(i=\)rad, cm, \(\Lambda\). Note that \(\Omega_{0}^{\rm(i)}\) denotes the standard density parameters of \(\Lambda\)CDM cosmology defined in (62). It follows that, similar to the standard cosmology, the present density parameters defined in Eq. (70) obey \[\Omega_{0}^{\rm(rad,fractal)}+\Omega_{0}^{\rm(cm,fractal)}+\Omega_{0}^{\rm(\Lambda,fractal)}=1. \tag{71}\] Using the fractional-fractal Friedmann equation (60), we can determine the age of the Universe as \[t_{0}=\frac{1}{H_{0}}\int_{0}^{1}\frac{dx}{x\Big{[}\Omega_{0}^{\rm(rad,fractal)}x^{-2d}+\Omega_{0}^{\rm(cm,fractal)}x^{-\frac{3d}{2}}+\Omega^{\rm(\Lambda,fractal)}\Big{]}^{\frac{1}{d}}}. \tag{72}\] Table 1 shows the age of the Universe for various values of \(d\). Here, we neglect the contribution of radiation, and we assumed \(\Omega_{0}^{\rm(cm,fractal)}=0.315\) (similar to the standard \(\Lambda\)CDM cosmology). As can be seen, altering the fractal dimension affects the Universe's age, which ranges from 12.760 Gyr to 13.797 Gyr, the latter value being obtained for \(d=2\), for which the standard \(\Lambda\)CDM cosmology holds. On the other hand, it significantly alters the "actual" density parameter of the Universe's cold matter content. According to Eq. (70) the ordinary density parameter is \[\Omega_{0}^{\rm(cm)}=\frac{8\pi G\rho_{\rm cm}(t_{0})}{3H_{0}^{2}}=\Omega_{0}^{\rm(cm,fractal)}\left(\frac{L_{\rm P}H_{0}}{2\sqrt{\pi}}\right)^{d-2}. \tag{73}\] Altering \(d\) from two towards three changes the required cold matter density parameter from 0.315 to about \(10^{-56}\); please see Table 1. This means that the fractal dimension could highly amplify the effective contribution of the matter content to the cosmological parameters.
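The entries of Table 1 below can be reproduced directly from Eqs. (72) and (73). The following sketch is an added illustration (not part of the original paper); it assumes Python with NumPy/SciPy, takes \(h=0.674\), \(t_{H}=1/H_{0}=14.508\) Gyr, \(\Omega_{0}^{\rm(cm,fractal)}=0.315\), \(\Omega_{0}^{\rm(\Lambda,fractal)}=0.685\), and neglects radiation.

```python
# Added illustration (not from the original paper): reproduce Table 1 from
# Eqs. (72) and (73), neglecting radiation.  Assumes NumPy and SciPy.
import numpy as np
from scipy.integrate import quad

h = 0.674
t_H = 14.508                      # Hubble time 1/H_0 in Gyr
Om_cm_fractal, Om_L_fractal = 0.315, 0.685

# Dimensionless combination L_P * H_0 (with c = 1): Planck length over Hubble radius.
L_P = 1.616255e-35                                # m
H0 = h * 100.0 * 1.0e3 / 3.0857e22                # s^-1
c = 2.99792458e8                                  # m/s
LP_H0 = L_P * H0 / c                              # ~ 1.2e-61

def age_gyr(d: float) -> float:
    """Age of the Universe from Eq. (72): t_0 = t_H * int_0^1 dx / (x [...]^{1/d})."""
    integrand = lambda x: 1.0 / (x * (Om_cm_fractal * x**(-1.5 * d) + Om_L_fractal)**(1.0 / d))
    val, _ = quad(integrand, 0.0, 1.0)
    return t_H * val

def omega_cm_actual(d: float) -> float:
    """'Actual' cold matter density parameter from Eq. (73)."""
    return Om_cm_fractal * (LP_H0 / (2.0 * np.sqrt(np.pi)))**(d - 2.0)

for d in (2.0, 2.01314, 2.1, 2.5, 2.9):
    print(f"d = {d:7.5f}:  t_0 = {age_gyr(d):6.3f} Gyr,  Omega_cm = {omega_cm_actual(d):.3g}")
# The output is close to the rows of Table 1, e.g. t_0 ~ 13.80 Gyr and
# Omega_cm = 0.315 at d = 2, and Omega_cm ~ 0.049 at d = 2.01314.
```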
A very interesting case is \(d=2.01314\). For this value of the fractal dimension, the actual density parameter of the cold matter is \(\Omega_{0}^{\rm(cm)}=0.049\), which is equal to the density parameter of baryonic matter \(\Omega_{0}^{\rm(b)}\). This means that if the measured total cold matter content of the Universe is of fractal origin, then the baryonic matter density parameter alone suffices and no CDM component is needed. Thus, what standard \(\Lambda\)CDM cosmology introduces as a CDM contribution to the cold matter of the Universe is reflected in the present model by the fractal extension expressed by Eq. (73) with \(d=2.01314\). In addition, using the fractional-fractal Raychaudhuri equation (61) and the new fractal redshift relation (56), one obtains the deceleration parameter as \[q(z)=\frac{2\Omega_{0}^{\rm(rad,fractal)}(1+z)^{4}+\Omega_{0}^{\rm(cm,fractal)}(1+z)^{3}-2 \Omega_{0}^{\rm(\Lambda,fractal)}}{2\Big{[}\Omega_{0}^{\rm(rad,fractal)}(1+z)^{ 4}+\Omega_{0}^{\rm(cm,fractal)}(1+z)^{3}+\Omega_{0}^{\rm(\Lambda,fractal)} \Big{]}}. \tag{74}\] This demonstrates that the deceleration parameter is the same function of redshift as the conventional one. As a result, the redshift of the transition from deceleration to acceleration is identical to that of the standard model of cosmology. ## 9 Conclusions According to fractional quantum gravity, a black hole's event horizon is a random fractal surface with a dimension less than three and greater than or equal to two. In addition, the event horizon of de Sitter space-time has the same property. These results motivate taking the concept of the fractal horizon and applying it to the cosmological context. We obtained the Friedmann and Raychaudhuri equations utilizing Padmanabhan's emergent cosmic space paradigm by assuming a fractal horizon surface for an isotropic and homogeneous universe. Our model provides an alternative perspective, implying that fractional-fractal characteristics may have impacted the measurable features of the Universe's matter content by modifying the density parameter of cold matter. Specifically, altering the fractal dimension could highly amplify the baryonic matter content's effective contribution. Therefore, just as the fractal-fractional alteration prevents the need for dark matter at galactic scales [49, 76, 77, 78, 79, 80], it can do the same at cosmological scales. \begin{table} \begin{tabular}{|c|c|c|} \hline \(d\) & \(t_{0}\) (Gyr) & \(\Omega_{0}^{\rm(cm)}\) \\ \hline 2 & 13.797 & 0.315 \\ \hline 2.01314 & 13.777 & 0.049 \\ \hline 2.1 & 13.648 & \(2.2\times 10^{-7}\) \\ \hline 2.5 & 13.147 & \(5.7\times 10^{-32}\) \\ \hline 2.7 & 12.941 & \(2.9\times 10^{-44}\) \\ \hline 2.9 & 12.760 & \(1.5\times 10^{-56}\) \\ \hline \end{tabular} \end{table} Table 1: The age of the Universe, \(t_{0}\), and the density parameter of cold matter for various values of the fractal dimension \(d\). Here we consider \(h=0.674\) and the Hubble time \(t_{H}=1/H_{0}=14.508\) Gyr. ## Acknowledgments S.J. acknowledges financial support from the National Council for Scientific and Technological Development-CNPq, Brazil, Grant no. 308131/2022-3.
2305.19820
Partial domination in supercubic graphs
For some $\alpha$ with $0 < \alpha \le 1$, a subset $X$ of vertices in a graph $G$ of order~$n$ is an $\alpha$-partial dominating set of $G$ if the set $X$ dominates at least $\alpha \times n$ vertices in $G$. The $\alpha$-partial domination number ${\rm pd}_{\alpha}(G)$ of $G$ is the minimum cardinality of an $\alpha$-partial dominating set of $G$. In this paper partial domination of graphs with minimum degree at least $3$ is studied. It is proved that if $G$ is a graph of order~$n$ and with $\delta(G)\ge 3$, then ${\rm pd}_{\frac{7}{8}}(G) \le \frac{1}{3}n$. If in addition $n\ge 60$, then ${\rm pd}_{\frac{9}{10}}(G) \le \frac{1}{3}n$, and if $G$ is a connected cubic graph of order $n\ge 28$, then ${\rm pd}_{\frac{13}{14}}(G) \le \frac{1}{3}n$. Along the way it is shown that there are exactly four connected cubic graphs of order $14$ with domination number $5$.
Csilla Bujtás, Michael A. Henning, Sandi Klavžar
2023-05-31T13:02:19Z
http://arxiv.org/abs/2305.19820v1
# Partial domination in supercubic graphs ###### Abstract For some \(\alpha\) with \(0<\alpha\leq 1\), a subset \(X\) of vertices in a graph \(G\) of order \(n\) is an \(\alpha\)-partial dominating set of \(G\) if the set \(X\) dominates at least \(\alpha\times n\) vertices in \(G\). The \(\alpha\)-partial domination number \(\operatorname{pd}_{\alpha}(G)\) of \(G\) is the minimum cardinality of an \(\alpha\)-partial dominating set of \(G\). In this paper partial domination of graphs with minimum degree at least \(3\) is studied. It is proved that if \(G\) is a graph of order \(n\) and with \(\delta(G)\geq 3\), then \(\operatorname{pd}_{\frac{7}{8}}(G)\leq\frac{1}{3}n\). If in addition \(n\geq 60\), then \(\operatorname{pd}_{\frac{9}{10}}(G)\leq\frac{1}{3}n\), and if \(G\) is a connected cubic graph of order \(n\geq 28\), then \(\operatorname{pd}_{\frac{13}{14}}(G)\leq\frac{1}{3}n\). Along the way it is shown that there are exactly four connected cubic graphs of order \(14\) with domination number \(5\). **Keywords:** domination; partial domination; cubic graph; supercubic graph **AMS Subj. Class.:** 05C69 ## 1 Introduction One of the central themes of the theory of graph domination is establishing upper bounds for graphs with a prescribed minimum degree as a function of graph order. The topic is surveyed in depth in the paper [14] as well as in the 2023 book [13]. Special attention has been paid to cubic graphs and graphs of minimum degree at least \(3\). For the latter graphs, Reed [25] in 1996 established a best possible upper bound. **Theorem 1.1**: ([25]) _If \(G\) is a graph of order \(n\) with \(\delta(G)\geq 3\), then \(\gamma(G)\leq\frac{3}{8}n\)._ In 2009, Kostochka and Stocker sharpened Reed's bound for connected cubic graphs as follows. **Theorem 1.2**: ([19]) _If \(G\) is a connected cubic graph of order \(n\), then \(\gamma(G)\leq\frac{5}{14}n\), unless \(G\) is one of the two graphs \(A_{1}\) and \(A_{2}\) shown in Figure 1._ Kostochka and Stocker further proved that the graphs \(A_{1}\) and \(A_{2}\) are the only connected, cubic graphs that achieve the \(\frac{3}{8}\)-bound of Theorem 1.1. On the other hand, Reed [25] conjectured that \(\gamma(G)\leq\lceil\frac{1}{3}n\rceil\) whenever \(G\) is a connected cubic graph of order \(n\). Kostochka and Stodolsky [17] disproved this conjecture by constructing an infinite sequence \(\{G_{k}\}_{k=1}^{\infty}\) of connected, cubic graphs with \[\lim_{k\to\infty}\frac{\gamma(G_{k})}{|V(G_{k})|}\geq\frac{1}{3}+\frac{1}{69}.\] Subsequently, Kelmans [16] constructed an infinite series of \(2\)-connected, cubic graphs \(H_{k}\) with \[\lim_{k\to\infty}\frac{\gamma(H_{k})}{|V(H_{k})|}\geq\frac{1}{3}+\frac{1}{60}.\] Thus, there exist connected cubic graphs \(G\) of arbitrarily large order \(n\) satisfying \[\gamma(G)\geq\left(\frac{1}{3}+\frac{1}{60}\right)n.\] So, \(\gamma(G)\leq\lceil\frac{1}{3}n\rceil\) does not hold for all connected cubic graphs. On the other hand, in 2010 Verstraete conjectured that if \(G\) is a cubic graph of order \(n\) and girth at least \(6\), then \(\gamma(G)\leq\frac{1}{3}n\), see [13, Conjecture 10.23]. In [22] the conjecture has been verified for cubic graphs with girth at least 83. Further upper bounds on the domination number of a cubic graph in terms of its order and girth were proved in [18, 21, 24]. The following concepts were independently introduced in [4, 5]. Let \(G=(V(G),E(G))\) be a graph of order \(n\).
For some \(\alpha\) with \(0<\alpha\leq 1\), a set \(S\subseteq V(G)\) is an \(\alpha\)-_partial dominating set_ of \(G\) if \[|N_{G}[S]|\geq\alpha\times n,\] that is, the set \(S\) dominates at least \(\alpha n\) vertices in \(G\). The \(\alpha\)_-partial domination number_ of \(G\), denoted by \(\operatorname{pd}_{\alpha}(G)\) (also by \(\gamma_{\alpha}(G)\) in the literature), is the minimum cardinality of an \(\alpha\)-partial dominating set of \(G\). Investigations on the concept of partial domination in graphs can be found in [3, 4, 5, 6, 20, 23]. At this point, it should be pointed out that the term "partial domination" is also used to refer to a concept that is different from ours [2]. We also remark that the concept of an \(\alpha\)-dominating set [7, 8, 15] is different from our concept of an \(\alpha\)-partial dominating set. In light of the above, this paper addresses the following natural question: What is the largest possible value on \(\alpha\) such that the \(\alpha\)-partial domination number of a connected cubic graph is at most one-third the order of the graph? We further consider the same question in the more general setting of graphs with minimum degree at least 3. We proceed as follows. In Section 2, we present the graph theory terminology we adopt in this paper, and state preliminary results. In Section 3, we prove that the \(\frac{7}{8}\)-partial domination number of a connected cubic graph \(G\) of order 14 is at most 4. Thereafter in Section 4, we prove that the \(\frac{7}{8}\)-partial domination number of a graph with minimum degree at least 3 is at most one-third its order, and prove a stronger statement if the order of the graph is large enough. In Section 5 we show that there are exactly four connected cubic graphs of order 14 with domination number 5, and conjecture that these are the only graphs achieving equality in the upper bound \(\gamma(G)\leq\frac{5}{14}n\) given by Kostochka and Stocker in Theorem 1.2. ## 2 Preliminaries In this section, we call up the definitions, concepts and known results that we need for what follows. Let \(G=(V(G),E(G))\) be a graph. The _open neighborhood_\(N_{G}(v)\) of a vertex \(v\) in \(G\) is the set of vertices adjacent to \(v\), while the _closed neighborhood_ of \(v\) is the set \(N_{G}[v]=\{v\}\cup N_{G}(v)\). For a set \(D\subseteq V(G)\), its _open neighborhood_ is the set \(N_{G}(D)=\cup_{v\in D}N_{G}(v)\), and its _closed neighborhood_ is the set \(N_{G}[D]=N_{G}(D)\cup D\). The minimum and maximum degrees in \(G\) are denoted by \(\delta(G)\) and \(\Delta(G)\), respectively. The graph \(G\) is \(r\)-_regular_ if every vertex in \(G\) has degree \(r\). A 3-regular graph is called a _cubic graph_, and a graph \(G\) with \(\Delta(G)\leq 3\) a _subcubic graph_. To these established terms we add the term _supercubic graph_ which refers to graphs \(G\) with \(\delta(G)\geq 3\). A _dominating set_ of a graph \(G\) is a set \(S\) of vertices of \(G\) such that every vertex not in \(S\) has a neighbor in \(S\). The _domination number_ of \(G\), denoted by \(\gamma(G)\), is the minimum cardinality of a dominating set. A \(\gamma\)-set of \(G\) is a dominating set of \(G\) of minimum cardinality \(\gamma(G)\). Let \(X\) and \(Y\) be subsets of vertices in \(G\). The set \(X\)_dominates_ the set \(Y\) if every vertex in \(Y\) is in the set \(X\) or has a neighbor in the set \(X\), that is, if \(Y\subseteq N_{G}[X]\). 
If \(X\) is a set of vertices in a graph \(G\), then we denote by \(\operatorname{dom}_{G}(X)\) the number of vertices dominated by the set \(X\), and so \({\rm dom}_{G}(X)=|N_{G}[X]|\). A thorough treatise on domination in graphs can be found in [11, 12, 13]. For a set of vertices \(S\) in a graph \(G\) and a vertex \(v\in S\), the \(S\)_-private neighborhood_ of \(v\) is defined by \({\rm pn}[v,S]=\{w\in V(G)\colon N_{G}[w]\cap S=\{v\}\}\). The \(S\)_-external private neighborhood_ of \(v\) is the set \({\rm epn}[v,S]={\rm pn}[v,S]\setminus S\). (The set \({\rm epn}[v,S]\) is also denoted \({\rm epn}(v,S)\) in the literature.) An \(S\)_-external private neighbor_ of \(v\) is a vertex in \({\rm epn}[v,S]\). In 1979, Bollobas and Cockayne [1] established the following property of minimum dominating sets in graphs to be used later on. **Lemma 2.1**: ([1]) _Every isolate-free graph \(G\) contains a \(\gamma\)-set \(D\) such that \({\rm epn}[v,D]\neq\emptyset\) for every vertex \(v\in D\)._ A set \(S\) of vertices in \(G\) is a _packing_ in \(G\) if the closed neighborhoods of vertices in \(S\) are pairwise disjoint. Equivalently, \(S\) is a packing in \(G\) if the vertices in \(S\) are pairwise at distance at least \(3\). A packing is sometimes called a \(2\)-packing in the literature. The _packing number_ of \(G\), denoted by \(\rho(G)\), is the maximum cardinality of a packing in \(G\). In 1996, Favaron [9] proved the following result on the packing number of a cubic graph. **Theorem 2.2**: (Favaron [9]) _If \(G\) is a connected cubic graph of order \(n\) different from the Petersen graph, then \(\rho(G)\geq\frac{n}{8}\)._ For a set of vertices in \(G\), the subgraph of \(G\) induced by \(S\) is denoted by \(G[S]\). Finally, the _boundary_ of a set \(S\) of vertices in \(G\), denoted by \(\partial(S)\), is the set of vertices not in \(S\) that have a neighbor in \(S\), that is, \(\partial(S)=N_{G}[S]\setminus S\). ## 3 (Partial) domination in cubic graphs of order \(14\) In this section, we present a preliminary result that the \(\frac{7}{8}\)-partial domination number of a connected cubic graph \(G\) of order \(14\) is at most \(4\). We will need this result when proving our main theorem in Section 4. **Theorem 3.1**: _If \(G\) is a connected cubic graph of order \(n=14\), then_ \[{\rm pd}_{\frac{7}{8}}(G)\leq 4<\frac{1}{3}n\,.\] **Proof.** Let \(\alpha=\frac{7}{8}\) and let \(G\) be a connected cubic graph of order \(n=14\). Let \(\gamma=\gamma(G)\). By Theorem 1.2, \(\gamma\leq\lfloor\frac{5}{14}n\rfloor=5\). If \(\gamma\leq\lfloor\frac{1}{3}n\rfloor=4\), then every \(\gamma\)-set of \(G\) is certainly an \(\alpha\)-partial dominating set of \(G\). Thus in this case, \({\rm pd}_{\alpha}(G)\leq\gamma\leq 4\), as desired. Hence we may assume in what follows that \(\gamma=5\). By Theorem 2.2, the graph \(G\) has packing number \(\rho(G)\geq\lceil\frac{n}{8}\rceil=2\). Let \(P\) be a maximum packing in \(G\), and so \(|P|=\rho(G)\geq 2\). Suppose that \(\rho(G)>2\), implying that \(\rho(G)=3\). In this case, \({\rm dom}_{G}(P)=|N_{G}[P]|=12\). Thus if \(v\) is any one of the two vertices in \(V(G)\setminus N_{G}[P]\) and \(S=P\cup\{v\}\), then the set \(S\) satisfies \({\rm dom}_{G}(S)\geq 13>\frac{7}{8}n\), and so \({\rm pd}_{\alpha}(G)\leq|S|=4\). Hence we may assume that \(|P|=\rho(G)=2\), for otherwise the desired result follows. Let \(X=V(G)\setminus N_{G}[P]\), and so \(|X|=6\). 
If a vertex in \(X\) has all three of its neighbors in the set \(X\), then we can add such a vertex to the set \(P\) to produce a packing of cardinality \(3\), contradicting our assumption that \(\rho(G)=2\). Hence, every vertex in \(X\) has at most two neighbors that belong to \(X\). Suppose that a vertex \(x\in X\) has two neighbors in the set \(X\). In this case, we let \(P_{x}=P\cup\{x\}\). The resulting set \(P_{x}\) satisfies \(|P_{x}|=3\) and \(\operatorname{dom}_{G}(P_{x})=4+4+3=11\). Let \(Z=V(G)\setminus N_{G}[P_{x}]\), and so \(|Z|=3\). If there is a vertex \(z\in Z\) with at least one neighbor in \(Z\), then the set \(S=P_{x}\cup\{z\}\) satisfies \(\operatorname{dom}_{G}(S)\geq 13\) and \(|S|=4\), and so as before \(\operatorname{pd}_{\alpha}(G)\leq|S|=4\). Hence, we may assume that \(Z\) is an independent set in \(G\). Thus, every vertex in \(Z\) has all three of its neighbors contained in the boundary \(\partial(P_{x})\) of the set \(P_{x}\). Denoting by \(\ell_{1}\) the number of edges between the sets \(Z\) and \(\partial(P_{x})\), we obtain \(\ell_{1}=3|Z|=9\). Since \(|\partial(P_{x})|=\operatorname{dom}_{G}(P_{x})-|P_{x}|=11-3=8\), by the Pigeonhole Principle at least one vertex \(v\) in the boundary \(\partial(P_{x})\) of \(P_{x}\) has at least two neighbors in \(Z\). Thus the set \(S=P_{x}\cup\{v\}\) satisfies \(\operatorname{dom}_{G}(S)\geq 13\) and \(|S|=4\), and so as before \(\operatorname{pd}_{\alpha}(G)\leq|S|=4\). Hence, we may assume that every vertex in \(X\) has at most one neighbor that belongs to \(X\), and therefore at least two neighbors that belong to the boundary \(\partial(P)\) of \(P\). Denoting by \(\ell_{2}\) the number of edges between the sets \(X\) and \(\partial(P)\), we obtain \(\ell_{2}\geq 2|X|=2\times 6=12\). However every vertex in \(\partial(P)\) has one neighbor in \(P\) and therefore at most two neighbors in \(X\), and so \(\ell_{2}\leq 2|\partial(P)|=2\times 6=12\). Consequently, \(\ell_{2}=12\), implying that \(\partial(P)\) is an independent set and each vertex in \(\partial(P)\) has exactly two neighbors in \(X\). Furthermore, each vertex in \(X\) has exactly two neighbors in \(\partial(P)\) and one neighbor in \(X\). In particular, the subgraph induced by the set \(X\) consists of three disjoint copies of \(P_{2}\), that is, \(G[X]=3P_{2}\). Let \(Y=\partial(P)\), and let \(H\) be the graph with vertex set \(X\cup Y\) and with edge set consisting of all edges in \(G\) between \(X\) and \(Y\). By our earlier observations, \(|X|=|Y|=6\). The resulting bipartite graph \(H\) has partite sets \(X\) and \(Y\) and is a \(2\)-regular graph of order \(12\). Thus, either \(H=2C_{6}\), or \(H=3C_{4}\), or \(H=C_{4}\cup C_{8}\), or \(H=C_{12}\). Let \(P=\{v_{1},v_{2}\}\). Let \(X=\{x_{1},x_{2},\ldots,x_{6}\}\) and \(Y=\{y_{1},y_{2},\ldots,y_{6}\}\). **Claim 1**: \(H\neq 2C_{6}\)_._ **Proof.** Suppose, to the contrary, that \(H=2C_{6}\). Renaming vertices in \(X\) and \(Y\) if necessary, we may assume that \(Q_{1}\colon x_{1}y_{1}x_{2}y_{2}x_{3}y_{3}x_{1}\) and \(Q_{2}\colon x_{4}y_{4}x_{5}y_{5}x_{6}y_{6}x_{4}\) are the two \(6\)-cycles in \(H\), and so \(H=Q_{1}\cup Q_{2}\). Renaming vertices if necessary, we may assume that \(v_{1}y_{1}\) is an edge of \(G\). Since \(v_{1}\) is adjacent to at most two vertices from the cycle \(Q_{2}\), we may assume, renaming vertices of \(Q_{2}\) if necessary, that \(v_{2}y_{4}\) is an edge of \(G\). Thus the graph \(F\) shown in Figure 2 is a spanning subgraph of \(G\). 
In this case, the set \(S=\{y_{1},x_{3},y_{4},x_{6}\}\) is a dominating set of \(F\) (where the vertices in \(S\) are shaded in Figure 2), and so \(\gamma\leq\gamma(F)=4\), a contradiction. \((\Box)\) **Claim 2**: \(H\neq C_{12}\)_._ **Proof.** Suppose, to the contrary, that \(H=C_{12}\). Renaming vertices in \(X\) and \(Y\) if necessary, we may assume that \(H\) is the cycle \(x_{1}y_{1}x_{2}y_{2}\ldots x_{6}y_{6}x_{1}\). The vertex \(v_{1}\) has three edges to \(Y\), implying that \(v_{1}\) has exactly one edge to at least one of the three sets \(\{y_{1},y_{4}\}\), \(\{y_{2},y_{5}\}\) and \(\{y_{3},y_{6}\}\). Renaming vertices if necessary, we may assume that \(v_{1}\) has exactly one edge to the set \(\{y_{1},y_{4}\}\). Further, we may assume that \(v_{1}y_{1}\) is an edge of \(G\), and so \(v_{1}y_{4}\) is not an edge of \(G\). Since every vertex in \(Y\) is adjacent to exactly one of \(v_{1}\) and \(v_{2}\), this implies that \(v_{2}y_{4}\) is an edge. Thus the graph \(F\) shown in Figure 3 is a spanning subgraph of \(G\). In this case, the set \(S=\{y_{1},x_{3},y_{4},x_{6}\}\) is a dominating set of \(F\) (see Figure 3), and so \(\gamma\leq\gamma(F)\leq 4\), a contradiction. \((\Box)\) **Claim 3**: _If \(H=3C_{4}\), then \(\operatorname{pd}_{\alpha}(G)\leq 4\)._ **Proof.** Suppose that \(H=3C_{4}\). Renaming vertices in \(X\) and \(Y\) if necessary, we may assume that \(Q_{1}\colon x_{1}y_{1}x_{2}y_{2}x_{1}\), \(Q_{2}\colon x_{3}y_{3}x_{4}y_{4}x_{3}\) and \(Q_{3}\colon x_{5}y_{5}x_{6}y_{6}x_{5}\) are the three 4-cycles in \(H\), and so \(H=Q_{1}\cup Q_{2}\cup Q_{2}\). Suppose that \(v_{1}\) is adjacent in \(G\) to a vertex from each of the three 4-cycles of \(H\). Renaming vertices if necessary, we may assume that \(N_{G}(v_{1})=\{y_{1},y_{3},y_{5}\}\). Since every vertex in \(Y\) is adjacent to exactly one of \(v_{1}\) and \(v_{2}\), this implies that \(N_{G}(v_{2})=\{y_{2},y_{4},y_{6}\}\). Thus the graph \(F\) shown in Figure 4(a) is a spanning subgraph of \(G\). In this case, the set \(S=\{v_{1},y_{2},y_{4},y_{6}\}\) is a dominating set of \(F\) (see Figure 4(a)), and so \(\gamma\leq\gamma(F)=4\), a contradiction. Hence, neither \(v_{1}\) nor \(v_{2}\) is adjacent in \(G\) to a vertex from each of the three 4-cycles of \(H\). Renaming vertices if necessary, we may assume that \(N_{G}(v_{1})=\{y_{1},y_{2},y_{3}\}\) and \(N_{G}(v_{2})=\{y_{4},y_{5},y_{6}\}\). By our earlier observations, \(G[X]=3P_{2}\). If \(x_{1}x_{2}\) is an edge, then the graph \(F\) shown in Figure 4(b) is a spanning subgraph of \(G\). In this case, the set \(S=\{v_{2},x_{2},y_{3},y_{5}\}\) is a dominating set of \(F\) (see Figure 4(b)), and so \(\gamma\leq\gamma(F)=4\), a contradiction. Hence, \(x_{1}x_{2}\notin E(G)\). By symmetry, \(x_{5}x_{6}\notin E(G)\). Suppose that \(x_{3}x_{4}\) is an edge. Renaming vertices in necessary, we may assume in this case that \(x_{1}x_{6}\) and \(x_{2}x_{5}\) are edges. Thus the graph \(G\) is determined and is shown in Figure 4(c). Figure 3: A spanning subgraph \(F\) of \(G\) in the proof of Claim 2 Figure 2: A spanning subgraph \(F\) of \(G\) in the proof of Claim 1 In this case, the set \(S=\{x_{2},x_{6},y_{3},y_{4}\}\) is a dominating set of \(G\) (see Figure 4(c)), and so \(\gamma\leq 4\), a contradiction. Hence, \(x_{3}x_{4}\notin E(G)\). The graph \(G\) is therefore determined. Renaming vertices if necessary, we may assume that \(G=G_{14.1}\), where \(G_{14.1}\) is the graph shown in Figure 4(d). We note that \(\gamma=5\). 
In this case, the set \(S=\{y_{1},y_{3},y_{5},v_{2}\}\) satisfies \(\mathrm{dom}_{G}(S)=13\) (the vertex \(y_{2}\) represented by the square in Figure 4(d) is the only vertex not dominated by \(S\)) and \(|S|=4\), implying that \(\mathrm{pd}_{\alpha}(G)\leq|S|\leq 4\). This completes the proof of Claim 3. (\(\Box\)) **Claim 4**: _If \(H=C_{4}\cup C_{8}\), then \(\mathrm{pd}_{\alpha}(G)\leq 4\)._ **Proof.** Suppose that \(H=C_{4}\cup C_{8}\). Renaming vertices in \(X\) and \(Y\) if necessary, we may assume that \(Q_{1}\colon x_{1}y_{1}x_{2}y_{2}x_{1}\) is the 4-cycle in \(H\) and \(Q_{2}\colon x_{3}y_{3}x_{4}y_{4}x_{5}y_{5}x_{6}y_{6}x_{3}\) is the 6-cycle in \(H\). **Claim 4.1**: _Both \(v_{1}\) and \(v_{2}\) are adjacent to exactly one vertex from the cycle \(Q_{1}\)._ **Proof.** Suppose, to the contrary, that \(v_{1}\) or \(v_{2}\), say \(v_{1}\), is adjacent in \(G\) to two vertices in the cycle \(Q_{1}\). Renaming vertices if necessary, we may assume that \(N_{G}(v_{1})=\{y_{1},y_{2},y_{3}\}\), and so \(N_{G}(v_{2})=\{y_{4},y_{5},y_{6}\}\). Recall that \(G[X]=3P_{2}\). If \(x_{1}x_{2}\) is an edge, then the graph \(F\) shown in Figure 5(a) is a spanning subgraph of \(G\). In this case, the set \(S=\{x_{1},x_{6},y_{3},y_{4}\}\) is a dominating set of \(F\) (see Figure 5(a)), and so \(\gamma\leq\gamma(F)=4\), a contradiction. Hence, \(x_{1}x_{2}\notin E(G)\). If \(x_{2}x_{6}\) is an edge, then the graph \(F\) shown in Figure 5(b) is a spanning subgraph of \(G\). In this case, the set \(S=\{x_{1},x_{6},y_{3},y_{4}\}\) is a dominating set of \(F\) (see Figure 5(b)), and so \(\gamma\leq\gamma(F)=4\), a contradiction. Hence, \(x_{2}x_{6}\notin E(G)\). By symmetry, \(x_{1}x_{6}\notin E(G)\). If \(x_{2}x_{5}\) is an edge, then the graph \(F\) shown Figure 4: Spanning subgraphs \(F\) of \(G\) in the proof of Claim 3 in Figure 5(c) is a spanning subgraph of \(G\). In this case, the set \(S=\{x_{1},x_{5},y_{3},y_{6}\}\) is a dominating set of \(F\) (see Figure 5(c)), and so \(\gamma\leq\gamma(F)=4\), a contradiction. Hence, \(x_{2}x_{5}\notin E(G)\). By symmetry, \(x_{1}x_{5}\notin E(G)\). Renaming \(x_{1}\) and \(x_{2}\) if necessary, we may assume that \(x_{1}x_{4}\) and \(x_{2}x_{3}\) are edges. The remaining edge in \(G[X]\) is therefore the edge \(x_{5}x_{6}\). Thus, the graph \(G\) is determined and is shown in Figure 5(d). In this case, the set \(S=\{v_{1},x_{3},x_{4},y_{5}\}\) is a dominating set of \(G\) (see Figure 5(d)), and so \(\gamma\leq 4\), a contradiction. This completes the proof of Claim 4.1. (\(\Box\)) By Claim 4.1, both \(v_{1}\) and \(v_{2}\) are adjacent to exactly one vertex from the cycle \(Q_{1}\). Renaming \(y_{1}\) and \(y_{2}\) if necessary, we may assume that \(v_{1}y_{1}\) and \(v_{2}y_{2}\) are edges. **Claim 4.2**: _The vertex \(v_{1}\) is adjacent to two vertices in \(Q_{2}\) at distance \(2\) in \(Q_{2}\)._ **Proof.** Suppose, to the contrary, that \(v_{1}\) is adjacent to two vertices in \(Q_{2}\) at distance \(4\). Renaming vertices if necessary, we may assume that \(v_{1}y_{3}\) and \(v_{1}y_{5}\) are edges. Thus, \(N_{G}(v_{1})=\{y_{1},y_{3},y_{5}\}\) and \(N_{G}(v_{2})=\{y_{2},y_{4},y_{6}\}\). Thus the graph \(F\) shown in Figure 6 is a spanning subgraph of \(G\). In this case, the set \(S=\{v_{1},y_{2},y_{4},y_{6}\}\) is a dominating set of \(F\) (see Figure 6), and so \(\gamma\leq\gamma(F)=4\), a contradiction. (\(\Box\)) By Claim 4.2, the vertex \(v_{1}\) is adjacent to two vertices in \(Q_{2}\) at distance \(2\) in \(Q_{2}\). 
Renaming vertices if necessary, we may assume that \(N_{G}(v_{1})=\{y_{1},y_{3},y_{4}\}\) and \(N_{G}(v_{2})=\{y_{2},y_{5},y_{6}\}\). If \(x_{1}x_{2}\) is an edge, then the graph \(F\) shown in Figure 7(a) is a spanning subgraph of \(G\). In this case, the set \(S=\{x_{1},x_{5},y_{3},y_{6}\}\) is a dominating set of \(F\) (see Figure 7(a)), and so \(\gamma\leq\gamma(F)=4\), a contradiction. Hence, \(x_{1}x_{2}\notin E(G)\). If \(x_{2}x_{3}\) is an edge, then the graph \(F\) shown in Figure 7(b) is a spanning subgraph of \(G\). In this case, the set \(S=\{x_{1},x_{3},y_{4},y_{5}\}\) is a dominating set of \(F\) (see Figure 7(b)), and so \(\gamma\leq\gamma(F)=4\), a contradiction. Hence, \(x_{2}x_{3}\notin E(G)\). By symmetry, \(x_{1}x_{3}\notin E(G)\). Figure 5: Spanning subgraphs \(F\) of \(G\) in the proof of Claim 4.1 If \(x_{2}x_{5}\) is an edge, then the graph \(F\) shown in Figure 7(c) is a spanning subgraph of \(G\). In this case, the set \(S=\{x_{1},x_{5},y_{3},y_{6}\}\) is a dominating set of \(F\) (see Figure 7(c)), and so \(\gamma\leq\gamma(F)=4\), a contradiction. Hence, \(x_{2}x_{5}\notin E(G)\). By symmetry, \(x_{1}x_{5}\notin E(G)\). Renaming \(x_{1}\) and \(x_{2}\) if necessary, we may assume that \(x_{1}x_{6}\) and \(x_{2}x_{4}\) are edges. The remaining edge in \(G[X]\) is therefore the edge \(x_{3}x_{5}\). Thus, the graph \(G\) is determined. Renaming vertices if necessary, we may assume that \(G=G_{14.2}\), where \(G_{14.2}\) is the graph shown in Figure 7(d). We note that \(\gamma=5\). In this case, the set \(S=\{y_{1},y_{3},y_{5},v_{2}\}\) satisfies \(\operatorname{dom}_{G}(S)=13\) (the vertex \(y_{4}\) represented by the square in Figure 7(d) is the only vertex not dominated by \(S\)) and \(|S|=4\), implying that \(\operatorname{pd}_{\alpha}(G)\leq|S|=4\). This completes the proof of Claim 4. (\(\Box\)) We now return to the proof of Theorem 3.1 one final time. As observed earlier, there are four possibilities for the graph \(H\), namely \(H=2C_{6}\) or \(H=3C_{4}\) or \(H=C_{4}\cup C_{8}\) or \(H=C_{12}\). By Claim 1, \(H\neq 2C_{6}\). By Claim 2, \(H\neq C_{12}\). By Claim 3, if \(H=3C_{4}\), then \(\operatorname{pd}_{\alpha}(G)\leq 4\). By Claim 4, if \(H=C_{4}\cup C_{8}\), then \(\operatorname{pd}_{\alpha}(G)\leq 4\). This completes the proof of Theorem 3.1. \(\Box\) Figure 6: A spanning subgraph \(F\) of \(G\) in the proof of Claim 4.2 Figure 7: Spanning subgraphs \(F\) of \(G\) in the proof of Claim 4 Partial domination in supercubic graphs We start this section by proving a useful lemma. We first present a key lemma, which allows us to grow a given set of vertices to a larger set that dominates more vertices. Recall that we refer to graphs \(G\) with \(\delta(G)\geq 3\) as supercubic graphs. **Lemma 4.1**: _Let \(k\) be a positive integer and \(G\) a supercubic graph of order \(n\). If \(S\subseteq V(G)\), \(U_{S}=V(G)\setminus N_{G}[S]\), and_ \[4|U_{S}|>k(n-|S|)\,,\] _then there exists a vertex in \(\partial(S)\cup U_{S}\) that dominates at least \(k+1\) vertices from \(U_{S}\)._ **Proof.** Consider the 'useful' vertex pairs \((x,y)\) such that \(y\in U_{S}\) and \(x\) dominates \(y\) (allowing \(x=y\)). Denote by \(p\) the number of useful pairs. As all vertices from \(U_{S}\) can be dominated by itself or one of its at least three neighbors, \(p\geq 4|U_{S}|\). Since \(y\in U_{S}=V(G)\setminus N_{G}[S]\), we have \(N_{G}[y]\cap S=\emptyset\). It follows that \(x\in\partial(S)\cup U_{S}\). 
To prove the statement, we suppose that there is no vertex in \(G\) which dominates more then \(k\) vertices from \(U_{S}\). Equivalently, every vertex \(x\in\partial(S)\cup U_{S}\) belongs to at most \(k\) different useful pairs \((x,y)\) (s.t. \(x\) is the first entry). We then conclude \[k(|\partial(S)|+|U_{S}|)=k(n-|S|)\geq p\geq 4|U_{S}|\] that contradicts the given condition and therefore proves the statement. \(\Box\) We are now in a position to prove that the \(\frac{7}{8}\)-partial domination number of a cubic graph \(G\) is at most one-third of the order of \(G\). In fact, our result states that the same is true for every supercubic graph. **Theorem 4.2**: _If \(G\) is a supercubic graph of order \(n\), then_ \[\mathrm{pd}_{\frac{7}{8}}(G)\leq\frac{1}{3}n\,.\] **Proof.** First suppose that \(G\) is the disjoint union of the components \(G_{1},\ldots,G_{k}\). It was already observed in [5] that \(\mathrm{pd}_{\alpha}(G)\leq\sum_{i=1}^{k}\mathrm{pd}_{\alpha}(G_{i})\) holds for each \(\alpha\). Therefore, it suffices to prove the statement for connected graphs. Let \(G\) be a connected graph of order \(n\) and of minimum degree \(\delta(G)\geq 3\). Let \(\alpha=\frac{7}{8}\) and \(\gamma=\gamma(G)\). We proceed further with two claims. **Claim 5**: _If \(n\leq 14\), then \(\mathrm{pd}_{\alpha}(G)\leq\frac{1}{3}n\)._ **Proof.** By Theorem 1.1, \(\gamma\leq\left\lfloor\frac{3}{8}n\right\rfloor\) holds, so \(\left\lfloor\frac{3}{8}n\right\rfloor\) vertices are enough to dominate the entire vertex set. Since \(\left\lfloor\frac{3}{8}n\right\rfloor=\left\lfloor\frac{1}{3}n\right\rfloor\) holds whenever \(n\leq 14\) and \(n\notin\{8,11,14\}\), it suffices to consider graphs of order \(8\), \(11\) and \(14\). Suppose first that \(G\) is cubic. Then only \(n\in\{8,14\}\) must be considered. If \(n=14\), then by Theorem 3.1 we have \(\mathrm{pd}_{\alpha}(G)\leq\frac{1}{3}n\). Hence we may assume that \(n=8\). If \(G\) is isomorphic to \(A_{1}\) or \(A_{2}\), then as illustrated in Figure 8, there exists a set \(S\) of two (shaded) vertices in \(G\) that dominates seven vertices. For any other cubic graph \(G\) of order \(8\) we have \(\gamma(G)\leq 2\) by Theorem 1.2. Hence \(\operatorname{pd}_{\alpha}(G)\leq 2=\frac{1}{4}n<\frac{1}{3}n\) for each cubic graph \(G\) of order \(8\). Assume in the rest that \(G\) is supercubic but not cubic. Hence there exists a vertex \(u\) of degree at least \(4\). If \(n=8\), a vertex \(u\) of maximum degree dominates \(|N_{G}[u]|=\operatorname{dom}_{G}(\{u\})\geq 5\) vertices. If \(\operatorname{dom}_{G}(\{u\})\geq 6\), then any undominated vertex \(u^{\prime}\notin N_{G}[u]\) can be added to the set and we have \(\operatorname{dom}_{G}(\{u,u^{\prime}\})\geq 7\). If \(\operatorname{dom}_{G}(\{u\})=5\), we apply Lemma 4.1 with \(k=1\) and \(S=\{u\}\). Since \(4|U_{S}|=4\times 3>8-1\), there exists a vertex \(u^{\prime}\) such that \(\operatorname{dom}_{G}(\{u,u^{\prime}\})\geq 7\). In both cases, \(\operatorname{dom}_{G}(\{u,u^{\prime}\})\geq 7\) implies \(\operatorname{pd}_{\alpha}(G)\leq 2=\left\lfloor\frac{1}{3}n\right\rfloor\). If \(n=11\), we want to prove that there are three vertices \(v_{1}\), \(v_{2}\), \(v_{3}\) that dominate at least \(10\) vertices in \(G\). Then, \(\operatorname{pd}_{\alpha}(G)\leq 3=\left\lfloor\frac{1}{3}n\right\rfloor\) will follow. Let \(v_{1}\) be a vertex of maximum degree. We have \(\operatorname{dom}_{G}(\{v_{1}\})\geq 5\). 
If \(\operatorname{dom}_{G}(\{v_{1}\})=5\) then, for \(S=\{v_{1}\}\), the inequality \(4|U_{S}|=24>2(n-|S|)=20\) holds and Lemma 4.1 implies the existence of a vertex \(v_{2}\) that dominates at least three vertices from \(U_{S}\). It follows that \(\operatorname{dom}_{G}(\{v_{1},v_{2}\})\geq 8\). If \(\operatorname{dom}_{G}(\{v_{1}\})=6\) then, by setting \(S=\{v_{1}\}\), we get \(4|U_{S}|=20>n-|S|=10\) that shows, by Lemma 4.1, the existence of a vertex \(v_{2}\) which dominates at least two new vertices. Again, we have that \(\operatorname{dom}_{G}(\{v_{1},v_{2}\})\geq 8\). If \(\operatorname{dom}_{G}(\{v_{1}\})\geq 7\), then \(v_{2}\) can be chosen as an arbitrary undominated vertex and \(\operatorname{dom}_{G}(\{v_{1},v_{2}\})\geq 8\) is achieved. For the choice of the last vertex, we consider two cases. If \(\operatorname{dom}_{G}(\{v_{1},v_{2}\})=8\), the set \(S=\{v_{1},v_{2}\}\) satisfies the condition in Lemma 4.1 with \(k=1\) and the existence of a vertex \(v_{3}\) which dominates at least two vertices from \(U_{S}\) follows. It means \(\operatorname{dom}_{G}(\{v_{1},v_{2},v_{3}\})\geq 10\) as required. If \(\operatorname{dom}_{G}(\{v_{1},v_{2}\})\geq 9\), then any undominated vertex can be chosen as \(v_{3}\) and we have \(\operatorname{dom}_{G}(\{v_{1},v_{2},v_{3}\})\geq 10\) again. If \(n=14\), we want to prove that there exist four vertices \(v_{1}\), \(v_{2}\), \(v_{3}\), \(v_{4}\) which together dominate at least \(13\) vertices. Then, \(\operatorname{pd}_{\alpha}(G)\leq 4=\left\lfloor\frac{1}{3}n\right\rfloor\) will follow. Let \(v_{1}\) be a vertex of maximum degree. If \(\operatorname{dom}_{G}(\{v_{1}\})=5\) and \(N(v_{1})\) is a dominating set in \(G\), then \(\operatorname{dom}_{G}(N(v_{1}))=14\) and we are ready. In the other case, \(\operatorname{dom}_{G}(\{v_{1}\})=5\) and \(N(v_{1})\) is not a dominating set in \(G\). Then there exists a vertex \(v_{2}\) such that \(\{v_{1},v_{2}\}\) is a packing in \(G\). If \(v_{2}\) is a vertex of degree \(3\), then \(\operatorname{dom}_{G}(\{v_{1},v_{2}\})=9\) and for \(S=\{v_{1},v_{2}\}\) and \(k=1\), the condition \(4\times 5>14-2\) holds and Lemma 4.1 implies the existence of a vertex \(v_{3}\) with \(\operatorname{dom}_{G}(\{v_{1},v_{2},v_{3}\})\geq 11\). If \(v_{2}\) is a vertex of degree at least \(4\), then \(\operatorname{dom}_{G}(\{v_{1},v_{2}\})\geq 10\) and \(\operatorname{dom}_{G}(\{v_{1},v_{2},v_{3}\})\geq 11\) can be easily achieved. For the choice of the last vertex, we consider two further subcases. If \(\operatorname{dom}_{G}(\{v_{1},v_{2},v_{3}\})=11\), we have \(4\times 3>14-3\) and Lemma 4.1 implies the existence of a vertex \(v_{4}\) with \(\operatorname{dom}_{G}(\{v_{1},v_{2},v_{3},v_{4}\})\geq 13\). If \(\operatorname{dom}_{G}(\{v_{1},v_{2},v_{3}\})\geq 12\) and there are undominated vertices, then we may choose such a vertex \(v_{4}\) and get \(\operatorname{dom}_{G}(\{v_{1},v_{2},v_{3},v_{4}\})\geq 13\). Figure 8: \(\frac{7}{8}\)-partial dominating sets in \(A_{1}\) and \(A_{2}\). In each, the vertex represented by the square is the only vertex not dominated by the two shaded vertices. By Claim 5, we may assume that \(n\geq 15\), for otherwise the desired result follows. Let \(D=\{v_{1},v_{2},\ldots,v_{\gamma}\}\) be a \(\gamma\)-set of \(G\) satisfying the Bollobas-Cockayne Lemma 2.1, and so \(\mbox{\rm epn}[v,D]\neq\emptyset\) for every vertex \(v\in D\). By Theorem 1.1, we have \(\gamma\leq\lfloor\frac{3}{8}n\rfloor\). 
If \(\gamma\leq\frac{1}{3}n\), then the set \(D\) is certainly an \(\alpha\)-partial dominating set of \(G\) of cardinality at most \(\frac{1}{3}n\). Thus in this case, \(\mbox{\rm pd}_{\alpha}(G)\leq|D|\leq\frac{1}{3}n\). Hence we may assume that \(\gamma>\frac{1}{3}n\), for otherwise the desired result is immediate. Using the vertices \(v_{1},\ldots,v_{\gamma}\) from \(D\), let \((V_{1},V_{2},\ldots,V_{\gamma})\) be a partition of the vertex set \(V(G)\) such that for all \(i\in[\gamma]\), the following properties hold: (i) \(v_{i}\in V_{i}\), (ii) \(\mbox{\rm epn}[v_{i},D]\subset V_{i}\), and (iii) \(V_{i}\subseteq N_{G}[v_{i}]\). As observed earlier, \(|\mbox{\rm epn}[v_{i},D]|\geq 1\), and so \(|V_{i}|\geq|\{v_{i}\}|+|\mbox{\rm epn}[v_{i},D]|\geq 2\) for all \(i\in[\gamma]\). Renaming the vertices \(v_{1},v_{2},\ldots,v_{\gamma}\) if necessary, we may assume that \(|V_{i}|\geq|V_{i+1}|\) for all \(i\in[\gamma-1]\), that is, \[|V_{1}|\geq|V_{2}|\geq\cdots\geq|V_{\gamma}|\geq 2. \tag{1}\] Let \(k_{1}=\lfloor\frac{1}{3}n\rfloor\) and let \(k_{2}=\gamma-k_{1}\). By assumption, \(\frac{1}{3}n<\gamma\). By Theorem 1.1, \(\gamma\leq\lfloor\frac{3}{8}n\rfloor\). Hence, \(\frac{1}{3}n<\gamma\leq\lfloor\frac{3}{8}n\rfloor\). By definition of \(k_{1}\) and \(k_{2}\) and by our earlier observations and assumptions, \[1\leq k_{2}=\gamma-k_{1}\leq\left\lfloor\frac{3}{8}n\right\rfloor-\left\lfloor\frac{1}{3}n\right\rfloor. \tag{2}\] Let \(S=\{v_{1},v_{2},\ldots,v_{k_{1}}\}\), and so \(|S|=k_{1}=\lfloor\frac{1}{3}n\rfloor\). Since \((V_{1},V_{2},\ldots,V_{\gamma})\) is a partition of the vertex set \(V(G)\), we note that the number of vertices dominated by the set \(S\) is at least the number of vertices in the sets \(V_{1}\cup\cdots\cup V_{k_{1}}\), that is, \[\mbox{\rm dom}_{G}(S)\geq\sum_{i=1}^{k_{1}}|V_{i}|. \tag{3}\] We proceed further with the following claim. **Claim 6**: _If \(|V_{k_{1}}|\geq 3\), then \(\mbox{\rm pd}_{\alpha}(G)\leq\frac{1}{3}n\)._ **Proof.** Suppose that \(|V_{k_{1}}|\geq 3\). In this case, by Inequalities (1) and (3), and by our assumption that \(n\geq 15\), we infer that \[\mbox{\rm dom}_{G}(S)\geq 3k_{1}=3\left\lfloor\frac{1}{3}n\right\rfloor\geq\left\lceil\frac{7}{8}n\right\rceil. \tag{4}\] By Inequality (4), we have \(\mbox{\rm dom}_{G}(S)\geq\frac{7}{8}n\), implying that the set \(S\) is an \(\alpha\)-partial dominating set of \(G\), and so \(\mbox{\rm pd}_{\alpha}(G)\leq|S|\leq\frac{1}{3}n\), yielding the desired result. (\(\Box\)) By Claim 6, we may assume that \(|V_{k_{1}}|=2\), for otherwise the desired result follows. With this assumption and by inequality (1), we note that \(|V_{i}|=2\) for all \(i\geq k_{1}\). Hence by Inequality (2), we have \[\sum_{i=k_{1}+1}^{\gamma}|V_{i}|=2k_{2}\leq 2\left(\left\lfloor\frac{3}{8}n\right\rfloor-\left\lfloor\frac{1}{3}n\right\rfloor\right). \tag{5}\] Thus, by inequalities (3) and (5), and by our assumption that \(n\geq 15\), we infer that \[\mathrm{dom}_{G}(S)\geq\sum_{i=1}^{k_{1}}|V_{i}|=n-\sum_{i=k_{1}+1}^{\gamma}|V_{i}|\geq n-2\left(\left\lfloor\frac{3}{8}n\right\rfloor-\left\lfloor\frac{1}{3}n\right\rfloor\right)\geq\left\lceil\frac{7}{8}n\right\rceil. \tag{6}\] By Inequality (6), we have \(\mathrm{dom}_{G}(S)\geq\frac{7}{8}n\), implying that the set \(S\) is an \(\alpha\)-partial dominating set of \(G\), and so \(\mathrm{pd}_{\alpha}(G)\leq|S|\leq\frac{1}{3}n\), yielding the desired result. 
\(\Box\) The bound in Theorem 4.2 is best possible in the sense that if \(\alpha>\frac{7}{8}\) and \(G\) is \(A_{1}\) or \(A_{2}\) (see Figure 1), then \(\lceil\alpha\times n\rceil=8=n\), and at least three vertices are needed to dominate all vertices of \(G\). Thus in this example, \(\mathrm{pd}_{\alpha}(G)=3=\frac{3}{8}n>\frac{1}{3}n\). The same is true if every component of \(G\) is isomorphic to \(A_{1}\) or \(A_{2}\). Hence the value for \(\alpha\) in the statement of Theorem 4.2 cannot be increased in general in order to guarantee that the \(\alpha\)-partial domination number of a connected cubic graph is at most one-third its order. However, if the connected cubic graph \(G\) has sufficiently large order \(n\), then we can improve the value \(\alpha=\frac{7}{8}\) given in Theorem 4.2 to a larger value of \(\alpha\). For example, if \(n\geq 28\), then \(\alpha=\frac{13}{14}\) suffices, as the following result shows. **Theorem 4.3**: _If \(G\) is a connected cubic graph of order \(n\geq 28\), then_ \[\mathrm{pd}_{\frac{13}{14}}(G)\leq\frac{1}{3}n\,.\] **Proof.** Let \(G\) be a connected cubic graph of order \(n\geq 28\) and let \(\alpha=\frac{13}{14}\). We adopt exactly the notation from the proof of Theorem 4.2. In particular, \(D\) is a \(\gamma\)-set of \(G\) satisfying Lemma 2.1. As before, by Theorem 1.2 we have \(\gamma\leq\lfloor\frac{5}{14}n\rfloor\). If \(\gamma\leq\frac{1}{3}n\), then \(\mathrm{dom}_{G}(D)=n\). Hence we may assume that \(\gamma>\frac{1}{3}n\), for otherwise the desired result is immediate. Let \(k_{1}\) and \(k_{2}\) be defined exactly as in the proof of Theorem 4.2. If \(|V_{k_{1}}|\geq 3\), then \[\mathrm{dom}_{G}(S)\geq 3k_{1}=3\left\lfloor\frac{1}{3}n\right\rfloor\geq\left\lceil\frac{13}{14}n\right\rceil, \tag{7}\] implying that the set \(S\) is an \(\alpha\)-partial dominating set of \(G\). Thus, \(\mathrm{pd}_{\alpha}(G)\leq|S|\leq\frac{1}{3}n\), yielding the desired result. Hence we may assume that \(|V_{k_{1}}|=2\). With this assumption, we note that \(|V_{i}|=2\) for all \(i\geq k_{1}\). Thus, proceeding exactly as before and recalling that by supposition \(n\geq 28\), we obtain the inequality chain \[\mathrm{dom}_{G}(S)\geq\sum_{i=1}^{k_{1}}|V_{i}|=n-\sum_{i=k_{1}+1}^{\gamma}|V_{i}|\geq n-2\left(\left\lfloor\frac{5}{14}n\right\rfloor-\left\lfloor\frac{1}{3}n\right\rfloor\right)\geq\left\lceil\frac{13}{14}n\right\rceil. \tag{8}\] Once again, \(\mathrm{dom}_{G}(S)\geq\frac{13}{14}n\), implying that the set \(S\) is an \(\alpha\)-partial dominating set of \(G\). Thus, \(\mathrm{pd}_{\alpha}(G)\leq|S|\leq\frac{1}{3}n\). \(\Box\) Note that in the proof of Theorem 4.3 we used the inequality \(\gamma\leq\lfloor\frac{5}{14}n\rfloor\), which holds for every connected cubic graph of order at least \(10\). Hence we cannot avoid the assumption that \(G\) is connected. On the other hand, \(\gamma\leq\lfloor\frac{3}{8}n\rfloor\) holds for every supercubic graph, and we have the following result. **Theorem 4.4**: _If \(G\) is a supercubic graph of order \(n\geq 60\), then_ \[\mathrm{pd}_{\frac{9}{10}}(G)\leq\frac{1}{3}n\,.\] **Proof.** We can proceed along the same lines as in the proof of Theorem 4.3. The only difference is that now we cannot apply Theorem 1.2; instead, we apply Theorem 1.1. 
Then (7) rewrites as \[\mathrm{dom}_{G}(S)\geq 3k_{1}=3\left\lfloor\frac{1}{3}n\right\rfloor\geq \left\lceil\frac{9}{10}n\right\rceil, \tag{9}\] which holds for \(n\geq 18\), while (8) rewrites as \[\mathrm{dom}_{G}(S)\geq\sum_{i=1}^{k_{1}}|V_{i}|=n-\sum_{i=k_{1}+1}^{k_{2}}|V_{ i}|\geq n-2\left(\left\lfloor\frac{3}{8}n\right\rfloor-\left\lfloor\frac{1}{3}n \right\rfloor\right)\geq\left\lceil\frac{9}{10}n\right\rceil, \tag{10}\] which holds for \(n\geq 60\). The conclusion follows. \(\Box\) ## 5 Closing remarks As a consequence of Theorems 1.1 and 1.2, we have the following result which characterizes the connected cubic graphs \(G\) of order \(n\) satisfying \(\gamma(G)=\frac{3}{8}n\). **Corollary 5.1**: ([19, 25]) _If \(G\) is a connected cubic graph of order \(n\), then \(\gamma(G)\leq\frac{3}{8}n\), with equality if and only if \(G\) is one of the two graphs \(A_{1}\) and \(A_{2}\) shown in Figure 1._ A natural problem is to characterize the graphs that achieve equality in the Kostochka-Stocker Theorem 1.2, that is, to characterize the connected cubic graphs \(G\) of order \(n\) satisfying \(\gamma(G)=\frac{5}{14}n\). Necessarily, for such graphs we have \(n=14k\) for some \(k\geq 1\). We show next that there are exactly four such graphs of order \(n=14\). We remark that the proof of Theorem 3.1 gave rise to two connected cubic graphs of order \(n\) satisfying \(\gamma(G)=5=\frac{5}{14}n\), namely the graphs \(G_{14.1}\) and \(G_{14.2}\) shown in Figures 4(d) and 7(d), respectively. (These two graphs are redrawn in Figure 9(c) and 9(b), respectively.) With a bit more work, one can readily establish two additional such graphs. In the second paragraph of the proof of Theorem 3.1, we consider the case when \(\rho(G)=3\). In this case, adding a vertex at distance \(3\) to a maximum packing immediately yielded an \(\frac{7}{8}\)-partial dominating set of \(G\) of cardinality \(4\), and therefore we assumed that \(\rho(G)=2\). However a more detailed analysis of the case when \(\rho(G)=3\) yields the generalized Petersen graph \(P(7,2)\) shown in Figure 9(d). In the fourth paragraph of the proof of Theorem 3.1, we considered the case when the set \(X=V(G)\setminus N_{G}[P]\) contains a vertex adjacent to two other vertices in \(X\). Since this case immediately yielded an \(\frac{7}{8}\)-partial dominating set of \(G\) of cardinality \(4\), we therefore assumed that this case does not occur. However a more detailed analysis of the case when a vertex in \(X\) has two neighbors in \(X\) yields the graph \(G_{14.3}\) shown in Figure 9(a). The proof details giving rise to these two additional graphs, namely \(P(7,2)\) and \(G_{14.3}\), are similar to our proof of Theorem 3.1, and are not given here. Moreover, the result was also verified by computer. **Theorem 5.2**: _If \(G\) is a connected cubic graph of order \(n=14\) satisfying \(\gamma(G)=5=\frac{5}{14}n\), then \(G\in\{G_{14.1},G_{14.2},G_{14.3},P(7,2)\}\)._ It is not known if the \(\frac{5}{14}\)-upper bound on the domination number of a connected cubic graph of order \(n\) given by Kostochka and Stocker [19] is achievable when \(n\) is large. We pose the following conjecture. **Conjecture 5.3**: _If \(G\) is a connected cubic graph of order \(n\) satisfying \(\gamma(G)=\frac{5}{14}n\), then \(G\in\{G_{14.1},G_{14.2},G_{14.3},P(7,2)\}\)._ The authors in [19] remark that the bound \(\gamma(G)\leq\lfloor\frac{5}{14}n\rfloor\) for a connected cubic graph of order \(n\geq 14\) is achievable for \(n\in\{14,16,18\}\). 
It would be interesting to find graphs of orders \(n\geq 20\) that achieve equality in this bound. Natural candidates are the generalized Petersen graphs \(P(p,2)\) of order \(n=2p\) whose domination numbers are known (see, [10]). **Theorem 5.4**: \((\)_[_10_]_\()\)\(\gamma(P(p,2))=p-\left\lfloor\frac{p}{5}\right\rfloor-\left\lfloor\frac{p+2}{5}\right\rfloor\) for all \(p\geq 3\)._ For \(p\in\{3,5,6,7,8,9,11,12\}\), we have \(p-\left\lfloor\frac{p}{5}\right\rfloor-\left\lfloor\frac{p+2}{5}\right\rfloor= \lfloor\frac{5}{7}p\rfloor\). Hence as a consequence of Theorem 5.4, we have the following result. **Corollary 5.5**: _For \(n\in\{6,10,12,14,16,18,22,24\}\), there exist connected cubic graphs \(G\) of order \(n\) satisfying \(\gamma(G)=\lfloor\frac{5}{14}n\rfloor\)._ As far as we are aware, \(P(12,2)\) is the largest currently known connected cubic graph of order \(n\) satisfying \(\gamma(G)=\lfloor\frac{5}{14}n\rfloor\). In addition, \(\gamma(P(12,2))=8=\frac{1}{3}n\). We close with the following question, for which we suspect the answer is no. **Question 5.6**: _Are there infinitely many connected cubic graphs \(G\) of order \(n\) satisfying \(\gamma(G)=\lfloor\frac{5}{14}n\rfloor\)?_ Figure 9: The four connected cubic graphs \(G\) of order \(n=14\) satisfying \(\gamma(G)=5\) ## Acknowledgements We are grateful to Gregor Rus for computer verification of Theorem 3.1. This work was supported by the Slovenian Research Agency (ARRS) under the grants P1-0297, J1-2452, and N1-0285. Research of the second author was supported in part by the University of Johannesburg and the South African National Research Foundation.
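As a brief computational companion to Theorem 5.4 and Corollary 5.5 (in the spirit of the computer verification mentioned above), the following sketch scans values of \(p\) and reports those for which the closed formula for \(\gamma(P(p,2))\) attains the bound \(\lfloor\frac{5}{14}n\rfloor\) with \(n=2p\); the function names and the scan range are our own illustrative choices, not part of the paper.

```python
def gamma_petersen(p: int) -> int:
    """Domination number of the generalized Petersen graph P(p, 2) (Theorem 5.4)."""
    return p - p // 5 - (p + 2) // 5

def attains_bound(p: int) -> bool:
    """True when gamma(P(p, 2)) equals floor(5n/14) for n = 2p."""
    return gamma_petersen(p) == (5 * 2 * p) // 14

if __name__ == "__main__":
    hits = [p for p in range(3, 201) if attains_bound(p)]
    print(hits)  # expected to match Corollary 5.5: p in {3, 5, 6, 7, 8, 9, 11, 12}
```

Such a scan also illustrates why \(P(12,2)\) is the largest example among the graphs \(P(p,2)\): in the scanned range, the formula of Theorem 5.4 falls strictly below \(\lfloor\frac{5}{14}n\rfloor\) for every \(p\geq 13\).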
2309.11171
Coalescence of sessile polymer droplets: A molecular dynamics study
Droplet coalescence is ubiquitous in nature and at the same time key to various technologies, such as inkjet printing. Here, we report on the coalescence of polymer droplets with different chain lengths on substrates of different wettability. By means of molecular dynamics simulations of a coarse-grained model, it is found that the rate of bridge growth is higher in the case of droplets with smaller contact angles (more wettable substrates) and decreases with the increase of the chain length of the polymers. Different behavior has also been identified in the dynamics of the approach of the two droplets during coalescence, with the substrate wettability playing a more important role compared to the chain length of the polymers. While the dynamics of the droplet are greatly affected by the latter parameters, the density profile and flow patterns remain the same for the different cases. Thus, we anticipate that our work provides further insights into the coalescence of liquid polymer droplets on solid substrates with implications for relevant technologies.
Soheil Arbabi, Panagiotis E. Theodorakis
2023-09-20T09:29:55Z
http://arxiv.org/abs/2309.11171v1
# Coalescence of Sessile Polymer Droplets: A Molecular Dynamics Study ###### Abstract Droplet coalescence is ubiquitous in nature and the same time key to various technologies, such as inkjet printing. Here, we report on the coalescence of polymer droplets with different chain lengths coalescing on substrates of different wettability. By means of molecular dynamics simulations of a coarse-grained model, it is found that the rate of bridge growth is higher in the case of droplets with smaller contact angles (more wettable substrates) and decreases with the increase of the chain length of the polymers. Different behavior has also been identified in the dynamics of the approach of the two droplets during coalescence with the substrate wettability playing a more important role compared to the chain length of the polymers. While the dynamics of the droplet are greatly affected by the latter parameters, the density profile and flow patterns remain the same for the different cases. Thus, we anticipate that our work provides further insights into the coalescence of liquid polymer droplets on solid substrates with implications for relevant technologies. _Droplet Coalescence, Polymer Chains, Droplet Dynamics, Molecular Dynamics Simulation_ ## 1 Introduction Droplet coalescence is ubiquitous in nature and at the same time much relevant for various technologies, such as spraying and printing [1, 2], where the rate of this process can determine the efficiency of the application. The primary factor controlling coalescence is the interplay of viscous and inertial forces as droplets minimise their total surface-tension energy by coalescing [3]. Despite research in this area [4, 5, 6, 7, 8, 9, 10], droplet coalescence remains a fascinating phenomenon with many of its aspects calling for further investigations to reach adequate understanding of this phenomenon in various scenarios [4, 5, 6, 7, 8, 9, 10]. On the one hand, part of this gap in knowledge is due to device limitations, since experiments cannot capture the initial fast stages of droplet coalescence. On the other hand, the singularity at the contact point during the initial stages of coalescence presents challenges for numerical modelling [8, 10, 11], despite progress in this area [10], while hydrodynamic models are only applicable at the later stages of coalescence [12, 13, 14]. Apart from device and methodology limitations, further understanding at molecular scales is much desirable as applications often require greater control at nanoscales. Moreover, the role of a substrate in the coalescence of sessile droplets deserves further consideration despite research in this area by theory and experiment [15, 16, 17, 18, 19], especially in the context of complex liquids containing various additives, such as polymers and colloids. Droplet coalescence takes place in three stages, with the first being the initial droplet approach when the droplets are close enough to interact with each other and form in between the so-called bridge. Then, the bridge-growth stage follows, which eventually results in the final reshaping of the two droplets towards the equilibrium state of a single spherical-cap droplet, which is the state of minimum energy. From the perspective of fluid dynamics the initial bridge growth is generally driven by viscous forces, as a result of the interactions between molecules, while inertial forces dominate the coalescence process at the later stage [8, 11]. 
In the case of the viscous regime, a linear scaling in time \(b\,\propto\,t\) has been suggested for the bridge radius, \(b\), or logarithmic corrections \(t\ln t\), while a scaling \(b\,\propto\,\sqrt{t}\) has been proposed for the inertial regime [8]. However, the dynamics of the bridge growth is still under debate, for example, an inertially limited viscous regime has been reported [20, 21] or the proposition of a modified Ohnesorge number in the case of immiscible droplets for coupling the linear and power-law scalings [22]. All-atom molecular dynamics (MD) simulations [7] have described the initial stage of the bridge growth for water droplets, not achievable by continuum simulation or experiment. In particular, the formation of multiple precursor bridges at the pinch point were observed, which result from thermal capillary waves that exist at the droplets' surface. In this case, simulations suggest that multiple bridges that expand linearly in time develop at the surface and the transition to the classical hydrodynamics regime only takes place when the bridge radius exceeds a thermal length defined as \(l_{T}\,\approx\,\left(k_{B}T/\gamma\right)^{1/4}R^{1/2}\), where \(k_{B}\) is Boltzmann's constant, \(T\) the temperature, \(\gamma\) the liquid-vapour (LV) surface tension, and \(R\) the radius of the droplets. In the case of droplet coalescence on solid substrates, much less is known, despite the immediate implications for applications, for example, in coatings [23] and microfluidics [24] technologies. In particular, in the case of coalescence of low-viscosity droplets on a substrate it has been experimentally found that the the bridge height, \(b\), grows with time as \(t^{2/3}\) when the contact angle is below \(90^{\circ}\), while a scaling \(b\propto t^{1/2}\) has been observed for contact angles above \(90^{\circ}\)[16], which is the scaling found in the inertial regime for freely suspended drops [23, 8]. Moreover, a geometrical model that unifies the inertial coalescence of sessile and freely suspended drops and can capture the transition from the \(2/3\) to the \(1/2\) exponent in the case of sessile droplets has been proposed [16]. In addition, in the case of asymmetric coalescence, that is droplets with different contact angles, the theory predicts that the shape of the bridge can be described by similarity solutions of the one-dimensional lubrication theory, with the bridge growing linearly in time and exhibiting dependence on the contact angles [15]. In earlier experimental studies, a power-law growth at early times as \(t^{1/2}\) has been suggested for the symmetric case, while the growth rate appeared to be sensitive to both the radius and the height of the droplet with a scaling \(H/R\), where \(H\) is the height of the droplet from the substrate to its apex and \(R\) its radius [23]. Further experimental work on droplets with contact angles in the range \(10^{\circ}\)-\(56^{\circ}\) has found that the bridge growth scales as a power law with exponents in the range \(0.5061\) to \(0.8612\) with data deviating from the power law at longer times during coalescence for contact angles larger than \(24^{\circ}\). Moreover, a power law with an exponent \(0.2901\) has been found for the width of the bridge [17]. Finally, further experimental work has focused on analysing the morphology and dynamics of droplet coalescence on substrates [25]. Despite previous work on the coalescence of sessile droplets, many aspects of this phenomenon require further investigation. 
One of them is the role of viscosity in the coalescence for substrates with different wettability. Viscosity is expected to play a role, especially in the context of polymer droplets studied here, where in addition to surface-tension-effects differences, entanglement effects may also play a role for longer polymer chains or the polymer-polymer interactions close to the contact line in both the initial and later stages of coalescence. Here, we attempt to elucidate these points and fill in the gap in knowledge in this area by carrying out molecular dynamics simulations of a coarse-grained model for droplets comprised of polymer chains with different length on substrates of different wettability with equilibrium contact angles of individual droplets above and below \(90^{\circ}\). We find that the bridge length dynamics are much slower in the case of polymer droplets than what is observed for water droplets. Moreover, the coalescence process considerably slows down with the increase of polymers chain length. Furthermore, more wettable substrates have consistently faster bridge growth dynamics in comparison with the less wettable substrates. The wettability of the substrate also affects significantly the dynamics of the bridge angle and the approach of the coalescing droplets, while the viscosity of the droplets appears to have a smaller effect. In the following, we describe our simulation model and method. Then, we discuss our results, while in the final section we draw our conclusions. ## 2 Simulation model and methods Our system consists of two polymer droplets placed next to each other as shown in **Figure 1** to initiate their coalescence. Each droplet contains polymer chains with the same number of monomers (beads), \(N\). The polymer chains are modelled by the standanrd bead-spring model [26, 27, 28], where all beads interact with a truncated and shifted Lennard-Jones (LJ) potential, \[U_{\rm LJ}(r)=4\varepsilon_{\rm ij}\left[\left(\frac{\sigma_{\rm ij}}{r} \right)^{12}-\left(\frac{\sigma_{\rm ij}}{r}\right)^{6}\right]. \tag{1}\] This interaction is applied for beads within the cutoff distance \(r_{\rm c}\,=\,2.5\ \sigma\), where \(\sigma\) is the length unit. The interaction between polymer beads is \(\varepsilon_{\rm pp}=\epsilon\), where \(\epsilon\) is the unit of energy. The temperature of the system is, \(T\,=\,\epsilon/k_{B}\), where \(k_{B}\) is Boltzmann's constant. Moreover, consecutive beads along a polymer chain are tethered by the "finitely extensible nonlinear elastic" (FENE) potential, \[U_{\rm FENE}(r)=-0.5K_{\rm FENE}R_{0}^{2}\ln\left[1-\left(\frac{r}{R_{0}}\right) ^{2}\right], \tag{2}\] where \(r\) is the distance between two consecutive beads along the polymer chain, \(R_{0}\,=\,1.5\ \sigma\) expresses the maximum extension of the bond, and \(K_{\rm FENE}\ =\ 30\ \epsilon/\sigma^{2}\) is an elastic constant. The length of the polymer chain in this model in effect varies the viscosity of the droplets [29]. Here, the chain length is the same for both droplets in each system and is chosen in the range \(N\,=\,10\,-\,640\) beads. Since the total number of beads in each droplet is 57600, using longer chains would also require the increase of the overall size of the droplet, in order to ensure that the majority of the chains are not on the surface of the droplet, thus avoiding artifacts that may not apply in macroscopic droplets. 
Moreover, increasing \(N\) and the total number of beads in the droplets would result in longer times required for the equilibration of the initial droplets and the coalescence experiments to reach the final equilibrium stage. Still, it would be valuable to extend the range of \(N\) in future investigations and carry out a full scaling analysis of droplet properties on the chain length \(N\) and the overall droplet size. The wettability of the substrate by the droplet is controlled through the parameter \(\varepsilon_{\rm pw}\) of the 9-3 LJ potential, which describes the interaction of the polymer beads with an implicit, smooth wall [30], \[U_{\rm w}(z)=4\varepsilon_{\rm pw}\left[\left(\frac{\sigma_{\rm s}}{z}\right)^ {9}-\left(\frac{\sigma_{\rm s}}{z}\right)^{3}\right], \tag{3}\] where \(z\) is the normal (vertical) distance of the beads from the substrate within a cutoff distance \(z_{\rm c}\,=\,2.5\ \sigma\). Here, \(\sigma_{\rm s}=\sigma\). To evolve our system in time and control the temperature of the system, the Langevin thermostat is used as done previously [31]. The equation of motion for the coordinates \(\{r_{i}(t)\}\) of the beads of mass \(m\) (\(m\) is the unit of mass) \[m\frac{d^{2}{\bf r}_{i}}{dt^{2}}=-\nabla U_{i}-\gamma\frac{d{\bf r}_{i}}{dt}+ \Gamma_{i}(t). \tag{4}\] Figure 1: Evolution of the droplet coalescence on a solid substrate with a lower (a, \(\varepsilon_{\rm pw}\ \ \ \ \ =\ \ \ \ 1.1\ \epsilon\)) and a higher (b, \(\varepsilon_{\rm pw}\ \ =\ \ 2.5\ \epsilon\)) wettability as a function of time, \(t\), from the first permanent contact of the droplets at time \(t_{c}\), as indicated. Here, \(N\ =\ 10\) beads. Moreover, the angle, \(\theta\), and the bridge length, \(b\), are indicated. The implicit, smooth substrate modelled by the 9–3 LJ potential of Equation 3 is illustrated by a solid colour. is numerically integrated for each bead using the LAMMPS package [32]. In Equation 4, \(t\) denotes the time, \(U_{i}\) is the total potential acting on the \(i\)th bead, \(\gamma\) is the friction coefficient, and \(\Gamma_{i}(t)\) is the random force. As is well-known, \(\gamma\) and \(\Gamma\) are related by the usual fluctuation-dissipation relation \[<\Gamma_{i}(t).\Gamma_{j}(t^{{}^{\prime}})>=6k_{B}T\gamma\delta_{ij}\delta(t-t ^{{}^{\prime}}). \tag{5}\] Following previous work [31, 33, 34], the friction coefficient was chosen as \(\gamma\,=\,0.5\)\(\tau^{-1}\). Equation 4 was integrated using an integration time step of \(\Delta t\) = 0.01 \(\tau\), where the time unit is \(\tau\,=\,(m\sigma^{2}/\epsilon)^{1/2}\). A single droplet is first equilibrated for adequate time, so that the total energy has reached a minimum and properties, such as the mean contact angle and average shape of the droplet do not change with time. Then, the equilibrated droplet is cloned and positioned on the substrate as shown in Figure 1. In this case, the size of the box is chosen such to accommodate the two droplets avoiding the interaction of mirror images of the droplets due to the presence of periodic boundary conditions in all Cartesian directions. Moreover, the use of polymer droplets leads to the absence of vapour in the system [35], which greatly facilitates the analysis of the trajectories and maintaining the same thermodynamic conditions during the simulation of either the individual droplet or the two coalescing droplets. Different scenarios of substrate wettability were considered in our study, for which \(\varepsilon_{\rm pw}\) is 2.5 \(\epsilon\) or 1.1 \(\epsilon\). 
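To make the interaction model concrete, below is a small stand-alone sketch, in reduced Lennard-Jones units, of the three potentials of Equations (1)-(3); the function names are ours, the production simulations are run with LAMMPS rather than with this code, and the shift of the pair potential to zero at the cutoff is the usual convention for a truncated-and-shifted potential rather than something spelled out above.

```python
import numpy as np

# Reduced LJ units: sigma = epsilon = m = 1, mirroring the parameters above.
SIGMA, EPS, R_CUT = 1.0, 1.0, 2.5      # pair interaction, Eq. (1)
K_FENE, R0 = 30.0, 1.5                 # bond interaction, Eq. (2)
SIGMA_S, Z_CUT = 1.0, 2.5              # bead-wall interaction, Eq. (3)

def u_lj(r, eps=EPS, sigma=SIGMA, r_cut=R_CUT):
    """Truncated-and-shifted 12-6 Lennard-Jones pair potential."""
    r = np.asarray(r, dtype=float)
    sr6 = (sigma / r) ** 6
    u = 4.0 * eps * (sr6 ** 2 - sr6)
    u_cut = 4.0 * eps * ((sigma / r_cut) ** 12 - (sigma / r_cut) ** 6)
    return np.where(r < r_cut, u - u_cut, 0.0)

def u_fene(r, k=K_FENE, r0=R0):
    """FENE bond potential tethering consecutive beads along a chain."""
    r = np.asarray(r, dtype=float)
    return -0.5 * k * r0 ** 2 * np.log(1.0 - (r / r0) ** 2)

def u_wall(z, eps_pw, sigma_s=SIGMA_S, z_cut=Z_CUT):
    """9-3 LJ potential between a bead and the implicit smooth substrate."""
    z = np.asarray(z, dtype=float)
    u = 4.0 * eps_pw * ((sigma_s / z) ** 9 - (sigma_s / z) ** 3)
    return np.where(z < z_cut, u, 0.0)

if __name__ == "__main__":
    r = np.linspace(0.9, 1.4, 6)
    print("LJ pair:", u_lj(r))
    print("FENE bond:", u_fene(r))
    print("wall (eps_pw = 2.5):", u_wall(r, eps_pw=2.5))
```

Consecutive beads along a chain thus feel the sum of the FENE bond term and the LJ pair term, while all other bead pairs interact through the LJ term alone; the two wettability settings above enter only through the prefactor \(\varepsilon_{\rm pw}=1.1\ \epsilon\) or \(2.5\ \epsilon\) of `u_wall`.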
In this case, the equilibrium contact angles of the individual droplets before coalescence are 78\({}^{\circ}\) and 118\({}^{\circ}\), respectively. To estimate the contact angle of the droplet, a method that avoids a fitting procedure is used, which has been described in Reference [36]. We have also found that the equilibrium contact angles of the individual droplets do not show any statistically significant dependence on the length, \(N\), of the polymer chains. To analyse the bridge growth dynamics, snapshots of the system are frequently dumped, especially for the initial stage, typically every 250 integration time steps and beads are assigned to a three-dimensional grid with size 2.5\(\sigma\) in all directions. For the analysis of each snapshot, the center of the bridge is located in the middle of the grid, corresponding to the position \(x=0\) in the \(x\) direction of the coordinate system and any rotation of the droplets around the \(z\) axis has been removed. This facilitates our analysis and guarantees that our measurements of the bridge radius and all other properties (_e.g._ density profiles) remain consistent as coalescence proceeds. The snapshots of Figure 1 have been taken after performing the above procedure, which is manifested by the perfect alignment of the droplets along their long axis in the \(x\) direction and the bridge is also placed in the middle of the substrate on the \(x-y\) plane. The three dimensional grid is also used to calculate the profile of the number density of the droplets by considering a slab along the \(x-z\) plane in the \(x\) direction that passes through the center of the bridge. Further details regarding the calculation of the various properties are provided later during the discussion of the respective results. ## 3 Results and Discussion Figure 1 shows typical coalescence cases on substrates with different wettability, corresponding to contact angles of lower and greater than 90\({}^{\circ}\). A key parameter for characterising the dynamics of the coalescence process is the bridge length, \(b\), which is indicated for each case in Figure 1. When the substrate is less wettable (contact angle greater than 90\({}^{\circ}\)), the bridge initially forms above the substrate at the contact point between the LV interface of the coalescing droplets, and later comes into contact with the substrate as the coalescence process proceeds (Figure 1a). In contrast, in the case of more wettable substrates ( contact angles lower than 90\({}^{\circ}\)), the bridge grows onto the substrate from the beginning of the coalescence process. While the time that bridge is in contact with the substrate is expected to affect the dynamics of the droplets, in the case of more wettable substrates the interaction between the droplet and the substrate is also stronger. **Figure 2** presents our results for the dynamics of the bridge length, \(b\), on the two different substrates. Apart from the initial thermal regime [7], we find that in terms of the bridge length the dynamics of coalescence on both substrates can be described by a power-law behaviour (\(\sim t^{\beta}\)) with exponents that are clearly lower than 1/2 (contact angles greater than 90\({}^{\circ}\)) and 2/3 (contact angles lower than 90\({}^{\circ}\)), which Figure 3: Angle \(\theta\) (see Figure 1 and main text for further details) as a function of time, \(t\), counting from the time of first permanent contact of the coalescing droplets, \(t_{c}\). 
Data for polymer droplets with different chain length, \(N\), are shown, as indicated. The lines are a guide for the eye. Here, \(\varepsilon_{\rm pw}=2.5\ \epsilon\) and \(\varepsilon_{\rm pw}=1.1\ \epsilon\) (inset). Figure 2: Bridge length, \(b\), as a function of time, \(t\), counting from the time of first permanent contact of the coalescing droplets, \(t_{c}\). Data for polymer droplets with different chain length are shown, \(N\), as indicated. (a) \(\varepsilon_{\rm pw}\ \ =\ \ 1.1\ \epsilon\); (b) \(\varepsilon_{\rm pw}\ =\ 2.5\ \epsilon\). The power-law exponents (\(\sim\ t^{\beta_{b}}\)) are reported for the bridge length with values of \(\chi^{2}/{\rm ndf}\ \approx\ 1\), where ndf indicates the number of degrees of freedom. have been reported for water droplets [16]. Therefore, our results suggest that the rate of coalescence is slower in the case of polymer droplets compared to the case of water droplets. Moreover, the increase of the polymer chain length leads to gradually decreasing values of the power-law exponent for both types of substrates. However, exponents are clearly higher in the case of the more wettable substrate, which suggests that the coalescence process be faster when the attraction of the polymer chains to the substrate is stronger. Hence, an increased substrate wettability appears to accelerate the dynamics of the bridge growth, thus facilitating droplet coalescence throughout the range of \(N\) studied here. Moreover, we have identified the presence of a second regime at the final stages of the coalescence process and when almost the bridge has been fully developed in the case of less wettable substrates, an effect that is more pronounced for longer chain lengths \(N\). In summary, we find that an increasing chain length of the droplets will slow down the coalescence of polymer droplets and more wettable substrates will exhibit faster dynamics than less wettable substrates with power-law exponents, \(\beta_{b}\), significantly lower than what has been observed for sessile water droplets. **Figure 3** presents results for the angle \(\theta\) at the bridge (see Figure 1). A symmetric angle is defined for the second droplet of Figure 1 and the average of the two angles for each snapshot is considered as the value of the angle \(\theta\). To calculate the angle \(\theta\), one considers a horizontal plane that passes through the top of the bridge. Then, the angle is calculated based on the curvature of the droplets as discussed in a previous study, thus avoiding a fitting procedure [36]. Estimating the angles can in general be highly sensitive to the details of the definition of a sharp interface, as well as the fitting procedure [37, 38]. Moreover, models that could account for the disjoining pressure effects, for example, in the context of droplets on solid substrates, might perform better than fitting spherical caps to nanodroplets [37]. In general, our data for the angle \(\theta\) appear noisier than the data referring to the bridge length. One of the main reasons for this are the larger fluctuations on the droplets shape during the coalescence process. Hence, a discussion here can only focus on the dynamics of the angle \(\theta\), which seems to greater be affected at the earlier times of coalescence in the case of more wettable substrates, while curves seem to saturate for chain lengths \(N\,\geq\,80\) beads. Moreover, a faster rate of change appears in the case of the less wettable substrates. 
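Returning briefly to the bridge-growth curves of Figure 2, the exponents \(\beta_{b}\) quoted there come from power-law fits of the form \(b=a\,t^{\beta_{b}}\). The snippet below is only a rough illustration of such a fit on a log-log scale; the data file name, its column layout and the fitting window are hypothetical placeholders, and the analysis behind the reported \(\chi^{2}/\mathrm{ndf}\) values may differ in detail.

```python
import numpy as np

def fit_power_law(t, b, t_min=None, t_max=None):
    """Least-squares fit of b(t) = a * t**beta on a log-log scale.

    t_min/t_max restrict the fit to the scaling window, e.g. to exclude
    the initial thermal regime. Returns the prefactor a and exponent beta."""
    t, b = np.asarray(t, dtype=float), np.asarray(b, dtype=float)
    mask = (t > 0) & (b > 0)
    if t_min is not None:
        mask &= t >= t_min
    if t_max is not None:
        mask &= t <= t_max
    beta, log_a = np.polyfit(np.log(t[mask]), np.log(b[mask]), 1)
    return np.exp(log_a), beta

if __name__ == "__main__":
    # Hypothetical two-column file: time (in tau) and bridge length (in sigma).
    t, b = np.loadtxt("bridge_length_N80_eps1.1.dat", unpack=True)
    a, beta = fit_power_law(t, b, t_min=50.0)
    print(f"b(t) ~ {a:.3f} * t^{beta:.3f}")
```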
We have further explored the dynamics of the coalescence process by monitoring the distance \(X\) of the Figure 4: Distance \(X\) in the \(x\) direction between the centre of mass of coalescing droplets for cases of different chain length, \(N\), as indicated. Insets show the instantaneous velocity of approach \(u=dX/dt\). (a) \(\varepsilon_{\text{pw}}=1.1\;\epsilon\); (b) \(\varepsilon_{\text{pw}}=2.5\;\epsilon\). centre of mass of the coalescing droplets, and, also, calculated its derivative with time, which reflects the instantaneous velocity of approach of the droplets (**Figure 4**). Our data for \(\varepsilon_{\rm pw}\,=\,1.1\ \epsilon\) (less wettable substrate) show two different dynamics regimes with a transition between them that is more pronounced in the case of droplets with longer polymer chains (\(N\,\geq\,40\) beads). This transition seems to become smoother as the chain length decreases. Moreover, the instantaneous velocity, \(u\), of the approach of the droplets is higher at the initial stages of coalescence and then rather reaches a smaller value, which remains constant until the bridge has fully developed. This velocity appears to be similar for the different systems, independently of the chain length. In the case of the systems with \(\varepsilon_{\rm pw}\,=\,2.5\ \epsilon\), a different behaviour is observed. \(X\) steadily decreases, while the velocity, \(u\), obtains small values over the entire coalescing process with the initial instantaneous velocity of the approach of the droplets to exhibit a slightly higher (more negative velocity, since the distance \(X\) decreases) velocity. Hence, although the bridge growth dynamics is faster in the case of the more wettable substrate, the approach of the two droplets appears slower in the case of the more wettable substrate. Finally, we have calculated the density profiles of the droplets during coalescence. From the obtained results, we have not identified any noticeable changes in the density profiles for droplets of different chain length and substrates with different wettability. We have also analysed the flow patterns and they have also not revealed any noticeable differences for the different systems. Typical density profiles for various cases are presented in **Figure 5** at an initial stage of the coalescence process, when droplets come into contact, and at a later stage when the bridge has been clearly developed. Hence, while the dynamics of the coalescence process depends on the choice of the substrate and the chain length of the polymers, no noticeable changes in the patterns of the density and the flow are observed during coalescence for the various cases considered in our study. ## 4 Conclusions In this study, we have characterised the dynamics of the coalescence of polymer droplets with different chain lengths on substrates with different wettability, where the contact angle of individual droplets is less and above \(90^{\circ}\). The rate of coalescence is a key property and can be characterised by the growth rate of the bridge length. We find that polymer droplets overall show a slower rate of the bridge growth in comparison with what has been observed in the case of water droplets in experiments. 
Moreover, the Figure 5: Profiles of the number density along a cross-section in the \(x\) direction (\(x\,\,\,\,-\,\,\,\,z\) plane) of the coalescing droplets (\(N\,\,\,=\,\,\,\,640\) beads) at different stages (upper panels correspond to snapshots obtained at time \(t_{c}\), when the droplets come into permanent contact). (a, b) \(\varepsilon_{\rm pw}=1.1\ \epsilon\); (c, d) \(\varepsilon_{\rm pw}=2.5\ \epsilon\). dynamics are slower as the length of the polymer chains of the droplets increases. Also, we find that more wettable substrates will exhibit faster dynamics, which suggests that a stronger attraction between the droplet and the substrate will accelerate the bridge growth. In addition, we have characterised the dynamics of the approach of the two droplets based on the distance between the centre of masses of the coalescing droplets. The behaviour is different when the wettability of the substrate changes with two different regimes being more pronounced in the case of less wettable substrates. In this case, differences in the dynamics between droplets with different chain lengths have been also observed. While the dynamics of the coalescence can vary when the length of the polymer chains or the substrate wettability vary, the density and velocity profile patterns do not reveal any dependence on these parameters. Thus, we anticipate that our study provides insights in the coalescence of liquid polymer droplets on solid substrates. **Acknowledgements** This research has been supported by the National Science Centre, Poland, under grant No. 2019/34/E/ST3/00232. We gratefully acknowledge Polish high-performance computing infrastructure PLGrid (HPC Centers: ACK Cyfronet AGH) for providing computer facilities and support within computational grant no. PLG/2022/015261.
2305.20079
Understanding and Measuring the Effects of Graphical Dimensions on Viewers' Perceived Chart Credibility
Journalists and visualization designers include visualizations in their articles and storytelling tools to deliver their message effectively. But the design decisions they make to represent information, such as the graphical dimensions they choose, as well as the viewer's familiarity with the content, can impact the viewer's perceived credibility of charts, especially in a context where little is known about the sources of online information. Yet there is little experimental evidence that designers can refer to when making these decisions. Hence, this work aims to study and measure the effects of graphical dimensions and people's familiarity with the content on viewers' perceived chart credibility. I plan to conduct a crowd-sourced study with three graphical dimension conditions, which are traditional charts, text annotation, and infographics. Then I will test these conditions on two user groups, which are domain experts and non-experts. With these results, this work aims to provide chart design guidelines for visual designers, backed by experimental evidence.
Hayeong Song, John Stasko
2023-05-31T17:54:25Z
http://arxiv.org/abs/2305.20079v1
# Understanding and Measuring the Effects of Graphical Dimensions on Viewers' Perceived Chart Credibility ###### Abstract Journalists and visualization designers include visualizations in their articles and storytelling tools to deliver their message effectively. But the design decisions they make to represent information, such as the graphical dimensions they choose, as well as the viewer's familiarity with the content, can impact the viewer's perceived credibility of charts, especially in a context where little is known about the sources of online information. Yet there is little experimental evidence that designers can refer to when making these decisions. Hence, this work aims to study and measure the effects of graphical dimensions and people's familiarity with the content on viewers' perceived chart credibility. I plan to conduct a crowd-sourced study with three graphical dimension conditions, which are traditional charts, text annotation, and infographics. Then I will test these conditions on two user groups, which are domain experts and non-experts. With these results, this work aims to provide chart design guidelines for visual designers, backed by experimental evidence. **Index Terms:** Human-centered computing--Visualization--Empirical studies in visualization; Human-centered computing--Visualization--Visualization application domains--Information visualization ## 1 Introduction Visualization designers, journalists, scholars, and system designers consider different visual representations to deliver a message or stories they produce, such as in storytelling tools. When designing, visual designers have to decide what information (e.g., scientific evidence) to include and make design decisions to represent that information (e.g., infographics). These design decisions can impact viewers' perceived chart credibility. Prior work has shown that how information is visually represented can impact the viewer's perceived data quality [17, 18] and data interpretation. For example, when the title and visualization were misaligned, viewers trusted the chart less, which even led to biases in their data interpretation [9, 10]. Prior work has also shown that data is personal and that people's background (e.g., where they are from, experiences, expertise) can impact their perception and interpretation of charts [16]. However, there is little experimental evidence to guide designers as to what graphical dimensions and user characteristics (e.g., domain experts and novices) impact viewers' perceived chart credibility. Thus, in my work, I aim to understand and measure the effects of graphical dimensions on viewers' perceived chart credibility. Also, I aim to measure the effects of subject matter expertise on viewers' perceived credibility of charts. Based on these findings, I will compose chart design guidelines, backed by experimental evidence, that visual designers can refer to when creating visualizations. ## 2 Background and Related work In today's media environment, online information is increasingly prevalent, including newspapers, web blogs, and social media. Oftentimes, visualizations are used to represent this online information in storytelling tools or journalism to provide data-driven messages [8, 11, 21]. But online information is not easily identified, which makes boundaries between perceived sources vague and ambiguous [1, 13]. For example, online users view an article based on the search results that a search engine returns. But they view these online articles without contextual information and the origin of the source [6]. 
This can impact the perceived credibility of messages and charts. Thus, we need to understand the effects of graphical dimensions that can provide contextual information for charts, as well as user characteristics, on viewers' perceived credibility. When designing a visualization, visual designers have to decide what information (e.g., title, historical context of the message) to include and make design decisions to represent that information (e.g., text with annotation to provide additional context [9, 19], infographics). These design decisions can impact viewers' perception of chart credibility. Prior studies have shown that embellished charts are likely to draw people's attention more and be more engaging, thus having viewers be more involved in understanding the message of the chart [20, 2]. But we understand very little about how these graphical dimensions and design choices impact viewers' perceived chart credibility. Prior work has also shown that people's familiarity with the content impacts how they learn and read content [3, 4, 5, 15], which can impact their data interpretation and perception of chart credibility. Based on their knowledge level of the content, they might differentially perceive the credibility of charts. ## 3 Proposed Research & Research Aims The core aim of this research is to understand and measure the effects of graphical dimensions and people's familiarity with the content on viewers' perceived chart credibility. **First, identify different graphical dimensions worth testing.** In my work, I plan to test text annotations, infographics, and traditional charts. These graphical dimensions were selected based on literature reviews. I also selected these test conditions because they are employed in realistic scenarios, such as in data journalism. In particular, I will test these on bar charts and line charts, as these are the basic types of charts that are commonly used. **Second, measure the effects of graphical dimensions and the viewer's familiarity with the content on the viewer's perceived chart credibility.** I will design a study to measure viewers' perceived credibility of charts using credibility metrics (e.g., accuracy, fairness, trustworthiness) [7, 14]. I will conduct crowd-sourced studies to assess them. **Lastly, compose design guidelines for visual designers with experimental evidence.** We want to assess the effects of design choices on viewers' perceived chart credibility to propose chart design guidelines. ## 4 Planned Methodology To understand and assess the impact of graphical dimensions and users' familiarity with content on viewers' perceived chart credibility, I will conduct a crowd-sourced study (e.g., MTurk). The study will be a within-subjects study where participants will see all of the test conditions. I will use mixed methods to collect both quantitative and qualitative data. I will recruit two groups, subject domain experts and non-experts, based on their expertise level with the content. I will use a screener to determine their expertise level. Participants will be asked to read charts and to self-report their perceived chart credibility using credibility metrics [7, 14] (1 - very low, 7 - very high). Then I will collect their reasons for those ratings to understand how graphical dimensions or their expertise level impacted their subjective reports. ### _Hypotheses_ These are the hypotheses I'd like to test in my study. 
* H1: Viewers will report the perceived chart credibility to be higher with richer annotations. * H2: Viewers will report the perceived chart credibility to be lower for infographics than for charts with annotations and traditional charts. * H3: Viewers will report the infographics to be more engaging. ### _Study conditions_ I plan to test these conditions on bar charts and line charts as they are commonly used in various scenarios. I will focus on two factors, 1) graphical dimensions and 2) the viewer's familiarity with the content. For graphical dimensions, I will test 1) traditional charts, 2) text annotations, and 3) infographics. For text annotation, I will use different semantic levels of text annotations [12]. These conditions were selected based on realistic scenarios, where these conditions are employed and considered when designing charts for data-driven journalism and storytelling tools. Also, for viewers' familiarity with the content, I will test with subject domain experts and non-experts. ## 5 Progress so far & Next steps First, I identified graphical dimensions and user characteristics worth testing. Second, I defined perceived credibility and decided on potential tasks to measure viewers' perceived chart credibility for my study. I also decided on potential data sets to use in my study. Lastly, for the next step, I am refining the study design and plan to create or select visualizations to use in the crowdsourced studies. For the basic chart and text annotation conditions, I will create the visualizations with d3.js or Vega-Lite. For the infographics condition, I will use already-created visualizations. ## 6 Challenges The following steps and challenges need to be addressed for this work to be successful, so feedback and input on the potential direction of the study and the experiment design would be useful. First, identify optimal data and tasks to use for the study. I am planning to use domains that are frequently used in data journalism, such as sports and politics. Second, identify tasks and how to evaluate perceived credibility for the crowdsourced study. ## 7 Expected Contributions My work will produce the following contributions to the research community: * A study of the effects of graphical dimensions and viewers' subject expertise level on viewers' perceived chart credibility. * Measurements of the effects of different graphical dimensions and user characteristics on viewers' perceived credibility of charts. * Chart design guidelines with experimental evidence that visual designers can refer to when making design decisions.
2309.14021
LORD: Low Rank Decomposition Of Monolingual Code LLMs For One-Shot Compression
Low Rank Decomposition of a matrix - splitting a large matrix into a product of two smaller matrices - offers a means for compression that reduces the parameters of a model without sparsification, and hence delivers more speedup on modern hardware. Moreover, unlike quantization, the compressed linear layers remain fully differentiable and all the parameters trainable, while being able to leverage the existing highly efficient kernels over floating point matrices. We study the potential to compress Large Language Models (LLMs) for monolingual Code generation via Low Rank Decomposition (LoRD) and observe that ranks for the linear layers in these models can be reduced by up to 39.58% with less than 1% increase in perplexity. We then use Low Rank Decomposition (LoRD) to compress StarCoder 16B to 13.2B parameters with no drop and to 12.3B with minimal drop in HumanEval Pass@1 score, in less than 10 minutes on a single A100. The compressed models speed up inference by up to 22.35% with just a single line of change in code over huggingface's implementation with pytorch backend. Low Rank Decomposition (LoRD) models remain compatible with state-of-the-art near-lossless quantization methods such as SpQR, which allows leveraging further compression gains of quantization. Lastly, QLoRA over Low Rank Decomposition (LoRD) models further reduces memory requirements by as much as 21.2% over vanilla QLoRA while offering similar gains from parameter-efficient fine-tuning. Our work shows Low Rank Decomposition (LoRD) as a promising new paradigm for LLM compression.
Ayush Kaushal, Tejas Vaidhya, Irina Rish
2023-09-25T10:35:17Z
http://arxiv.org/abs/2309.14021v1
# LoRD: Low Rank Decomposition of Monolingual Code LLMs for One-Shot Compression ###### Abstract Low Rank Decomposition of a matrix - splitting a large matrix into a product of two smaller matrices - offers a means for compression that reduces the parameters of a model without sparsification, and hence delivers more speedup on modern hardware. Moreover, unlike quantization, the compressed linear layers remain fully differentiable and all the parameters trainable, while being able to leverage the existing highly efficient kernels over floating point matrices. We study the potential to compress Large Language Models (LLMs) for monolingual Code generation via Low Rank Decomposition (LoRD) and observe that ranks for the linear layers in these models can be reduced by up to 39.58% with less than 1% increase in perplexity. We then use LoRD to compress StarCoder 16B to 13.2B parameters with no drop and to 12.3B with minimal drop in HumanEval Pass@1 score, in less than 10 minutes on a single A100. The compressed models speed up inference by up to 22.35% with just a single line of change in code over huggingface's implementation with pytorch backend. LoRD models remain compatible with state-of-the-art near-lossless quantization methods such as SpQR, which allows leveraging further compression gains of quantization. Lastly, QLoRA over LoRD models further reduces memory requirements by as much as 21.2% over vanilla QLoRA while offering similar gains from parameter-efficient fine-tuning. Our work shows Low Rank Decomposition (LoRD) as a promising new paradigm for LLM compression. 1 Footnote 1: We will release LoRDCoder at [https://huggingface.co/nolanoAI](https://huggingface.co/nolanoAI) ## 1 Introduction Code LLMs have become an integral component of Copilots that boost developer productivity (Peng et al., 2023) and of LLM-based agents (Wang et al., 2023). These Code LLMs are as large as 34 Billion parameters for the publicly available models Roziere et al. (2023) and more than 175 Billion parameters for closed-source ones Chen et al. (2021). There is not only a pressing need for reducing model size and running models at a lower cost, but also for increasing the inference speed. The latter is especially significant for Copilot-based applications. Recently, several methods have been proposed to compress and speed up inference of LLMs. Quantization (Frantar et al., 2023; Dettmers et al., 2023) reduces the number of bits required per weight parameter of an LLM by lowering the precision, and has shown significant model compression as well as speedups in low-batch decoding phases of LLMs Kim et al. (2023). Quantization has also been shown to generalize well to quantized models Shen et al. (2023). Pruning (Sun et al., 2023; Frantar and Alistarh, 2023) has offered another means of compression by removing connections from the neural network and hence sparsifying the weight matrices of the neural networks. Distillation Gu et al. (2023); Agarwal et al. (2023); Jung et al. (2023) enables one to train a smaller model using a larger teacher model for supervision. While quantization and pruning methods that do not require re-training are viable means of compressing the model, distillation involves a significant amount of compute for retraining a smaller LLM, often from scratch. Here, we consider another compression paradigm, Low Rank Decomposition (LoRD), which does not require expensive retraining as in the case of distillation and avoids several deficiencies of the quantization and pruning compression methods. 
Low Rank Decomposition factorizes a dense matrix of a neural network as a product of two smaller dense matrices. The LoRD model can leverage the highly optimized floating-point dense matrix multiplication kernels (NVIDIA, 2007; Blackford et al., 2002) that have been written over modern hardware. In contrast, quantized models require specialized kernels to be written, often different for each hardware backend in order to enable fast inference. Moreover, the neural network remaining fully-differentiable and all the parameters remaining trainable even after compression, unlike quantization. The LoRA Hu et al. (2022) layers of tuned models are also easier to merge back into floating point matrices compared to the quantized ones. Pruned models produce sparse matrix weights in the neural network. Matrix multiplication over sparse matrices is much slower than the resulting dense matrices in LoRD on most GPUs. Dense matrices, in addition avoid representation format overhead that sparse matrices incur from parameter reduction 2 and often requires specialized kernels for reducing this overhead Dettmers et al. (2023b). Dense matrix multiplication is also easier to implement than sparse matrix multiplication, especially over quantized models. Footnote 2: This overhead in sparse matrix occurs from having to store indices/bitmasks to indicate which values are present and not. This can be very significant at low levels of sparsity. PyTorch’s sparse formats (CSR, CSC, COO) all store indices at int64 format, and for moderate levels of sparsity (\(<\)50%), the sparse matrix takes up more space than a dense matrix with zero-ed out values. Several previous works have attempted to apply matrix decomposition methods like SVD, Tucker or Kronecker decomposition for compression (Ben Noach & Goldberg, 2020; Tahaei et al., 2022; Edalati et al., 2022). However, these have been limited to small language models like Bert (Devlin et al., 2019) and GPT2 (Radford et al., 2019), and have shown success only on narrow task-specific use cases or after retraining, often only with teacher-guided distillation supervision. These works have observed that weight matrices are not low rank and adapt methods like Singular Value Decomposition for data-aware decomposition of weights (Chen et al., 2021b; Hsu et al., 2022; Yu & Wu, 2023). We, adapt these approaches for Large Language Models (Billion+ Parameters) over python code, and show that these models can be low-rank decomposed to compress and speed up inference without the need for retraining with little to no performance degradation. We study low-rank decomposition across two families of code LLMs - StarCoder and CodeGen (SS2) for varying parameter sizes and establish the potential for reducing rank of models through decomposition. We then study these trends across different kinds of linear layers in a transformer block and observe the potential for upto 39.58% rank reduction with less than 1% change in perplexity. We propose various considerations for compressing the models and to achieve inference speedup on GPUs (SS3.1). Using these, we achieve compression of the StarCoder 16B model offering 31.67 HumanEval Chen et al. (2021a) Pass@1 score down to 13.2B parameter with similar performance of 31.57 HumanEval and down to 12.3B parameter with 29.22 HumanEval score (SS3.2). LoRD models, offer an inference speedup of as high as 22.35% with just one line of change in huggingface's (SS3.3). 
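To make the sparse-storage overhead discussed above (footnote 2) concrete, the following back-of-the-envelope sketch in plain Python compares a dense fp16 weight matrix, a 50%-sparse CSR version of it with int64 indices, and a LoRD factorization of the same layer at rank 1024; the 4096×4096 layer size and the byte widths are illustrative assumptions, not measurements from the paper.

```python
def dense_bytes(d1, d2, elem_bytes=2):             # fp16 weights
    return d1 * d2 * elem_bytes

def csr_bytes(d1, d2, sparsity, elem_bytes=2, idx_bytes=8):
    nnz = int(d1 * d2 * (1 - sparsity))            # surviving non-zero weights
    return (nnz * elem_bytes                       # values
            + nnz * idx_bytes                      # int64 column index per value
            + (d1 + 1) * idx_bytes)                # int64 row-pointer array

def lord_bytes(d1, d2, r, elem_bytes=2):           # W (d1 x d2) ~= B (d1 x r) @ A (r x d2)
    return (d1 * r + r * d2) * elem_bytes

d = 4096                                           # illustrative square projection layer
print(f"dense fp16      : {dense_bytes(d, d) / 2**20:6.1f} MiB")
print(f"CSR, 50% sparse : {csr_bytes(d, d, 0.5) / 2**20:6.1f} MiB")   # exceeds the dense size
print(f"LoRD, rank 1024 : {lord_bytes(d, d, 1024) / 2**20:6.1f} MiB")
```

At this moderate sparsity level the index overhead alone exceeds the dense matrix, whereas the two dense LoRD factors carry no such overhead and keep using standard GEMM kernels.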
These LoRD models can be further compressed via the near-lossless quantization method of SpQR Dettmers et al. (2023b) to reduce their precision to 8 and 4 bits without any further reduction in HumanEval performance (SS4.1). Finally, these decomposed models also reduce the memory requirements of adapter finetuning by 21.2% over QLoRA (SS4.2).

## 2 Code LLMs are Low Rank Decomposable

### Background

Let \(L\) be a linear layer of an LLM \(M\) with weight \(W\in\mathbb{R}^{d_{1}\times d_{2}}\) and bias \(b\in\mathbb{R}^{d_{1}\times 1}\). Let \(d_{min}=\min(d_{1},d_{2})\) and \(d_{max}=\max(d_{1},d_{2})\). A **Low Rank Decomposition** or Low Rank Factorization of a layer \(L\) gives us a new layer \(\tilde{L}\) with two weight matrices \(A\in\mathbb{R}^{r\times d_{2}}\) and \(B\in\mathbb{R}^{d_{1}\times r}\), and a bias \(\tilde{b}\in\mathbb{R}^{d_{1}\times 1}\), where \(r\ll d_{min}\), such that for a batch of \(n\) input vectors \(X\in\mathbb{R}^{d_{2}\times n}\) the batch of output vectors \(Y\in\mathbb{R}^{d_{1}\times n}\) is \[Y=\tilde{L}(X)=BAX+\tilde{b}\approx L(X)=WX+b \tag{1}\] Singular Value Decomposition (SVD) offers the best \(r\)-rank approximation of a matrix \(W\in\mathbb{R}^{d_{1}\times d_{2}}\). First, \(W\) can be decomposed as \(W=USV^{T}\), where \(U\in\mathbb{R}^{d_{1}\times d_{1}}\) and \(V\in\mathbb{R}^{d_{2}\times d_{2}}\) are orthogonal matrices and \(S\in\mathbb{R}^{d_{1}\times d_{2}}\) is a diagonal matrix with the singular values in decreasing order. Then, by keeping only the top \(r\) singular values, we can decompose \(W\) as a product of two low-ranked matrices \(W\approx BA\) as follows \[W\approx\underbrace{(U_{:,:r}\,S_{:r,:r})}_{B\in\mathbb{R}^{d_{1}\times r}}\;\underbrace{(V_{:,:r})^{T}}_{A\in\mathbb{R}^{r\times d_{2}}} \tag{2}\] where \(M_{:a,:b}\) denotes the slice of a matrix \(M\) consisting of its first \(a\) rows and first \(b\) columns. Eigendecomposition is another decomposition method, applicable to symmetric matrices. We can represent the eigendecomposition of a symmetric matrix \(W\in\mathbb{R}^{d_{1}\times d_{1}}\) as \(W=Q\Lambda Q^{T}\). Here \(Q\in\mathbb{R}^{d_{1}\times d_{1}}\) is an orthogonal matrix whose columns are the eigenvectors of \(W\), and \(\Lambda\in\mathbb{R}^{d_{1}\times d_{1}}\) is a diagonal matrix whose entries are the eigenvalues of \(W\) sorted in decreasing order. Similar to SVD, we can decompose \(W\) as a product of two low-ranked matrices \(W\approx BA\) by retaining only the largest \(r\) eigenvalues (and corresponding eigenvectors) as follows: \[W\approx\underbrace{(Q_{:,:r}\,\Lambda_{:r,:r})}_{B\in\mathbb{R}^{d_{1}\times r}}\;\underbrace{(Q_{:,:r})^{T}}_{A\in\mathbb{R}^{r\times d_{1}}} \tag{3}\] Since \(Q\) is orthogonal and the eigenvalues in \(\Lambda\) are sorted in descending order, \(Q_{:,:r}(Q_{:,:r})^{T}\approx\mathbf{I}\), where \(\mathbf{I}\) is the identity matrix of dimension \(d_{1}\). While SVD gives the optimal low-rank approximation of a matrix in terms of the Frobenius norm, it does not take the input and output data distributions into account. Approaches like weighted SVD (Hsu et al., 2022) and SVD over both weights and data (Chen et al., 2021b) have been proposed but are prohibitively expensive to scale to larger models because they require backpropagation over a calibration dataset. SVD over very large weight matrices is also very computationally expensive. 
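As a minimal, self-contained sketch of the plain truncated-SVD factorization in Eq. 2 (the data-agnostic baseline, not the data-aware variant adopted below), the following NumPy snippet splits a weight matrix into the two smaller factors; the matrix sizes and the rank are arbitrary toy values.

```python
import numpy as np

def svd_low_rank(W, r):
    """Split W (d1 x d2) into B (d1 x r) and A (r x d2) with W ~= B @ A, as in Eq. 2."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)  # W = U @ diag(S) @ Vt
    B = U[:, :r] * S[:r]                              # fold the top-r singular values into B
    A = Vt[:r, :]                                     # first r rows of V^T
    return B, A

W = np.random.randn(1024, 4096).astype(np.float32)    # toy weight matrix
B, A = svd_low_rank(W, r=256)
rel_err = np.linalg.norm(W - B @ A) / np.linalg.norm(W)
print(B.shape, A.shape, f"relative Frobenius error: {rel_err:.3f}")
```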
So, we instead leverage the observation that activations in transformers are low-ranked (Feng et al., 2022) and adapt the more heuristically driven approach of Atomic Feature Mimicking (AFM) (Yu and Wu, 2023) that creates low rank matrices conditioned on a small amount of calibration data. Specifically, consider the eigen-decomposition of Covariance over \(Y\) as \[\mathbb{E}[yy^{T}]-\mathbb{E}[y]\mathbb{E}[y]^{T}=\hat{Q}\Lambda\hat{Q}^{T} \tag{4}\] Here \(\hat{Q}\) is a matrix of its eigenvectors, hence \(\hat{Q}_{:,:r}\hat{Q}^{T}_{:,:r}\approx\mathbf{I}\). Using this, we can write the output vector \(Y\) as \(Y\approx\hat{Q}_{:,:r}\hat{Q}^{T}_{:,:r}Y\). By writing \(Y\) in terms of \(W\), \(X\) and \(b\) from Equation 1, we have: \[Y\approx\hat{Q}_{:,:r}\hat{Q}^{T}_{:,:r}WX+\hat{Q}_{:,:r}\hat{Q}^{T}_{:,:r}b \tag{5}\] Comparing to Equation 1, this gives us \(B=\hat{Q}_{:,:r}\in\mathbb{R}^{d_{1}\times r}\), \(A=\hat{Q}^{T}_{:,:r}W\in\mathbb{R}^{r\times d_{2}}\) and \(\tilde{b}=\hat{Q}_{:,:r}\hat{Q}^{T}_{:,:r}b\approx b\). This approach is also straightforward to adapt for LLMs like LLMa (Touvron et al., 2023), Falcon (Penedo et al., 2023), CodeLLaMa (Roziere et al., 2023) which do not have a bias term in the linear layer by setting \(\tilde{b}\) to zero vector. ### Experimental Settings We take our python calibration dataset from the stack (Koectkov et al., 2022) and consider the corresponding subset of the stack smol (Bigcode, 2022) as validation data. We filter out those sequences which are less than 1024 tokens or 10240 characters in length. We consider CodeGen and StarCoder model family of models. CodeGen mono models are present across 350M, 2B, 6B and 16B parameters and are CodeGen models that were further trained on only python code. StarCoder 16B is the StarCoderBase 16B model further trained on only python code from the stack dataset's train split. We also consider StarCoderBase at 3B and 7B parameter sizes in StarCoder family due to the lack of their monolingual counterparts. All our experiments were performed on a single A100 GPU in under an hour for each run. For studying the trends of increase in perplexity for a reduction in rank across difference model sizes, we set a fixed low-rank \(r\) for all the layers. Later we discuss how to achieve compression and inference speedup via low-rank decomposition in SS3 ### Change in Perplexity across Reduction in Rank Figure 0(a) and 0(b) show the trends of increase in perplexity across reduction in rank of the weight matrix of CodeGen and StarCoder models. For the largest models in both families, we observe only about a 1% increase in perplexity for 10% reduction in rank, and upto 35% reduction in rank for less than 10% increase in perplexity. The smallest model, CodeGen Mono 350M, however, can only be decomposed to 35% rank reduction for a similar drop in perplexity. We observe that the perplexity changes much slower for larger models as the % rank reduces, and hence can be compressed mode, similar to observations in quantization and pruning (Li et al., 2020). It should be noted that for most models, more than 50% leads to significant output quality degradation. ## 3 Compression and speedup through Decomposition In this section, we discuss how we adapt the Low Rank Decomposition (LoRD) for reducing the size of model and achieving inference speedup without a significant reduction in the output quality of the model. 
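For concreteness, the AFM-style construction of Eq. 4-5 described above can be written in a few lines of NumPy; the toy layer sizes, the random calibration batch, and the function name are our own illustrative choices rather than the paper's implementation.

```python
import numpy as np

def afm_low_rank(W, b, Y, r):
    """W: (d1, d2) weight, b: (d1,) bias, Y: (d1, n) layer outputs on calibration inputs."""
    Yc = Y - Y.mean(axis=1, keepdims=True)
    cov = Yc @ Yc.T / Y.shape[1]                   # E[yy^T] - E[y]E[y]^T  (Eq. 4)
    eigvals, Q = np.linalg.eigh(cov)               # eigh returns ascending eigenvalues
    Q_r = Q[:, ::-1][:, :r]                        # top-r eigenvectors
    return Q_r, Q_r.T @ W, Q_r @ (Q_r.T @ b)       # B, A, b_tilde  (Eq. 5)

d1, d2, n, r = 1024, 4096, 512, 256                # toy sizes
W, b, X = np.random.randn(d1, d2), np.random.randn(d1), np.random.randn(d2, n)
Y = W @ X + b[:, None]                             # calibration outputs of the layer
B, A, b_tilde = afm_low_rank(W, b, Y, r)
Y_hat = B @ (A @ X) + b_tilde[:, None]
print(np.linalg.norm(Y - Y_hat) / np.linalg.norm(Y))  # reconstruction error at rank r
```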
Figure 1: Perplexity vs. % Reduction in Rank for Different Models.

Following (Kim et al., 2023), we assume memory bandwidth is the bottleneck for inference, and thus speedups for decoding are directly proportional to the size of the transformer model.

### Achieving compression and inference speedup

**Threshold for size reduction across rank reduction:** Consider a weight matrix \(W\in\mathbb{R}^{d_{1}\times d_{2}}\) of a transformer layer with low-rank decomposition \(A\in\mathbb{R}^{r\times d_{2}}\) and \(B\in\mathbb{R}^{d_{1}\times r}\). The numbers of parameters before and after decomposition are, respectively, \(d_{1}d_{2}\) and \(r(d_{1}+d_{2})\). Therefore, if \(r>\frac{d_{1}d_{2}}{d_{1}+d_{2}}\) (i.e., a decomposition with a small rank reduction), then the size of the model after decomposition can even be higher than that of the original model. Ideally, we would want the rank \(r\ll\frac{d_{1}d_{2}}{d_{1}+d_{2}}\), or \(r\ll d_{min}\). **Matrix Aspect Ratio and Compression:** Let the ratio of the smaller dimension to the larger dimension of the matrix (i.e., the aspect ratio) be \(\alpha=\frac{d_{min}}{d_{max}}\). For a square matrix \(\alpha=1\), and for tall or fat matrices \(\alpha\ll 1\). We can rewrite the percentage change in parameters from decomposition in terms of the percent change in rank, \(\%\Delta r=100\cdot\frac{d_{min}-r}{d_{min}}\), and the aspect ratio as: \[100\cdot\frac{r(d_{max}+d_{min})-d_{max}d_{min}}{d_{max}d_{min}}=100\alpha-(1+\alpha)\,\%\Delta r \tag{6}\] It should be noted that the change in parameters from decomposition can be either positive (the number of parameters increases after decomposition) or negative (the number of parameters decreases after decomposition). In order to achieve model compression, and consequently inference speedups, one would want a large negative percentage change in parameters. **Parity Point for Compression across Rank Reduction:** Using Eq. 6, one can observe that a small reduction in rank may lead to an increase in model parameters instead of a decrease. For instance, square matrices (\(\alpha=1\)) approach a 100% increase (i.e., a doubling in size) as \(\%\Delta r\to 0_{+}\), and only after the rank is reduced by more than 50% is the **Parity Point** of rank reduction reached, at which the decomposed layer has the same or a smaller number of parameters than the original matrix. For tall or fat matrices (\(\alpha\to 0_{+}\)), this parity point can be reached with a very small percent reduction in rank, after which further reduction starts shrinking the model. For compression to be achieved, we want to reduce the rank enough to cross this parity-point threshold. However, reducing the rank by a lot can degrade performance significantly. So we must take the aspect ratio into account in order to achieve compression without much reduction in rank (and hence without significant degradation in output quality). A transformer model has different aspect ratios across its various linear layers: \(\alpha=1.00\) for the output projection after attention, \(\alpha=0.96\) for Multi-Query Attention (Shazeer, 2019) projections, \(\alpha=0.25\) for typical MLP projections with an intermediate expansion factor of 4 as in the original transformer, and as low as \(\alpha=0.12\) for the embedding and language-model head projections of CodeGen 16B with its 51200-token vocabulary. Figure 2 plots the % change in the size of the model across % reduction in rank for matrices with different aspect ratios. 
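The relationship in Eq. 6 and the parity points quoted in the text can be checked numerically with a small helper; the function names are ours, and the aspect ratios are the ones listed above.

```python
def param_change_pct(alpha, rank_reduction_pct):
    """Eq. 6: % change in parameters for aspect ratio alpha at a given % rank reduction."""
    return 100 * alpha - (1 + alpha) * rank_reduction_pct

def parity_point_pct(alpha):
    """% rank reduction at which the decomposed layer has as many parameters as the original."""
    return 100 * alpha / (1 + alpha)

layers = [("output projection", 1.00), ("MQA projection", 0.96),
          ("MLP projection", 0.25), ("embedding / LM head", 0.12)]
for name, alpha in layers:
    print(f"{name:>20}: parity at {parity_point_pct(alpha):4.1f}% rank reduction, "
          f"{param_change_pct(alpha, 30):+6.1f}% parameters at 30% rank reduction")
```

A square layer (alpha = 1) only breaks even at 50% rank reduction, while the alpha = 0.25 MLP projection breaks even at 20%, matching the discussion above.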
For square and near-square matrices, a small rank reduction doubles the size of the linear layer after decomposition, and only after the parity point of 50% rank reduction is the size after decomposition the same as that of the original matrix. At this extent of rank reduction, performance starts to degrade significantly, as seen in SS2.3. All the previous works on smaller models address this by retraining the model (Yu and Wu, 2023; Chen et al., 2021; Hsu et al., 2022; Ben Noach and Goldberg, 2020), often via knowledge distillation supervision (Hinton et al., 2015; Sanh et al., 2019) on specific narrow tasks. However, retraining is infeasible for larger models. Thus, we skip matrices with very high aspect ratios, such as the output projection or multi-query attention, when decomposing. In contrast, the weights in the MLP achieve parity at only 20% rank reduction. While embeddings and the LM head can be compressed through decomposition, as they have been for smaller transformer models (Baevski and Auli, 2019; Lan et al., 2020), they contribute only a very small portion of the model's weights. So, we do not consider decomposing these matrices. In order to reduce the aspect ratio of matrices, we **group layers** with the same input vector to have the same bottleneck matrix after decomposition. Doing so enables re-use of computation and sharing of weights, as well as bringing the aspect ratio down so that compression is achieved at a lower rank reduction. Candidate layers for grouping include the query, key and value projection matrices in multi-headed attention, with the aspect ratio reduced to \(\alpha=0.33\), and the gating layer in SwiGLU (Shazeer, 2020) grouped with the first linear layer of the MLP in models like LLaMa (Touvron et al., 2023), with \(\alpha=0.1875\).

Figure 2: Parity Point across various aspect ratios (\(\alpha\)) of the different linear layers in transformers.

**Trends across different layers in a transformer block:** In addition to taking the parity point into account when deciding which layers to decompose, we also study the sensitivity of each of these layers to low-rank decomposition for the largest model in each of the two model families. Figure 3 shows the increase in perplexity vs. reduction in model parameters for the two models. For both models, decomposing all the linear layers achieves the parity point much later than any single one of these linear layers with a low aspect ratio. For CodeGen, the attention weight matrix (query, key and value projections) offers the least increase in perplexity for the biggest drop in parameter count, making this layer the most suitable candidate for decomposition. It shows less than 1% increase in perplexity even after 39.58% rank reduction. We observe mlp 2 (the down-scaling MLP) to be a better candidate for decomposition than mlp 1 (the up-scaling MLP) across both models. This makes mlp 2 a good candidate for low-rank decomposition of the StarCoder model. **Hardware Considerations:** On modern hardware accelerators like GPUs and their corresponding software stacks, matrix multiplication kernels are faster if their dimensions are divisible by a high power of 2. So, we consider ranks at a reduction of approximately every 10%, rounded off to the nearest multiple of 128 in our experiments.

### Performance of compressed models

We consider the largest models of the StarCoder and CodeGen families (16B) and perform low-rank decomposition on both with varying ranks. 
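As a sketch of the rank-selection heuristic described under "Hardware Considerations" above, candidate ranks can be taken at roughly 10% reduction steps and rounded to the nearest multiple of 128 so the resulting matrix shapes stay GPU-friendly; the helper and its defaults are our own naming, not released code, and it does not exactly reproduce the ranks reported in the tables.

```python
def candidate_ranks(d_min, step_pct=10, multiple=128):
    """Candidate ranks at ~every step_pct% reduction, rounded to a multiple of `multiple`."""
    ranks = []
    for reduction in range(step_pct, 100, step_pct):
        r = round(d_min * (100 - reduction) / 100 / multiple) * multiple
        if 0 < r < d_min and r not in ranks:
            ranks.append(r)
    return ranks

print(candidate_ranks(6144))   # 6144 is the full rank of the 16B models studied here
```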
We consider decomposing layers that offers most parameter reduction (SS3.1) with least increase in perplexity - mlp 2 for StarCoder and attention for CodeGen. We report the Pass@1 and Pass@10 scores over the Human Eval dataset (Chen et al., 2021) using the code-eval GitHub repo (Bacaj, 2023) in Table 1. We observe that StarCoder models can be low rank decomposed to 13.2B parameters (50% rank reduction) with no drop in Pass@1 performance and upto 12.3B parameters (62.5% rank reduction) with very little drop. CodeGen models shows similar trend in drop in Human Eval performance when measured in terms of rank reduction. However, in terms of parameter reduction count, while showing very little perplexity change with large reduction in rank (Fig. 2(a)), shows much more drop in its HumanEval score when measured in terms of parameter count reduction due to a higher aspect ratio of the matrix being decomposed. It should be noted that for certain compressed models, the Pass@1 even slightly improves over the base model. Similar trend of slight improvements from compression across various metrics and benchmarks has been observed in the case of other compression attempts (Frantar & Alistarh, 2023; Cerebras, 2022). Figure 3: Parameter Reduction vs perplexity for decomposition across various layers. ### Speedup from LoRD We next consider accessing the inference speedup (forward pass) of the models over the standard cuBLAS floating point kernels. We consider the standard Huggingface implementation (Wolf et al., 2020) of Starcoder with pytorch backend (Paszke et al., 2019) utilizing standard cuBLAS kernels on A100 GPUs. LoRD decomposed models were implemented by modifying just one line of code to replace an MLP with an extra linear layer 3. We benchmark over 1024 tokens and 512 tokens sequence, averaged across 10 runs with warm up of 3 runs. We plot relative time taken and model size across reduction in rank in Figure 4. Footnote 3: nn.Linear(in, out) -> nn.Sequential(nn.Linear(in, rank), nn.Linear(rank, out)) Inference speedups as high as 22.35% are observed for decomposed models. The lines in the graph are generally downward sloping, Therefore reduction in rank beyond 25% generally implies less inference time and reduction in model size. However, the underlying hardware (and pertaining software kernels) also significantly affect the speedup gains. We notice huge gains, whenever the rank is rounded off to a multiple of a very high power of 2 (like 4096 and 2560 at 33% and 58% rank reduction), despite very little reduction in model size. In contrast, for certain ranks which are multiples of a lesser power of 2 (like 3584 and 2304 at 41% and 62% rank reduction) are slower than those at slightly higher ranks. It is worth noting that affect of hardware inefficient matrix shape is less significant for longer tokens sequence of 1024 because the \(O(n^{2})\) attention overhead starts becoming more significant, especially in the absence of SoTA attention implementation techniques (Rabe and Staats, 2021; Dao et al., 2022; Dao, 2023) as in the case of Huggingface's implementations. ## 4 Combining LoRD with Quantization and LoRA ### Quantization While LoRD enables compression at same precision level, we study whether the decomposed models can be further compressing through quantization. Table 2 shows the HumanEval pass@1 results for the different LoRDCoder across 8 and 4 bit quantization levels, using the near lossless quantization technique of SpQR (Dettmers et al., 2023). 
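Before turning to those quantization results, the one-line substitution mentioned in footnote 3 can be illustrated with a small PyTorch helper that swaps a chosen linear layer for its factorized equivalent. Splitting the weights with a plain SVD here is our simplification (the paper's decomposition is data-aware), and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

def to_low_rank(linear: nn.Linear, r: int) -> nn.Sequential:
    """nn.Linear(in, out) -> nn.Sequential(nn.Linear(in, r), nn.Linear(r, out)), cf. footnote 3."""
    U, S, Vt = torch.linalg.svd(linear.weight.data, full_matrices=False)  # weight is (out, in)
    down = nn.Linear(linear.in_features, r, bias=False)
    up = nn.Linear(r, linear.out_features, bias=linear.bias is not None)
    down.weight.data = Vt[:r, :]                 # (r, in)
    up.weight.data = U[:, :r] * S[:r]            # (out, r)
    if linear.bias is not None:
        up.bias.data = linear.bias.data.clone()
    return nn.Sequential(down, up)

layer = nn.Linear(4096, 16384)                   # stand-in for an MLP projection layer
lord_layer = to_low_rank(layer, r=1024)
x = torch.randn(2, 4096)
print((layer(x) - lord_layer(x)).abs().max())    # truncation error of the factorized layer
```

Because the factorized module is an ordinary pair of dense linear layers, it runs on the stock cuBLAS kernels without any custom inference code.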
We observe that the LoRD models can be combined with quantization for further compression, showing no performance drop for 8-bit and very little performance drop on 4-bit quantization for most models. Slight increase in HumanEval after quantization is also observed, similar to Pangu-Coder2 (Shen et al., 2023). ### Parameter Efficient tuning of LoRD models \begin{table} \begin{tabular}{|c|c|c||c|c|c|c|c|} \hline \multicolumn{3}{|c||}{Starcoder 16B} & \multicolumn{3}{|c||}{CodeGen 16B Mono} \\ \hline Model Type & Rank & \multicolumn{3}{|c||}{HumanEval Score} & \multicolumn{3}{|c|}{Model Type} & \multicolumn{3}{|c|}{Rank} & \multicolumn{3}{|c|}{HumanEval Score} \\ \hline & \multicolumn{1}{c|}{Pars @ 1} & \multicolumn{1}{c|}{Pars @ 10} & \multicolumn{1}{c|}{Pars @ 10} & \multicolumn{1}{c|}{Pars @ 1} & \multicolumn{1}{c|}{Pars @ 1} & \multicolumn{1}{c|}{Pars @ 10} \\ \hline Base Model & 6144 & 31.67 & 48.28 & Base Model & 6144 & 29.02 & 46.34 \\ LoRDCoder 14.9B & 4480 & 33.18 & 48.41 & LoRDCoder 15.9B & 4480 & 29.08 & 46.95 \\ LoRDCoder 14.5B & 4096 & 31.69 & 45.12 & LoRDCoder 15.6B & 4096 & 28.90 & 46.24 \\ LoRDCoder 13.2B & 3584 & 30.90 & 47.56 & LoRDCoder 15.1B & 3584 & 28.54 & 45.73 \\ LoRDCoder 13.2B & 3072 & 31.57 & 45.36 & LoRDCoder 14.7B & 3072 & 27.99 & 43.29 \\ LoRDCoder 12.6B & 2560 & 29.84 & 42.31 & LoRDCoder 14.3B & 2560 & 27.32 & 45.12 \\ LoRDCoder 12.3B & 2304 & 29.22 & 40.12 & LoRDCoder 14.1B & 2304 & 27.07 & 41.46 \\ \hline \end{tabular} \end{table} Table 1: Human Eval Score of LoRD across StarCoder and CodeGen. Figure 4: Time and Model size of StarCoder 16B across ranks. We next test the potential for using LoRD to further reduce the memory usage over existing parmeter-efficient techniques. We consider the code instruction dataset (Chaudhary, 2023) and filter those examples that pertains to python programming language. We use QLoRA (Dettmers et al., 2023a), which is an even more memory efficient version of LoRA (Hu et al., 2022) storing the weights in quantized format, for fine-tuning for 1 epoch. We compare results from fine-tuning two of the decomposed models LoRDCoder 13.2B and LoRDCoder 12.3B model to the StarCoder model. We observe a HumanEval pass@1 of 37.80 and 37.62 across LoRDCoder 13.2B and LoRDCoder 12.3B fine-tuning, competitive to the performance of 37.74 offered by StarCoder model. ## 5 Related Work There is a growing interest in compressing pretrained Large Language Models. Several recent attempts have been dedicated to the quantization of weights of LLMs (Frantar et al., 2023; Lin et al., 2023; Yuan et al., 2023; Park et al., 2022; Kim et al., 2023b; Chee et al., 2023; Li et al., 2023a) with tricks such as outlier separation (Dettmers et al., 2022; Dettmers and Zettlemoyer, 2022; Dettmers et al., 2023c; Wei et al., 2022; Kim et al., 2023a; Lee et al., 2023). Some attempts also quantize the activations (intermediate representations) in addition to weights to speed up computation time (Shao et al., 2023; Xiao et al., 2023). The works in quantization that are closest to us is the LowRank Compensation (LoRC) Strategy (Yao et al., 2023; Wu et al., 2023), where the difference of the quantized matrix to the original matrix is approximated by a product of low-rank matrices. Our work decomposes the entire matrix for compression. Pruning neural networks Liang et al. (2021), unlike quantization, reduces the number of parameters in a model by removing unimportant weights or connections. 
Several techniques have been proposed to scale pruning methods for LLMs (Sun et al., 2023a; Frantar and Alistarh, 2023; Ma et al., 2023). However, pruning as a means of compression is yet to become viable due to no speedups over sparse matrices without significant performance drop at extreme levels of sparsity or structured sparsity (Zhu et al., 2023). With low-rank decomposition, we propose an alternate method for reducing model parameters that offer speedup even at a little reduction in parameter count. Certain works have also attempted to (Ren and Zhu, 2023; Li et al., 2023b) to split a dense matrix as a sum of low-rank matrices and a sparse matrix. However, these methods require retraining and have been shown to work only for Language Models of less than a billion parameters. Low rank decomposition has been proposed for smaller language models like Bert or GPT2 before using SVD decomposition (Ben Noach and Goldberg, 2020) and Kronecker decompositions (Tahaei et al., 2022; Edalati et al., 2022). Hsu et al. (2022) modified SVD to be data aware based on approximate second-order gradient information. A better weighted SVD was proposed by (Hua et al., 2022). Chen et al. (2021b) proposed a data aware decomposition method with a provably optimal closed-form solution, utilizing a large amount of data points over specific tasks to decompose. Sev \begin{table} \begin{tabular}{|l|l|l|l|} \hline Model & Pass@1@FP16 & Pass@1@8-bit & Pass@1@4-bit \\ \hline LoRDCoder 14.9B & 33.18 & 33.17 & 32.01 \\ LoRDCoder 14.5B & 31.69 & 31.58 & 32.74 \\ LoRDCoder 13.8B & 30.90 & 31.10 & 30.73 \\ LoRDCoder 13.2B & 31.57 & 31.52 & 32.01 \\ LoRDCoder 12.6B & 29.84 & 29.87 & 30.22 \\ LoRDCoder 12.3B & 29.22 & 29.14 & 29.45 \\ \hline \end{tabular} \end{table} Table 2: Human Eval score of quantized LoRDCoder models. eral recent works (Yu & Wu, 2023; Feng et al., 2022) have shown that while the weight matrix of neural networks is not inherently low-rank, the intermediate representations are, thus propose to decompose based on representations. All these works have focused on small language models and require re-training. We proposed low-rank decomposition for compressing neural networks without the need for retraining. The factorization has also been used just for the embedding layers (Baevski & Auli, 2019; Lan et al., 2020), as they are good candidates due to their very low aspect ratio of 0.015, where a reduction of rank by even 5% would lead to reduction in number of parameters after decomposition. There is also a growing interest in fine-tuning large language models Taori et al. (2023); Chiang et al. (2023); Wang et al. (2023b); Sun et al. (2023b). With the large memory requirements for fine-tuning full parameters of the LLM, the more parameter-efficient fine-tuning methods like LoRA (Hu et al., 2022) are getting widely adopted. These methods freeze the original LLM weights, and attach two low-rank matrices or adapters, in a skip-connection (He et al., 2016) to the linear layers of the model. These parameter-efficient fine-tuning approaches have seen improvements in lower activation memory (Zhang et al., 2023) or by keeping non-trainable model weights at 4-bit precision (Dettmers et al., 2023a). Our work, while focused on compression through low-rank decomposition, can also enable more efficient fine-tuning, especially in conjunction with existing methods. ## 6 Conclusion We studied the compression of monolingual code generation models through a novel one-shot compression paradigm of low-rank decomposition. 
We analyse the change in perplexity with change in rank across the model families of StarCoder and CodeGen as well as their individual layers and observe that the rank of these models can be reduced by upto 39.58% with less than 1% change in perplexity. We then proposed considerations for one-shot compressing these models through Low Rank Decomposition (LoRD) in under 10 minutes. Consequently, we compress StarCoder 16B to 13.2B with no drop in HumanEval pass@1 and very little drop in HumanEval pass@1 to 12.3B parameters. With a minimal change in code over huggingface's default inference code of just one line, we gain speedups of up to 22.35%. The LoRD models are also compatible with near lossless quantization techniques of SpQR, which offers gains of quantization based compression in addition to ones from decomposition. The LoRD models also reduce memory requirements by as much as 21.2% over vanilla QLoRA fine-tuning. ## 7 Broader Impact and Future Work Our work on LoRD, compresses code LLMs which enables them to run on smaller GPUs including as consumer grade GPUs. This is especially of pressing importance for the next few years when the shortage of GPU supply is relative to the increasing demand in today's market. Moreover, faster inference helps reduce the GPU cycles, enabling lower running costs and lower power consumption for LLM inference. Our work helps reduce the carbon emissions incurred and moves towards a greener NLP. Through compression, our work also promotes inference at the edge, and therefore opening room for applications involving strict privacy requirements. Lower latency will also help improve the User Experience in applications like CoPilots where lag between suggestions can impact developer's productivity. Several of these benefits of LoRD such as lower cost and energy consumption are also applicable for fine-tuning use cases of LLMs. Our work opens up a new paradigm for compression via Low Rank Decomposition over Large Language Models in a single shot without the need for retraining. Since, LoRD models can leverage existing floating point kernels across BLAS and cuBLAS, in contrast to quantization, these are much easier to implement and reap inference benefits. Our study on hardware considerations for speedup also opens up the potential for tuning the rank of decomposed models to fit best on the target hardware and the accompanying GEMM kernels. While our study is limited to monolingual code LLMs, the low rank decomposition technique is general and not specific to code domain. Thus exploring its applicability to more general purpose models like LLaMa is a promising direction for the compression of transformer LLMs beyond quantization. Another interesting unexplored question is whether the LoRA or QLoRA modules fine-tuned on original models, can be plugged in as-is for the LoRD models without any performance drop.
2309.04232
Seeding Contradiction: a fast method for generating full-coverage test suites
The regression test suite, a key resource for managing program evolution, needs to achieve 100% coverage, or very close, to be useful. Devising a test suite manually is unacceptably tedious, but existing automated methods are often inefficient. The method described in this article, ``Seeding Contradiction'', inserts incorrect instructions into every basic block of the program, enabling an SMT-based Hoare-style prover to generate a counterexample for every branch of the program and, from the collection of all such counterexamples, a test suite. The method is static, works fast, and achieves excellent coverage.
Li Huang, Bertrand Meyer, Manuel Oriol
2023-09-08T09:37:11Z
http://arxiv.org/abs/2309.04232v1
# Seeding Contradiction: a fast method for generating full-coverage test suites ###### Abstract The regression test suite, a key resource for managing program evolution, needs to achieve 100% coverage, or very close, to be useful. Devising a test suite manually is unacceptably tedious, but existing automated methods are often inefficient. The method described in this article, "Seeding Contradiction", inserts incorrect instructions into every basic block of the program, enabling an SMT-based Hoare-style prover to generate a counterexample for every branch of the program and, from the collection of all such counterexamples, a test suite. The method is static, works fast, and achieves excellent coverage. Keywords:Testing Coverage Software verification Eiffel _Draft of article to be presented at ICTSS 2023 (International Conference on Testing Software and Systems) in Bergamo, 18-20 September 2023._ ## 1 Overview In the modern theory and practice of software engineering, tests have gained a place of choice among the artifacts of software production, on an equal footing with code. One particularly important rule is that every deployed program should come accompanied with a _regression test suite_ achieving high branch coverage and making it possible to check, after any change to the software, that previous functionality still works: no "regression" has occurred. Producing a high-coverage regression test suite is a delicate and labor-intensive task. Tools exist (RANDOOP [23], Pex [25], AutoTest [4], Korat [7]) but they are typically _dynamic_, meaning that they require numerous executions of the code. The Seeding Contradiction (SC) method and supporting tools presented in this article typically achieve 100% coverage (excluding unreachable code, which they may help detect) and involve no execution of the code, ensuring very fast results. The principal insight of Seeding Contradiction is to exploit the power of modern program provers, which attempt to generate a counterexample of program correctness. In normal program proving, we hope that the prover will _not_ find such a counterexample: a proof follows from the demonstrated _inability_ to _disprove_ the program's correctness. Switching the focus from proofs to tests, we may look at counterexamples in a different way: as test cases. We may call this approach _Failed Proofs to Failing Tests_ or FP-FT. Previous research (including by some of the present authors) has exploited FP-FT in various ways [20][13][14]. Seeding Contradiction extends FP-FT to a new goal: generating a full-coverage test suite, by applying FP-FT to _seeded_ versions of the program in which a branch has on purpose been made incorrect. For every such variant, the prover generates a counterexample exercising the corresponding branch. Combining the result for all branches yields a high-coverage test suite. In fact coverage is normally 100%, with the following provisions: * Some branches may be unreachable. Then by definition no test could cover them; the tool may help identify such cases. (Terminology: we will use the term **exhaustive coverage** to mean 100% coverage of reachable branches.) * Limitations of the prover may prevent reaching 100%. In our examples so far such cases do not arise. The method involves no execution of the code and on examples tried so far produces a test suite much faster than dynamic techniques (section 5). 
The current setup involves the AutoProof [26][3] verification framework for contract-equipped Eiffel [19] code, relying internally on the Boogie proof system [18][5] and the Z3 SMT solver [11]. It is generalizable to other approaches. The discussion is organized as follows. Section 2 presents the approach by considering a small example. Section 3 examines the theoretical correctness of that approach. Section 4 describes the extent to which we have applied it so far, and section 5 assesses the results. Section 6 discusses limitations of the current state of the work and threats to validity of the evaluation results. Section 7 reviews related work and section 8 presents conclusions and future work.

## 2 The method

A simple code example will illustrate the essential idea behind Seeding Contradiction.

### Falsifying a code block

Consider a small routine consisting of a single conditional instruction: simple (a: INTEGER) do if a > 0 then x := 1 else x := 2 end end where x is an integer attribute of the enclosing class. In a Design-by-Contract approach intended to achieve correctness by construction, the routine might include the following postcondition part (with \(\Longrightarrow\) denoting implication): ensure a > 0 ==> x = 1; a <= 0 ==> x = 2. With or without the postcondition, how can we obtain a regression test suite that will exercise both branches? Various techniques exist, discussed in section 7 and generally requiring execution of the code. The Seeding Contradiction technique is, as noted, static (it does not involve executing the code); it assumes that we have a toolset for proving program correctness. Specifically, we rely on the AutoProof environment [26][3], with a tool stack presented in Fig. 1, in which the Boogie prover is itself based on an SMT solver, currently Z3.

Figure 1: AutoProof tool stack

A characteristic of this style of proof is that it relies on a _disproof_ of the _opposite_ property: the SMT solver tries to construct at least one counterexample, violating the desired result. If it cannot find one, the proof is successful. In this work, as in previous articles using the general FP-FT approach [13][14], we are interested in a proof that actually fails: then the counterexample can be useful on its own, yielding a directly usable test. In contrast with the earlier FP-FT work, the proof that will fail is not a proof of the actual program but of a modified version, into which we have inserted ("seeded") incorrect instructions. In the example, we change the first branch, so that the routine now reads simple (a: INTEGER) do if a > 0 then check False end -- This is the added instruction x := 1 -- The rest is unchanged. else x := 2 end end A "check C end" instruction (assert C in some other notations [17]) states that the programmer expects condition C to hold at the corresponding program point. Specifically, its semantics is the following, from both a dynamic perspective (what happens if it gets executed) and a static, proof-oriented perspective:
* From a dynamic viewpoint, executing the instruction means: if condition C has value True at that point, the check instruction has no effect other than evaluating C; if C evaluates to False and the programmer has enabled runtime assertion monitoring, as possible in EiffelStudio, execution produces a violated-assertion exception, usually implying that it terminates abnormally.
* In the present discussion's static approach, the goal is to prove the program correct. 
The semantics of the check instruction is that it is correct if and only if the condition C always has value True at the given program point. If the prover cannot establish that property, the proof fails. In a general FP-FT approach, the key property is that in the static view, if the proof fails, an SMT-based prover will generate a **counterexample**. In the Seeding Contradiction approach, C is False: the proof _always_ fails and we get a counterexample exercising the corresponding branch -- exactly what we need if, as part of a regression test suite, we want a test exercising the given branch. For the simple code seeded with a check False end, such a counterexample will, by construction, lead to execution of the first branch (a > 0) of the conditional. If we have an efficient mechanism to turn counterexamples into tests, as described in earlier work [13][14], we can get, out of this counterexample, a test of the original program which exercises the first branch of the conditional. Such a generated test enjoys several interesting properties: * It can be produced even in the absence of a formal specification (contract elements such as the postcondition above). * Unless the enclosing code (here the routine simple) is unreachable, the test can be produced whether the program is correct or incorrect. * If the program is correct, the test will pass and is useful as a regression test (which may fail in a later revision of the program that introduces a bug). * Generating it does not require any execution. * That generation process is fast in practice (section 5). The next sections will show how to generalize the just outlined idea to produce such tests not only for one branch as here but for _all_ branches of the program, as needed to obtain an exhaustive-coverage regression test suite. ### Block variables To generalize the approach, the following terminology is useful. So far it has been convenient to talk informally of "branches", but the more precise concept is **basic block**, defined in the testing and compilation literature as a sequence of instructions not containing conditionals or loops. (This definition is for a structured program with no branching instructions. In a more general approach, a basic block is any process node -- as opposed to decision nodes -- in the program's flowchart.) "Block" as used below is an abbreviation for "basic block". The method illustrated on the simple example generates a test guaranteed to exercise a specific block of a correct program: seed the program by adding to the chosen block one check False end instruction. Then, as seen in the example, we run the prover and apply the FP-FT scheme: since the program is now incorrect, the proof fails and the prover generates a counterexample, which we turn into a runnable test guaranteed to exercise the given block in the original program. To generalize this approach so that it will generate a test suite exercising all blocks, a straightforward idea is "_Multiple Seeded Programs_" (MSP): generate such a seeded program for each of its blocks in turn; then run the prover on every such program, in each case producing a counterexample and generating a test from it. Subject to conditions in section 3 below, the MSP approach is correct, in the sense that together the generated tests exercise all reachable blocks. It is, however, impractical: for a single original program, we would need to generate a possibly very large number of seeded programs, and run every one of them through the prover. 
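Before turning to the more practical single-program variant, the effect of a single seeded check False can be mimicked in Python for readers more used to mainstream assertion syntax; this is purely an analogy, since the actual method instruments the Eiffel/Boogie text and asks the prover, not the runtime, for a counterexample.

```python
def simple_seeded(a: int) -> int:
    x = 0
    if a > 0:
        assert False          # seeded contradiction: reaching this branch fails the "proof"
        x = 1
    else:
        x = 2
    return x

# Any input that triggers the seeded assertion is, by construction, a test
# exercising the first branch of the original routine (e.g. a = 1).
try:
    simple_seeded(1)
except AssertionError:
    print("counterexample found: a = 1 exercises the then-branch")
```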
To obtain a realistic process, we can instead generate a single seeded program, designed to produce the same counterexamples as would all the MSP-generated programs taken together. A helpful property of a good counterexample-based prover is that it can deal with a program containing several faults and generate a set of counterexamples, each addressing one of the faults. In the example above, we can submit to the prover a single seeded program of the form simple (a: INTEGER) do if a > 0 then check False end x := 1 -- Instruction 1 else check False end x := 2 -- Instruction 2 end which will produce two counterexamples, one for each branch. We call this approach "RSSP" (Repeatedly Seeded Single Program). With AutoProof, the FP-FT tools generate tests with a = 1 and a = 0. (More precisely, the prover initially generates larger and less intuitive values, but a minimization technique described in earlier work [14] produces 1 and 0.) This approach does not suffice for more complex examples. Assume that after the conditional instruction the routine simple includes another conditional: -- This code comes after the above conditional (Instructions 1-2) if a\({}^{2}>\) a then x := 3 -- Instruction 3 else x := 4 -- Instruction 4 end With the program seeded as above, even if we insert a check False end into each of the two new blocks (before Instructions 3 and 4), we will get tests covering only two cases (1-4, 2-4), not four (1-3, 1-4, 2-3, 2-4) as needed. These two tests, a = 1 and a = 0, fail to cover Instruction 3. The reason is that the prover does not generate specific tests for the branches of the second conditional (3-4) since it correctly determines that they are unreachable as both branches of the first conditional (1-2) now include a check False end. They were, however, both reachable in the original! The test suite fails to achieve exhaustive coverage. The solution to this "_Seeded Unreachability_" issue is to make the check themselves conditional. In the seeded program, for every routine under processing, such as simple, we may number every basic block, from 1 to some N, and add to the routine an argument \(bn\) (for "block number") with an associated precondition require \(bn\geq 0\) -- See below why 0 and not 1. \(bn\leq\) N To avoid changing the routine's interface (as the addition of an argument implies), we will instead make \(bn\) a local variable and add an initial instruction that assigns to \(bn\), non-deterministically, a value between 0 and N. Either way, we now use, as seeded instructions, no longer just check False end but if\(bn=i\) then check False end end where \(i\) is the number assigned to the block. In the example, the fully seeded routine body for the extended version of simple with two conditionals, is (choosing the option of making bn a local variable rather than an argument): ``` bn:="Valuechosennon-deterministicallybetween0andN" ifa>0then ifbn=1thencheckFalseendend x:=1--Instruction1 else ifbn=2thencheckFalseendend x:=2--Instruction2 end ifa^2>athen ifbn=3thencheckFalseendend x:=3--Instruction3 else ifbn=4thencheckFalseendend x:=4--Instruction4 end ``` As in the previous attempt, there are four incorrect check False instructions, but all are now reachable for \(bn\) values ranging from 1 to 4. The prover generates counterexamples exercising all the paths of the original program (with appropriately generated values for its original variables). 
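The same analogy extends to Conditional Seeding. In the Python sketch below, a brute-force search over the block number bn and the input a stands in for the prover, recovering one covering test per basic block of the two-conditional version of simple; again, this is only an illustration, as the actual tool seeds and proves the Boogie translation.

```python
def simple_seeded(a: int, bn: int) -> int:
    if a > 0:
        if bn == 1: assert False
        x = 1                      # Instruction 1
    else:
        if bn == 2: assert False
        x = 2                      # Instruction 2
    if a * a > a:
        if bn == 3: assert False
        x = 3                      # Instruction 3
    else:
        if bn == 4: assert False
        x = 4                      # Instruction 4
    return x

tests = {}                         # one covering input per block number
for bn in range(1, 5):
    for a in range(-5, 6):
        try:
            simple_seeded(a, bn)
        except AssertionError:
            tests[bn] = a          # first (bn, a) pair whose seeded check fails
            break
print(tests)   # e.g. {1: 1, 2: -5, 3: -5, 4: 0}: together these exercise all four branches
```

The concrete values differ from those the prover reports below, but any such assignment of one input per block yields an exhaustive-coverage suite for the original routine.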
In this case there is only one relevant variable, \(a\); AutoProof's prover generates, for the pair \([bn,a]\), the test values [1, 1], [2, 0], [3, -1], [4, 0]. These four tests provide 100% branch coverage for the program and can serve as a regression test suite. We call this technique **Conditional Seeding**; it addresses the Seeded Unreachability issue. As noted above, we accept for \(bn\) not only values between 1 and \(N\) (the number of basic blocks) but also 0. This convention has no bearing on test generation and coverage but ensures that the behavior of the original program remains pos sible in the seeded version: for \(bn\) = 0, none of the seeded check False will execute, so the program behaves exactly as the original. If the original was correct, the prover will not generate any counterexample for that value. ## 3 Correctness The goal of a test-suite-generation strategy is to produce high-coverage test suites. The Seeding Contradiction strategy is more ambitious: we consider it correct if it achieves **exhaustive coverage** (as defined in section 1: full coverage of reachable branches). More precisely, we will now prove that SC is "coverage-complete" if the prover is "reachability-sound", "correctness-sound" and "counterexample-complete". 3.1 defines these concepts and 3.2 has the proof. ### Definitions and assumptions Establishing the correctness of SC requires precise conventions and terminology. A general assumption is the availability of an "FP-FT" mechanism which, as described in previous articles [13], can produce directly executable tests (expressed in the target programming language, in our case Eiffel) from counterexamples produced by the SMT-based prover. As a consequence, the rest of this discussion does not distinguish between the notions of counterexample and test.1 Footnote 1: Counterexamples that the prover generates at first can use arbitrary values, sometimes too large to be meaningful to programmers; as noted in 2.2, a minimization strategy is available to produce more intuitive values. The SC technique and its analysis are independent of such choices of counterexamples. The definition of basic block, or just **block** for short, appeared earlier (2.2). For simplicity, we assume that the programs are **structured**, meaning that they use sequences, loops and conditionals as their only control structures. Also, we consider that a conditional always includes exactly one "else" part (possibly empty), and that a loop has two blocks, the loop body and an empty block (corresponding to the case of zero iterations). Further, expressions, particularly conditional expressions used in conditional instructions, are side-effect-free. Thanks to these conventions, instruction coverage (also known as statement coverage) and branch coverage are the same concept, called just "coverage" from now on. A (possibly empty) block of a program is **reachable** if at least one set of input values will cause it to be executed, and otherwise (if, regardless of the input, it cannot be executed) _unreachable_. Reachability is an undecidable property for any realistic programming language, but that need not bother us since this work relies on a prover of which we will only require that it be **reachability-sound**: if a block is reachable, the prover will indeed characterize it as reachable. 
(The prover might, the other way, wrongly characterize a block as reachable when in fact it is not: with if cos\({}^{2}\) (x) + sin\({}^{2}\) (x) = 100 then y := 0 else y := 1 end, the prover might consider y = 0 as a possible outcome if it does not have enough built-in knowledge about trigonometric functions. That too-conservative determination does not endanger the SC strategy.) A program may contain instructions of the form check C end, with no effect on execution (as previewed in section 2). Such an instruction is **correct** if and only if the condition C will hold on every execution of the instruction. This property is again undecidable, and again we only need the prover to be **correctness-sound**: if it tells us that an instruction is correct, it is. (We hope the other way around too, but do not require it.) For the SC strategy we are interested in the trivial case for which C is False. Also for simplicity, we assume that all correctness properties are expressed in the form of check instructions; in particular, we replace any contract elements (preconditions, postconditions, loop invariants and variants, class invariants) by such instructions added at the appropriate places in the program text. With this convention, a **block** is correct if all its check instructions are, and a **program** is correct if all its blocks are. For a normally written program, this definition means that the program is correct in the usual sense; in particular, if it has any contracts, it satisfies them, for example by having every routine ensure its postcondition. The SC strategy, by adding check False end to individual blocks, makes these blocks -- and hence the program as a whole -- incorrect. A **test suite** is a collection of test cases for a program. A test suite achieves **exhaustive coverage** if for every reachable block in the program at least one of its test cases causes that block to be executed. (Note the importance of having a reachability-sound prover: if it could wrongly mark some reachable blocks as unreachable, it could wrongly report exhaustive coverage, which is not acceptable. On the other hand, if it is reachability-sound, it may pessimistically report less-than-exhaustive coverage for a test suite whose coverage is in fact exhaustive, a disappointing but not lethal result. This case does not occur in our examples thanks to the high quality of the prover.) A test-suite-generation method (such as Seeding Contradiction) is **coverage-complete** if the generated test suite achieves exhaustive coverage for any correct program. In other words, for each reachable basic block of a correct program, at least one test in the suite will execute the block. Finally, consider a prover that can generate counterexamples for programs it cannot prove correct. The prover is **counterexample-complete** if it generates a counterexample for every block that it determines to be reachable and incorrect. With these conventions, the correctness of the Seeding Contradiction method is the property (proven next) that _If the prover is reachability-sound, correctness-sound and counterexample, SC is coverage-complete._ ### Proof of correctness To establish that correctness holds, on the basis of the preceding definitions, we first establish the following two lemmas: 1. Any test case of a seeded program (the program modified by addition of check instructions as described above) yields, by omitting the bn variable, a test case of the original program, exercising the same basic block. 2. 
Any reachable block of the original program is reachable in the seeded one. The proof of both lemmas follows from the observation that the seeded program has the same variables as the original except for the addition of the bn variable, which only appears in the conditional check instructions and hence does not affect the behavior of the program other than by possibly causing execution of one of these instructions in the corresponding block. If bn has value i in such an execution, the execution of all blocks other than the block numbered i (if any -- remember that we accept the value 0 for bn), in particular the execution of any block in an execution path _preceding_ the possible execution of block i, proceeds exactly as in the original unseeded program. As a result: * Any test executing block number i in the seeded program for any i has, for all other variables (those of the original program), values that cause execution of block i in the original program too, yielding Lemma 1. * Consider a reachable block, numbered i, of the original program. Since it is reachable, there exists a variable assignment, for the variables of the original program, that causes its execution. That variable assignment complemented by bn = i causes execution of block i in the seeded program, which is therefore reachable, yielding Lemma 2. To prove that SC satisfies the definition of correctness (given at the end of 3.1): * Assume that the original program is correct; then the only incorrect instructions in the seeded program are the added conditional check instructions (the if C then check False end at the beginning of every block). * Consider an arbitrary reachable basic block B, of the original program. Because of Lemma 2, it is also reachable in the seeded program. * If the prover is reachability-sound, it indeed determines that block B is (in the seeded program) reachable. * If the prover is also correctness-sound,it determines that B's seeded check instruction is incorrect, and hence (by definition) that B itself is incorrect. * Then if it is counter-example-complete it will generate a counterexample that executes B in the seeded program. * By Lemma 1, that counterexample yields a test that executes block B in the original program. * As a consequence, by the definition of correctness above, the Seeding Contradiction strategy is correct. ### Correctness in practice To determine that SC as implemented is correct, we depend on properties of the prover: the definition assumes that the prover is reachability-sound, correctness-sound and counterexample-complete. To our knowledge, no formal specification exists for the relevant tools in our actual tool stack (Fig. 1), particularly Z3 and Boogie. In their actual behavior as observed pragmatically, however, the tools satisfy the required properties. ## 4 Implementation We have implemented Seeding Contradiction strategy in the form of a new option of the AutoProof program-proving framework, called "Full-coverage Test Generation" (FTG) 1. The implementation relies on the FP-FT [13][14] feature of AutoProof, which enables automatic generation of failed tests from failed proofs. The objective is to add the incorrect check instructions at the appropriate program locations so that the verification of the seeded program results in proof failures, yielding an exhaustive-coverage test suite as described above. Footnote 1: AutoProof including the FTG option is available for download at github.com/huangl223/ES-AP-Installation. 
Like the rest of AutoProof, seeding is modular: routine by routine. It is applied at the Boogie level, so that the Eiffel program remains untouched. The Boogie equivalent of the check instruction is written assert. Depending on the structure of the code for a routine r, five cases arise, reviewed now. **A - Plain Block**. If the body of r includes no conditional and hence has only one path, the SC strategy inserts a single assert false at the beginning of the body. Verification of r results in failure of the assertion; by applying FP-FT, we obtain a valid test case of r (whose test input satisfies the precondition). **B - Implicit else branch**. If r contains a conditional whose else branch is implicit, SC makes it explicit and produces a test case covering the branch. Fig. 2 shows an example: SC inserts two assert clauses, one in the then branch and the other in the else branch that it creates. Running the proof produces two counterexamples for the two injected assert clauses, hence two tests. **C - Cascading Branches**. If r has a series of branches placed sequentially, as in Fig. 3, the SC algorithm inserts an assert false clause in each branch. The resulting tests cover all branches. **D - Nested branches**. When conditionals are nested, SC only generates tests targeting the _leaf_ branches -- those with no embedded conditionals. This approach is sound since any program execution that exercises a leaf branch must also go through all the branches leading to it. Fig. 4 has three leaf branches for Figure 2: Instrumentation for r with implicit else branch. Left: original Eiffel code of r. Right, seeded Boogie code. \(\mathtt{B}_{i}\) (\(i\in\{0,\,1,\,2\}\)) is a basic block in Eiffel, \(\mathtt{c}\) a branch predicate evaluating to true or false, \(\mathcal{T}(\mathtt{B}_{i})\) the Boogie translation of \(\mathtt{B}_{i}\). blocks B2, B3 and B5. Any execution going through B2 and B3 will exercise B1; SC only inserts assert instructions for leaves (none for B1). **E - Sequential decisions**. If r has multiple successive decision instructions, as in Fig. 5, SC inserts the conditional assert false instructions as explained in 2.2. It declares a variable bn for the block number and adds "if (\(bn==i\)) assert false;". Since the value of bn is between 0 and N (number of target blocks), it adds a clause "requires bn\(\geq\)0 && bn\(\leq\)N" to the precondition of r. ## 5 Evaluation and comparison with dynamic techniques We performed a performance evaluation of Seeding Contradiction as implemented in AutoProof per the preceding section, comparing it to two existing test generation tools: IntelliTest [25] (previously known as Pex, a symbolic execution test-generation tool for.NET) and AutoTest [4], a test generation tool for Eiffel using Adaptive Random Testing, specifically ARTOO [10]). Figure 4: Instrumentation for nested branches Figure 3: Instrumentation for cascading branches: three assert false clauses are inserted for the three branches in r; note that the elseif instruction in Eiffel, together with the last else instruction, is mapped to a nested if –else instruction in Boogie. ### Comparison criteria and overview of the results The experiment applies all three tools to generate tests for 20 programs adapted from examples in the AutoProof tutorial5 and benchmarks of previous software verification competitions [27][6][15]. Table 1 lists their characteristics, including implementation size (number of Lines Of Code) and number of branches. 
Footnote 5: [http://autoproof.sit.org/autoproof/tutorial](http://autoproof.sit.org/autoproof/tutorial) The comparison addresses three metrics: coverage; time needed to generate the tests; size of the test suite. All code and results are available at [https://github.com/huangl223/ICTSS2023](https://github.com/huangl223/ICTSS2023). The examples are originally in Eiffel; we translated them manually into C# for IntelliTest. The experiment includes a test generation session for every example in every tool. For AutoTest, whose algorithms keeps generating tests until a preset time limit, it uses 10 minutes (600 seconds) as that limit; there is no time limit for the other two approaches. All sessions took place on a machine with a 2.1 GHz Intel 12-Core processor and 32 GB of memory, running Windows 11 and Microsoft.NET 7.0.203. Versions used are: EiffelStudio 22.05 (used through AutoProof and AutoTest); Boogie 2.11.10; Z3 solver 4.8.14; Visual Studio 2022 (integrated with IntelliTest). Table 2 shows an overview of the results. SC and IntelliTest handle the examples well, with coverage close to 100%; SC reaches exhaustive coverage (100% coverage of reachable branches) for all 20 examples and IntelliTest for 19 examples. AutoTest, due to its random core, achieves the lowest coverage, reaching exhaustive coverage for only 7 examples. \begin{table} \begin{tabular}{l l l l l l l l l l l} \hline \hline \multicolumn{3}{c}{Account Clock Heater Lamp} & \multicolumn{1}{l}{Max Linear} & \multicolumn{3}{c}{Insertion Gnome} & \multicolumn{1}{l}{Square} & \multicolumn{1}{l}{Sum and Arithmetic} \\ & & & & & Search & Sort & Sort & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \\ \hline LOC & 214 & 153 & 102 & 95 & 49 & 64 & 122 & 62 & 56 & 56 & 204 \\ Branches & 14 & 10 & 8 & 8 & 3 & 5 & 5 & 5 & 5 & 4 & 14 \\ \hline \hline Binary & Recursive & Dutch & Two & way & Two & way Quick & Selection & Bubble & Optimized & Total \\ search & binary search & flag & max & sort & sort & Sort & Sort & Sort & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \\ 74 & 89 & & 188 & 49 & 85 & 232 & 167 & 165 & 183 & **2409** \\ 5 & 7 & & 11 & 4 & 6 & 9 & 5 & 5 & 8 & **141** \\ \hline \hline \end{tabular} \end{table} Table 1: **Examples** Figure 5: Instrumentation for sequential conditionals To reach exhaustive coverage, SC performs significantly faster than the other two: it needs less than 0.5 seconds on average -- about 50 times less than IntellTest and 500 times than AutoTest. SC also generates the smallest test suite; the average size of the exhaustive-coverage test suite from IntelliTest is slightly larger than SC, and both are much smaller than AutoTest. The importance of minimizing the size of test suites has become a crucial concern [22]. ### Detailed results Table 3 shows coverage results. For each example, we executed the generated test suite and calculated coverage as the ratio of _number of exercised branches_ over _number of branches_. SC always reaches exhaustive coverage (the maximum possible for Lamp is 87.5% as it contains an unreachable branch). IntelliTest reaches exhaustive coverage for most examples but misses it for Account and Lamp. AutoTest's coverage varies from 50% to 100%. Occasionally, it performs better than IntelliTest, reaching the maximum 87.5% for Lamp against IntelliTest's 50%. 
Table 4 gives the time needed to produce the test suite in the various approaches, using the following conventions: * For SC, time for test generation includes two parts: proof time (for AutoProof) and time for extracting tests from failed proofs (time for FP-FT). * For AutoTest, the time is always the 10-minute timeout, chosen from experience: within that time, test generation of examples usually reaches a plateau. * IntelliTest does not directly provide time information. We measure duration manually by recording the the timestamps of session start and termination. In Table 4 results, SC is the fastest of the three, with all its test generation runs taking less than 1 second. For IntelliTest, test generation takes less than \begin{table} \begin{tabular}{c c c c} \hline Metrics & SC & IntelliTest & AutoTest \\ \hline Avg. branch coverage & 99.37\% & 97.15\% & 81.2\% \\ Number of examples reaching exhaustive coverage & 20 & 19 & 7 \\ Avg. time for reaching exhaustive coverage (s) & 0.487 & 27 & 259 \\ Avg. number of generated tests for reaching exhaustive coverage & 6.26 & 10.47 & 623.28 \\ \hline \end{tabular} \end{table} Table 2: **Overall result** \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline & Account & Clock & Heater & Lamp & Max & Linear & Insertion & Gnome & Square & Sum and max \\ & search & binary search & flag & max & sort & sort & Sort & Sort & \\ \hline 100\% & 100\% & 100\% & 100\% & 87.5\% & 100\% & 100\% & 100\% & 100\% & 100\% \\ IntelliTest & 92.85\% & 100 \% & 100\% & 50\% & 100\% & 100\% & 100\% & 100\% & 100\% \\ AutoTest & 78.6\% & 70\% & 62.5\% & 87.5\% & 66.7\% & 100\% & 80\% & 60\% & 100\% & 100\% \\ \hline Arithmetic & Binary & Recursive & Dutch & Two way & Two way Quick & Selection Bubble & Optimized \\ & search & binary search & flag & max & sort & sort & Sort & Sort & gnome sort \\ \hline 100\% & 100\% & 100\% & 100\% & 100\% & 100\% & 100\% & 100\% & 100\% & 100\% \\ 100\% & 100\% & 100\% & 100\% & 100\% & 100\% & 100\% & 100\% & 100\% & 100\% \\ 100\% & 100\% & 85.7 \% & 72.7 \% & 75\% & 83.3\% & 100\% & 80\% & 80\% & 50\% \\ \hline \hline \end{tabular} \end{table} Table 3: **Result: branch coverage** 40 seconds for most examples, but three of them out of 20 require more than one minute. For AutoTest, test generation time varies from 1.71 seconds for Square root to more than 20 minutes for Sum and max. Another important criterion, when a tool covers all the branches of a program, is how many redundant tests it produces. Table 5 presents the sizes of the generated test suites of the three tools when reaching exhaustive coverage. From a software engineering viewpoint, particularly for the long-term health of a project, a smaller size achieving the same coverage is better, since it results in a more manageable test suite giving the project the same benefits as a larger one. Among the three tools, SC generates the fewest tests. In most cases, the number of tests is the same as the number of blocks: as each generated test results from a proof failure of an incorrect instruction, seeded at one program location, each test covers just the corresponding block and introduces no redundancy. If nested branches are present, the size of the test suite can actually be less than the number of branches: SC only generates tests targeting the innermost branches (the leaf nodes of the control structure), as explained in section 4; each test going through these branches automatically covers all its enclosing branches. 
Intellitest also generates small test suites, but is slower. The reason is Intellitest's use of concolic testing [24], which tests all feasible execution paths: since a a branch can occur in several paths, a test will often identify a branch that was already covered by a different path. AutoTest, for its part, produces much larger test suites: as an Adaptive Random Testing tool, it often generates multiple test cases covering the same branches. Tables 2 to 5 provide evidence of the benefits of the approach (subject to the limitations examined in the next section): SC is fast and efficient; it uses less than 1 second to produce an exhaustive-coverage test suite with the fewest number of test cases. Other observations: * AutoTest does not guarantee that the test inputs satisfy the routine's precondition, while SC and IntelliTest always generate precondition-satisfying test inputs. The reason is that SC and IntelliTest rely on the results of constraint solving, where the routine's precondition is encoded as an assumption and will always be satisfied. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & Account & Clock & Heater & Lamp & Max Linear & Insertion & Gnome & Square & Sum and max \\ & & & & & Search & Sort & Sort & root & \\ \hline SC & 0.56 & 0.44 & 0.85 & 0.39 & 0.37 & 0.36 & 0.42 & 0.52 & 0.26 & 0.37 \\ IntelliTest & 9.58 & 7.44 & 8.06 & – & 8.19 & 9.63 & 11.77 & 10.89 & 12.86 & 10.99 \\ AutoTest & – & – & – & 233.03 & – & 21.95 & – & – & 1.71 & 1322.61 \\ \hline \hline Arithmetic & Binary & Recursive & Dutch & Two way & Two way Quick & Selection & Bubble & Optimized \\ & search & binary search & flag & max & sort & sort & Sort & Sort & gnome sort \\ \hline 0.415 & 0.44 & 0.48 & 0.43 & 0.52 & 0.39 & 0.90 & 0.50 & 0.59 & 0.54 \\ 32.98 & 99.29 & 13.07 & & 31.36 & 9.59 & 80.91 & 111.57 & 17.81 & 14.74 & 12.32 \\ 14.49 & 150.86 & – & & 330.89 – & – & 78.37 & – & – & – \\ \hline \hline \end{tabular} \end{table} Table 4: **Result: time (in seconds) to reach maximum coverage** * The SC approach is has a prerequisite: the program under test has to be proved correct (the proof of the original program has no failure), while AutoTest and IntelliTest have no such constraint. * As to the values of the generated test inputs, IntelliTest and AutoTest always apply small values that are easy to understand. SC initially produces test inputs that may contain large values; its "minimization" mechanism [14] corrects the problem. ## 6 Limitations and threats to validity The setup of the SC approach assumes a Hoare-style verification framework (of which Boogie is but one example), and the availability of a test generation mechanism that supports generating test cases from proof failures. We have not studied the possible application of the ideas to different verification frameworks, based for example on abstract interpretation or model checking. The current version of SC is subject to the following limitations: * SC is not able to handle programs with non-linear computations (such as derivation and exponentiation); this restriction comes from the underlying SMT solver. * SC does not support the more advanced parts of the Eiffel system, in particular generic classes. Data structures are limited to arrays and sequences. These limitations will need to be removed for SC to be applicable to industrial-grade programs. 
The following considerations may influence the generalization of the results achieved so far: * The number of repeated experiments increased the potential threats to internal validity. We hope that further experiments with large number of iterations will provide more conclusive evidence. * Although a few of the examples classes that we processed so far are complex and sophisticated, most are of a small size and not necessarily representative of industrial-grade object-oriented programs. In the future, we intend to use the EiffelBase library6, which has yielded extensive, representative results in the \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Account Clock Heater Lamp Max Linear Search & \begin{tabular}{c} Insertion Gnome \\ Sort \\ \end{tabular} } & \multicolumn{3}{c}{\begin{tabular}{c} Square \\ Sort \\ \end{tabular} } & Sum and max \\ \hline SC & 13 & 10 & 8 & 7 & 3 & 3 & 3 & 4 & 3 \\ IntelliTest & 13 & 13 & 8 & – & 4 & 7 & 5 & 7 & 5 & 5 \\ AutoTest & – & – & – & 656 & – & 127 & – & – & 18 & 1784 \\ \hline \hline Arithmetic Binary search & \begin{tabular}{c} Recursive \\ binary search \\ \end{tabular} & \begin{tabular}{c} Dutch \\ binary search \\ \end{tabular} & \begin{tabular}{c} Two way \\ flag \\ \end{tabular} & \begin{tabular}{c} Two way \\ flag \\ \end{tabular} & \begin{tabular}{c} Two way \\ sort \\ \end{tabular} & \begin{tabular}{c} Selection \\ sort \\ \end{tabular} & \begin{tabular}{c} Bubble \\ Sort \\ \end{tabular} & \begin{tabular}{c} Optimized \\ genome sort \\ \end{tabular} \\ 14 & 4 & 7 & 9 & 2 & 5 & 9 & 5 & 4 & 7 \\ 25 & 6 & 15 & 27 & 4 & 9 & 18 & 12 & 8 & 8 \\ 531 & 905 & – & – & – & – & 342 & – & – & – \\ \hline \hline \end{tabular} \end{table} Table 5: **Result: number of generated tests to reach exhaustive coverage** evaluation of AutoProof and AutoTest, and exhibits considerable variety and complexity in terms of size (according to various metrics), richness of program semantics, and sophistication of algorithms and software architecture. ## 7 Related work Previous work has taken advantage of counterexamples generated by failing proofs, but for other purposes, in particular automatic program repair [21] and generation of failing tests [20][13]. These techniques work on the original program and not, as here, on a transformed program in which _incorrect_ instructions have been inserted with the express purpose of making the proof fail. The earliest work we know to have applied this idea [1][2] generates tests for low-level C programs using Bounded Model Checking (BMC) [16], producing test suites with exhaustive branch coverage. A more recent variant, for Java bytecode, is JBMC [8]. In contrast with SC, each verification run only activates one assertion at a time, producing one counterexample. This approach is conceptually similar, in the terminology of the present work (2.2), to the "MSP" (Multiple Seeded Programs) technique, although the C version [1] uses compile-time macros, one for each block, to avoid the actual generation of multiple programs. In contrast, the present work uses RSSP (Repeatedly Seeded Single Program), relying on a single _run-time_ variable representing the block number. BMC-based approaches rely on the correctness of the _bound_ of the execution trace: if the bound is not set correctly, some branches might not be covered, requiring more verification runs to obtain a better bound. 
Other techniques that apply constraint solving for generating inputs includes test generations based on symbolic execution, such as Pex/IntelliTest [25], KLEE [9], PathCrawler[28]. None of the strategies proposed guarantees exhaustive branch coverage; they can achieve it when a systematic test generation strategy, rather than one based on heuristics or randomization, is applied. A very recent development (published just as the present work was being submitted) is DTest, a toolkit [12] for generating unit tests for Dafny programs, applying ideas similar to those of SC. As the generated Dafny tests are not directly executable, test generation requires transformation of Dafny programs and tests into a mainstream language. In contrast, the present approach works directly on Eiffel programs. The DTest coverage results cited in the referenced article are 100% on only 2 of its examples, and go down to as low as 58% on the others. One should not draw definite conclusions from these figures, since the examples are different, their program sizes too (more precisely, most of the examples are of comparable sizes, but the cited work has three between 1100 and 1900 LOCs, which we have not handled yet), and the article does not mention any presence of unreachable code (which makes it impossible to distinguish between full coverage and exhaustive coverage). It should be noted, however, that the article also makes no mention of the "Seeded Unreachability" issue discussed in section 2.2; in fact, it states that "_DTest enters a loop where it systematically injects trivially failing trap assertions (meaning assert false)_", a technique which generally leads, for any program with a non-trivial control structure, to Seeded Unreachability and hence to decreased coverage. That omission may be the reason for the relatively low coverage results reported in the article. The Conditional Seeding technique of SC, introduced by the present work, addresses Seeded Unreachability and has made it possible to reach exhaustive coverage in all examples so far. In addition, to obtain small test suites, DTest seems to require a separate minimization strategy, which takes from 8 to 1860 seconds on the cited examples, far beyond the times of running SC. In discussing minimization, the authors appear to come close to recognizing the Seeded Unreachability issue, without using the Conditional Seeding technique, when they write that "_we determine the feasibility of a path via a query to the SMT solver, in which a trap assertion is added that fails only if all the blocks along the path are visited_", a technique that is "_exponential in the number of SMT queries (running on all benchmarks_ [cited in the article] _would take weeks_)_". SC does not appear to need any such technique. ## 8 Conclusions and future work The approach presented here, Seeding Contradiction (SC), automatically generates test suites that achieve exhaustive branch coverage very fast. The presentation of the approach comes with a proof of correctness, defined as the guarantee that the generated test suite achieves exhaustive coverage (full coverage of reachable branches). While technical limitations remain, the evaluation so far demonstrates the effectiveness and efficiency of the SC approach through the comparison with two existing test generators IntelliTest and AutoTest, in terms of achieved coverage, generation time, and size of the test suite. 
Ongoing work includes handling larger examples, processing entire classes instead of single routines, providing a mechanism to generate tests covering branches that a given test suite fails to cover, and taking advantage of the SC strategy to identify dead code. **Acknowledgements**. We are particularly grateful, for their extensive and patient help, to Yi Wei (AutoTest) and Jocelyn Fiat (EiffelStudio and AutoProof). The paper benefitted from perceptive comments by the anonymous referees on the original version.
2309.10247
Measuring The Soft Excess Region Size Relative to the Corona in AGN With NICER
The soft excess is a significant emission component in the soft (<1 keV) X-ray spectra of many AGN. It has been explained by disk reflection, a warm corona and other models. Understanding its origin is crucial for the energy budget of AGN emission, and for using it to study the inner accretion disk. Here, we track the weeks-to-months variability of several AGN that show different levels of soft excess strength with NICER. We use the variability time scales to compare the relative size of the soft excess emission region to the corona producing the hard X-ray emission above 1 keV. We find that the size of the soft excess emission region relative to the corona is not the same for the three sources studied. For TON S180, the soft excess region is comparable in size to the hard corona, while for MRK 335 and 1H0707-495 it is larger than the corona by a factor of 2-4. This is the first time the relative sizes are quantified independently of the assumptions of the spectral models.
A. Zoghbi, J. M. Miller
2023-09-19T01:54:20Z
http://arxiv.org/abs/2309.10247v1
# Measuring The Soft Excess Region Size Relative to the Corona in AGN With NICER ###### Abstract The soft excess is a significant emission component in the Soft (\(<1\) keV) X-ray spectra of many AGN. It has been explained by disk reflection, a warm corona and other models. Understanding its origin is crucial for the energy budget of AGN emission, and for using it to study the inner accretion disk. Here, we track the weeks-to-months variability of several AGN that show different levels of soft excess strength with _NICER_. We use the variability time scales to compare the relative size of the soft excess emission region to the corona producing the hard X-ray emission above 1 keV. We find that the size of the soft excess emission region relative to the corona is not the same for the three sources studied. For TON S180, the soft excess region is comparable in size to the hard corona. While for MRK 335 and 1H0707-495, the soft excess region is larger than the corona by a factor of 2-4. This is the first time the relative sizes are quantified independently of the assumptions of the spectral models. X-ray active galactic nuclei (2035), Active galactic nuclei (16), Seyfert galaxies (1447), Supermassive black holes (1663), Black hole physics (159) 0000-0002-4002]A. Zoghbi 0000-0002-4882-7888]J. M. Miller 0000-0002-4882-7888]A. Zoghbi ## 1 Introduction A puzzling feature the X-ray spectra of AGN is the so-called soft excess. This is a strong excess of emission that is observed above the extrapolation of the power-law from the hot corona to energies below 1 keV (e.g. Walter & Fink, 1993; Bianchi et al., 2009). The origin of this feature is still debated (Garcia et al., 2019; Petrucci et al., 2020). It is featureless, with a shape that can generally be described by a blackbody with a temperature of \(\sim\)0.5 keV (Gierlinski & Done, 2004). Early models attributed the soft excess to the tail of the disk emission Leighly (1999) or to a smeared blend absorption lines (Gierlinski & Done, 2004). In recent years, two models are often discussed in the literature. In the first, the excess is produced by the sum of recombination lines and bremsstrahlung from the heated surface of the disk that is illuminated by the hot corona. When produced close enough to the black hole, relativistic effects smear and broaden the emission lines so it appears featureless (Crummy et al., 2006; Walton et al., 2013; Garcia et al., 2019). Reverberation time lags observed in many sources are a natural consequence of this model (Zoghbi et al., 2010; De Marco et al., 2013). In the second model, the soft excess is produced by Comptonization of thermal disk photons by a warm (\(\sim 1\) keV) and optically thick (\(\tau\sim 10-20\)) layer of gas at the surface of the disk (Magdziarz et al., 1998; Czerny et al., 2003; Done et al., 2012; Petrucci et al., 2013). The warm corona is distinct from the hot corona producing the primary hard X-ray emission, but they must be heated by accretion power. Both these models describe the observed time average spectra equally well, with the main discussion focusing on whether they are physically consistent (Garcia et al., 2019; Petrucci et al., 2020; Ballantyne & Xiang, 2020). These models predict different sizes for the soft excess region. In this work, we report on a monitoring experiment with _NICER_ that tracks the variability of the soft excess in several sources, and use the variability to infer the size of the soft excess region relative to the corona that emits at hard energies (\(>2\) keV). 
The combination of the large effective area and monitoring capability of _NICER_ allows for this experiment to be conducted for the first time.

## 2 Analysis

### Data

The long term variability of 4 Narrow Line Seyfert 1 galaxies is presented. Two of them were part of the original _NICER_ proposal (TON S180 and PG 1404+226). Two others (1H0707-495 and MRK 335) that had public data on the _NICER_ archive were also included. Our analysis includes all public _NICER_ observations available as of Dec 2022. We use analysis tools available in heasoft v6.31.1. Cleaned event files were generated from the unfiltered data using nicerl2. The spectra and the corresponding response and area files were then generated using nicerl3-spect. Background spectra were generated for every observation using all three background estimators. We find that the Scorpeon and the 3C50 models were consistent, while the space weather model over-estimates the background. We report the results from using the Scorpeon model; using the 3C50 model gives similar results. The net exposures of the resulting spectra are typically in the range of 0.6-3 ks. 1H0707-495 had a handful of observations with exposures \(>10\) ks. The number of observations for each source is: 20 (1H0707-495), 80 (MRK 335), 29 (TON S180), and 26 (PG 1404+226). To visualize the spectral shape, Figure 1 shows the combined spectra for each of the 4 objects, produced by combining the individual spectra using the addspec tool. These are \(EF(E)\) spectra that have been unfolded against a constant model to factor out the effective area of the detector. The fact that the instrument response is averaged over many months when combining the spectra results in some spurious instrument features around 2 keV. These do not affect our analysis because we are interested in the flux from broad spectral components. Figure 2 shows the total (0.3-10 keV) count rate light curves. Figure 1 shows that the 4 objects span a range in both flux and strength of the soft excess component. Figure 2 shows that our targeted program for TON S180 and PG 1404+226 spans about a year with roughly uniform sampling, compared to 1H0707-495. MRK 335 has the largest number of observations.

Figure 1: Combined NICER spectra from all observations of each source. These are \(EF(E)\) spectra that have been unfolded against a constant model to factor out the effective area of the detector. Note that this unfolding leaves some instrumental features around 2 keV.

In order to characterize the long term variability of the soft excess relative to the hard component from the corona dominating above 2 keV, we model the spectra from the individual observations with a model consisting of a power-law and a disk blackbody component. The power-law models the spectrum of the corona, dominating above 2 keV, while the disk blackbody models the soft excess. This model is used because, first, we are only interested in measuring the fluxes of the two components, and second, the 0.8-3 keV band does not allow for more complex models. We find that this modeling is sufficient for the purpose of measuring the fluxes needed in this work. Using other models for the soft excess has little effect on the final results. We note that although PG 1404+226 shows a strong soft excess relative to the hard component (Figure 1), the latter is too faint to enable spectral modeling of the individual spectra above 2 keV. We therefore drop it from subsequent analysis that compares the variability of the soft excess component to the hard corona.
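As an illustration of how the 0.3-10 keV flux of each model component can be obtained from a set of best-fit parameters, the following is a minimal sketch. The parameter values are made up for illustration, a single-temperature blackbody stands in for the disk blackbody, and the real analysis of course fits the model folded through the NICER response rather than in this idealized form.

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical best-fit parameters for one observation (illustrative only).
gamma, K_pl = 2.3, 1e-2    # power-law photon index and normalization
kT, K_bb = 0.12, 5e-3      # blackbody temperature [keV] and normalization

def powerlaw(E):           # coronal component, photons/s/cm^2/keV
    return K_pl * E ** (-gamma)

def blackbody(E):          # single-temperature stand-in for the disk blackbody
    return K_bb * E ** 2 / np.expm1(E / kT)

# Energy flux of each component over 0.3-10 keV; repeating this per observation
# gives the blackbody and power-law flux light curves used below.
flux_pl, _ = quad(lambda E: E * powerlaw(E), 0.3, 10.0)
flux_bb, _ = quad(lambda E: E * blackbody(E), 0.3, 10.0)
print(flux_pl, flux_bb)
```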
For some observations of 1H0707-495 and MRK 335, the signal above 2 keV was only high enough to constrain the flux of the power-law component, but not the spectral index. So for these two sources, we fix the spectral index of the power-law to the value measured from fitting the total spectrum with a similar model. For TON S180, the signal is high enough to allow for both the flux and spectral index to be measured in the individual spectra. Figure 2: The count rate light curves in the 0.3–10 keV band from all observations. The x-axes is in units of modified Julian date, and they all span the same \(\sim 580\) days. After the spectral modeling of each observation, we obtain a light curve for the 0.3-10 keV fluxes for the blackbody (\(F_{bb}\)) and power-law \(F_{p}\) components, the blackbody temperature, and for the case of TON S180, for the photon index too. We then proceed by calculating and modeling the power spectral density (PSD) from these light curves. ### Power Spectra The light curves are not evenly-sampled. We use the likelihood estimation method from Zoghbi et al. (2013). We specifically use the fqlag package v0.3.4 (Zoghbi, 2023). We first estimate the power values at 10 logarithmically-spaced frequency bins in the observed frequency range. The lowest and highest frequency limit are \(1/T\) and \(0.5/\Delta T_{\rm min}\), where \(T\) is the length of the observing campaign, and \(\Delta T_{\rm min}\) is the minimum time separation between neighboring observations. Also, as recommended by package documentation, we include two buffer frequency bins at both end of these limits to minimize biases, which are ignored when reporting the PSD. Because the frequency bins may not be large enough to ensure the errors in the measured PSD values are always Gaussian, and in order to allow for subsequent fitting of the PSD, we do not just use a single value estimate of the PSD (e.g. median and a standard deviation), we instead run Monte Carlo Markov Chains (MCMC) and empirically measure the probability density of the PSD values at every frequency bin. These probability density estimates are then approximated by a flexible standard probability density function (PDF). After trying several general functions, we found that the Johnson SB distribution provides an excellent approximation to the MCMC distributions. All the values were individually inspected to ensure the approximation is adequate. To characterize the measured PSDs, we fit them with three models: a power-law, a bending power-law and zero centered Lorentzian. The bending power-law has an index that smoothly changes between two values at some break frequency (McHardy et al., 2004). We tested fixing the lower index at both 0 and 1. Neither of them provided a significant improvement over the other two models (power-law and zero-centered Lorentzian). In the end, we found that the power-law model was sufficient to describe the PSD of both \(F_{p}\) and \(F_{bb}\) for both MRK 335 and TON S180, while the zero-centered Lorentzian provided a significantly better fit for 1H0707-495. In other words, a frequency break is significantly detected only on 1H0707-495. A plot of the PSD shapes in the three cases are shown in the top panel of Figure 3. We note that if we use the relation between black hole mass and variability time scale from McHardy et al. 
2006, and mass and Luminosity estimates from the literature for MRK 335 and TON S180, we estimate the break frequencies to be: \(log\nu\sim\)-0.8 and 1.4 days\({}^{-1}\), respectively, which are higher than our cadence allows. For MRK 335, there is a hint of a break at the top of Figure 3, consistent with the expected value, but it is not significant. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{|c|}{**1H0707-495**} & \multicolumn{1}{|c|}{**MRK 335**} & \multicolumn{1}{|c|}{**TON S180**} \\ \hline \multirow{3}{*}{\(F_{p}\)} & \(A\) & \(2.57\pm 0.17\) & \(-0.87\pm 0.16\) & \(-2.26\pm 0.30\) \\ \cline{2-5} & \(\nu_{b}\) & \(-0.41\pm 0.19\) & - & - \\ \cline{2-5} & \(\alpha\) & - & \(-0.76\pm 0.11\) & \(-1.22\pm 0.25\) \\ \hline \multirow{3}{*}{\(F_{bb}\)} & \(A\) & \(1.96\pm 0.13\) & \(-1.37\pm 0.15\) & \(-2.19\pm 0.29\) \\ \cline{2-5} & \(\nu_{b}\) & \(-0.99\pm 0.16\) & - & - \\ \cline{1-1} \cline{2-5} & \(\alpha\) & - & \(-0.90\pm 0.15\) & \(-1.23\pm 0.20\) \\ \hline \end{tabular} \end{table} Table 1: Summary of the PSD fit parameters. For 1H0707-495, a zero-centered Lorentzian is fitted to the PSD, so the amplitude \(A\) (in units of RMS \(\times\) days) and break frequency \(\nu_{b}\) (in units of log days\({}^{-1}\)) are reported. For MRK 335 and TON S180, a power-law model is fitted, so the amplitude \(A\) and index \(\alpha\) are reported. Figure 3: Results of the PSD modeling for the fluxes of the blackbody (\(F_{bb}\)) and the coronal power-law components (\(F_{p}\)) for 1H0707-495, MRK 335 and TON S180. \(T\)op: The measured PSD values at 10 frequency bins as calculated using fqlag. The values and errors are the median and standard deviation from the MCMC chains. The y-axis is in units of RMS-normalized PSD and x-axis is the logarithm of the the Fourier frequency. \(B\)ottom: Smoothed density estimates for the probability density of the key parameter of interest, resulting from modeling the PSDs in the top panel by either a a zero-centered Lorentzian model (1H0707-495) or power-law model (MRK 335 and TON S180). For the former case, the logarithm of the break frequency is plotted, while for the latter, the integrated variability power (integral under the power-law PSD) in percentage units is plotted. In the bottom of Figure 3, we plot the probability distribution of a summary parameter that characterizes the variability of the flux of the two components. In the case of the 1H0707-495, we have a direct measure of the break frequency, so we use it. For MRK 335 and TON S180, no break in the PSD is measured, so we use the total integrated RMS variability (i.e. the integral under the PSD Vaughan et al., 2003). For each source, we compare the variability of the blackbody (\(F_{bb}\)) and power-law (\(F_{p}\)) spectral components. ## 3 Results & Discussion Section 2.2 (summarized in Figure 3), presented one way to characterize the variability from the fluxes of the soft excess component, modeled with a blackbody, and the hard coronal component, modeled with a power-law. It is not straight forward to convert these measurement to physical units. However, we can focus on the _relative_ scale of the parameters for the two spectral components. Figure 4: Probability density of the relative size of the soft excess to the hot corona. The relative size is obtained as the ratio of the summary parameter for soft excess flux (\(F_{bb}\)) and the corona (\(F_{p}\)). 
For 1H0707-495, the summary parameters is the PSD break frequency, while for the TON S180 and MRK 335, we use the total integrated RMS variability. The \(1\sigma\) single value estimates from these probability densities are: \(3.7^{+2.7}_{-1.5}\), \(2.2^{+1.0}_{-0.6}\) and \(0.9^{+1.0}_{-0.4}\) for 1H0707-495, MRK 335 and TON S180, respectively. Figure 4 shows the ratio between the summary parameters for the soft excess component (\(F_{bb}\)) and the corona component (\(F_{p}\)). The summary parameter is the characteristic time scale (i.e. the break frequency) for the case 1H0707-495, and the total integrated variability power in the case of the MRK 335 and TON S180. The 1\(\sigma\) single value estimates from these probability densities are: \(3.7^{+2.7}_{-1.5}\), \(2.2^{+1.0}_{-0.6}\) and \(0.9^{+1.0}_{-0.4}\) for 1H0707-495, MRK 335 and TON S180, respectively. For the case of 1H0707-495, both the break frequency and the total variability power can be measured. To check for consistency, we also measure the relative size using the variability power, we find: \(4.1^{+2.3}_{-1.5}\), which is similar to the value measured from the break frequency, supporting the robustness of the result. With the assumption that the parameter characterizing the variability (either the break frequency or the integrated RMS power) scale with the size of the emission region, the ratio in Figure 4 maps directly to the relative size of the emission regions producing the soft excess and the hard corona. This assumption is reasonable, and is justified by the many scaling relations observed in accreting black holes. This includes the scaling of variability time-scale with black hole mass (McHardy et al., 2006), the normalized excess variance (\(\sigma_{\rm{NXV}}^{2}\)) scaling with black hole mass (Papadakis, 2004; Ponti et al., 2012; Akylas et al., 2022), and the RMS variability on long time scales in optical light curves also scaling with mass (MacLeod et al., 2010). Note that what we refer to as size here is the radial location of the emission region, which is equivalent to region scale size given the usual assumption of symmetry in accretion disks. The plot in Figure 4 shows that the soft excess region is comparable in size to the hard corona in TON S180, while it is larger by a factor of 2-4 for the case of 1H0707-495 and MRK 335. This estimate does not make any assumptions about the nature of the soft excess emission, and so they provide new insight into the nature of the soft excess. A key requirement for the relativistic reflection model to explain the smooth soft excess, is that the emission has to originate very close to the black hole (only a few gravitation radii; \(r_{g}\) from the horizon). This is needed to blur out emission lines from O and Ne that are produced below 1 keV in in a partially-ionized gas. This in turn requires the illuminating corona to be very compact (Fabian et al., 2009; Zoghbi et al., 2010; Wilkins and Fabian, 2012; Walton et al., 2013; Jiang et al., 2019). So the primary X-ray source has to be very compact (only a few \(r_{g}\)), and the reflection is also very compact (Illustrated in the left panel of Figure 5). So to a first order, the reflector is expected to be _comparable_ in size to the primary corona. We confirmed this by running ray-tracing simulations around a spinning black hole. 
Although, we leave the detailed modeling to future work, our initial simulations show that for a standard lamp-post, the relative size is undefined because the corona is assumed to be a point source. If the an extension is added to the corona, we find that the size of the reflection region (as measured for example by the radius that encompasses 90% of the emission) is comparable to the size of the corona, consistent with emissivity profile studies (Wilkins & Fabian, 2012). The geometry for the warm corona model may not be as well defined, and it is typically an assumption in the model. In some models (e.g. Petrucci et al., 2013) (Illustrated in the middle panel of Figure 5), an optically-thin hot corona (\(kT\sim 100\) keV; \(\tau\sim 1\)) is present in the inner parts of the accretion flow, producing the hard emission. The outer accretion flow is a vertically structured accretion disk, with cold and optically thick matter in the deeper layers, while the upper layers are composed of an optically-thick warm corona (\(kT\sim 1\) keV; \(\tau\sim 10-20\)) that is powered by internal heating. The prediction from these geometries is that the soft excess region is _larger_ in size than the hot corona. Other models (e.g. Done et al., 2012) assume that the emission thermalizes to a (color temperature corrected) blackbody only at large radii (Illustrated in the right panel of Figure 5). At smaller radii the gravitational energy is split between powering optically thick Comptonized disk emission, forming the soft X-ray excess, and an optically thin corona above the disk, forming the hot corona tail at higher energies. Similar to the reflection case, the prediction here is that the soft excess region is comparable in size to the hot corona. Other variants of the model (Kubota & Done, 2018) assume an inner hot corona up to \(R_{\rm hot}\), then warm corona up to \(R_{\rm warm}\) and then the standard cold disk. In that specific study, a model for the spectral energy distribution can be fitted to the observations, and the different radii can be inferred. Those result suggest a relative size of \(R_{\rm warm}/R_{\rm hot}\sim 2-4\) for the sources in that study, which is similar to what we measure for 1H0707-495 and MRK 335. The geometry is slightly different from what Figure 5: Illustration of the different configurations that correspond to the different models proposed for the soft excess. The left scenario is for a relativistic reflection model, where the hot corona (blue) illuminates the disk producing emission lines, which when produced very close to the black hole are broadened and smeared out. The other two images are for the case of warm corona: A layer above the disk that is heated by internal dissipation that is distinct from the hot corona (Ponti et al., 2013), and a co-located hot and warm corona that radiate inward of some radius \(R_{\rm corona}\), while a standard disk emits outside it (Done et al., 2012). we sketched in Figure 5, but it would fall under the middle sketch where the warm corona is outside the hot corona. Our results suggest that for TON S180, where the size of the soft excess region is comparable to the hot corona, the relativistic reflection model or the warm corona that sandwiches a hot corona are possible geometries. On the other hand, for 1H0707-495 and MRK 335, a warm corona that is larger than the hot corona appears to be more consistent with the data. In our discussion of the relative sizes, we are ignoring the detailed geometries (e.g. 
spherical vs flat) and viewing angle. For a simple spherical emission region, there is only one scale size that controls the variability. For other shapes, say a torus, there are different scales in different directions, but the variability should be dominated by the largest scale, regardless of our viewing angle. Consequently, the flux may depend on the viewing angle, but the variability timescale does not. We plan to continue monitoring these and other sources with a soft excess to obtain further constraints. The code used to produce these results is available on Zenodo (Zoghbi, 2023). The data products are available on the Open Science Framework site (Zoghbi, 2023). The material is based upon work supported by NASA under award numbers 80GSFC21M0002 and 80NSSC23K0333. NICER(XTI), HEASARC
2310.04437
Extended superposition theorem under power grid topological changes
Standard superposition theorem has been the basis in the last decades for many power system problems decomposition involving changes in nodal injections, from productions and loads. Its application scope has however been limited to fixed grid topology and breaks as soon as a topology change happens in the grid. For instance, it cannot be applied to compute N-2 power flows simply from N-1 security analysis. Topological changes also become a flexibility used more and more frequently for congestion management. Studying the effect of combinatorial topological changes is hence of interest, but so far very computation intensive. In this paper, we propose an extension of the superposition theorem to varying grid topologies. We demonstrate it under the DC approximation for all topological changes, namely line disconnection and reconnection, bus splitting and merging. We finally apply it to two use cases related to the above mentioned, effectively extending its scope of application.
Antoine Marot, Benjamin Donnot, Noureddine Henka, Sami Tazi
2023-09-30T19:35:11Z
http://arxiv.org/abs/2310.04437v1
# Extended superposition theorem under power grid topological changes ###### Abstract Standard superposition theorem has been the basis in the last decades for many power system problems decomposition involving changes in nodal injections, from productions and loads. Its application scope has however been limited to fixed grid topology and breaks as soon as a topology change happens in the grid. For instance, it cannot be applied to compute N-2 power flows simply from N-1 security analysis. Topological changes also become a flexibility used more and more frequently for congestion management. Studying the effect of combinatorial topological changes is hence of interest, but so far very computation intensive. In this paper, we propose an extension of the superposition theorem to varying grid topologies. We demonstrate it under the DC approximation for all topological changes, namely line disconnection and reconnection, bus splitting and merging. We finally apply it to two use cases related to the above mentioned, effectively extending its scope of application. superposition theorem, topology change, decomposition, power flow ## I Introduction Superposition theorem (ST) in electrical circuit is a century-old scientific foundation presented in every reference textbook [1]. It has helped make tremendous progress in the design, analysis and control of electrical circuits, with regards to power flows in power grids in particular. But so far, it has not been explicitly extended to electrical circuits with changing topologies. At the time of steep Energy Transition requiring more flexible operations [2], topological changes such as node splitting will happen more frequently. Tools with better performance under these conditions are needed [3]. Such changes usually requires to recompute the costly underlying matrix factorization of classical power flow methods. Our ST extension aims at bypassing this computation. There exists preliminary work in the direction of this extended ST in the form of Line Outage Distribution Factors (LODFs) [4, 5, 6]. Indeed [7] first proposed generalized LODF for multiple outages to be computed in an efficient form by solving a system of linear equations in the dimension of line outages. Bus Splitting Distribution Factors (BSDFs) were proposed lately [8] to compute unitary bus splitting change from PTDF. Yet none of these works make the connection with an underlying ST and only deal with specific cases. In the following, we propose to first extend ST to line reconnection, bus splitting and merging. We further unify them under a single extended ST, using only minimal necessary information. Instead of a pure calculus demonstration as in previous works, we rely mostly on grid equivalent model interplay as well on the existing ST. This comes with interpretability benefit. Every kind of topological change can eventually fall into a joint and unique linear system to solve in the dimension of unitary changes. We finally share numerical results of the extended ST implementation, to show its equivalence to power flow solvers in terms of accuracy under all topological change canonical cases. We finally run experiments over two applications, namely security analysis following a topological change and topological remedial action search. We analyze the gains of this method in terms of speed-up and interpretability with comparison to existing baseline methods. 
## II Extended Superposition theorem We consider a grid with production and loads at electrical bus bar nodes, represented by the nodal injection vector \(Bbus\), and branches connected under a given grid topology we denote by \(T\). Our aim is to compute power flows \(PF(Bbus,T)\). We will restrict ourself in this paper to the DC approximation [9] and work under the assumption that the grid remains connected and does not get split apart in multiple components. ### _Definition_ Given a linear combination of changes in nodal injection vectors \(\delta Bbus^{i}\) that adds up to a reference nodal injection vector \(Bbus^{ref}\) to sum up to a target vector \(Bbus^{tgt}\), the standard ST decomposes the resulting grid state, namely a power flow \(PF(Bbus^{tgt},T)\), as the linear combination of grid states of single input elements for a fixed grid topology \(T\): \[PF(Bbus^{tgt},T)=PF(Bbus^{ref},T)+\sum_{i}PF(\delta Bbus^{i},T)\\ \text{with }Bbus^{tgt}=Bbus^{ref}+\sum_{i}\delta Bbus^{1} \tag{1}\] This superposition theorem have proven very useful to decompose the problem analytically, allowing for either more efficient computations or better interpretability when analyzing some grid phenomena. We aim at transposing this handy tool to topological changes, eg. change in the topology \(T\). Similarly we propose to extend it in the following to topological changes. Starting from \(T_{ref}\) as the reference topology to which we apply topological changes \(\tau_{i}\) in indifferent order, we reach a target topology \(\mathcal{T}^{tgt}\). In this case, as illustrated in Figure 1, we will demonstrate the following extended ST: \[PF(Bbus^{ref},\mathcal{T}^{tgt}) =\alpha PF(Bbus^{ref},\mathcal{T}^{ref})+\] \[\sum_{i}\beta^{i}PF(Bbus^{ref},\mathcal{T}^{i})\] \[\text{with }\mathcal{T}^{tgt}=\mathcal{T}^{ref}\odot\tau_{1} \odot...\odot\tau_{N_{r}}\text{ and }\alpha=1-\sum_{i}\beta^{i}\] Note that this decomposition is a weighted linear combination instead of a pure linear one as before. This requires to compute these weights. Finding the betas stems from solving a linear system of dimension the number of considered changes \(N_{\tau}\). Yet only minimal information is needed for this: * \(\text{Ifo}_{l}^{T^{ref}}\), resp. \(\text{Ifo}_{l}^{T_{k}}\), as state variables from reference power flow state \(PF(T^{ref})\), resp. from each unitary topological change \(PF(T^{\tau_{k}})\) state, as in the ST equation. * the nature of \(\tau_{k}\), which assets are impacted by it and how. Beyond this, no other knowledge such as underlying grid properties, complete adjacency matrix or complete topology information is needed. We will eventually demonstrate that the linear system to solve for computing the beta coefficients is of the form : \[\begin{bmatrix}1&1&1-\frac{\text{Ifo}_{l_{1}}^{T^{\tau_{2}}}}{\text{Ifo}_{l_{1} }^{T^{ref}}}&...&1-\frac{\text{Ifo}_{l_{1}}^{T^{\tau_{n}}}}{\text{Ifo}_{l_{2} }^{T^{ref}}}\\ 1-\frac{\text{Ifo}_{l_{2}}^{T^{\tau_{1}}}}{\text{Ifo}_{l_{2}}^{T^{ref}}}&1&... &1-\frac{\text{Ifo}_{l_{2}}^{T^{\tau_{2}}}}{\text{Ifo}_{l_{2}}^{T^{ref}}}\\...&...&...&...\\ 1-\frac{\text{Ifo}_{l_{1}}^{T^{\tau_{1}}}}{\text{Ifo}_{l_{1}}^{T^{ref}}}&1- \frac{\text{Ifo}_{l_{2}}^{T^{\tau_{2}}}}{\text{Ifo}_{l_{2}}^{T^{ref}}}&...&1 \end{bmatrix}\begin{bmatrix}\beta 1\\ \beta 2\\...\\ \beta n\end{bmatrix}=\begin{bmatrix}1\\ 1\\ 1\\ 1\end{bmatrix} \tag{2}\] If\(\text{Ifo}_{l_{i}}^{T}\) is either \(pf_{l_{i}}^{T}\) or \(\Delta\theta_{l_{i}}^{T}\) depending on the nature of \(\tau_{k}\). 
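As a sketch of how the system (2) can be assembled and solved in practice, the following minimal NumPy example uses illustrative values for the quantities \(\text{Ifo}_{l_i}\) (the flow or phase difference of the line impacted by each unitary change, taken from the reference state and from each unitary-change state); the diagonal entries of the unitary-state array are irrelevant since the diagonal of (2) is fixed to 1.

```python
import numpy as np

# Illustrative values of Ifo for 3 unitary topological changes: ifo_ref[i] is
# Ifo of the line impacted by change i in the reference state; ifo_uni[i, j] is
# the same quantity in the state obtained by applying only change j.
ifo_ref = np.array([1.2, -0.8, 0.5])
ifo_uni = np.array([[0.0, 0.9, 1.4],
                    [-0.6, 0.0, -1.1],
                    [0.7, 0.3, 0.0]])

A = 1.0 - ifo_uni / ifo_ref[:, None]   # off-diagonal terms of (2)
np.fill_diagonal(A, 1.0)               # diagonal terms of (2) are 1
beta = np.linalg.solve(A, np.ones(len(ifo_ref)))
alpha = 1.0 - beta.sum()

# Extended superposition: PF_target = alpha * PF_ref + sum_i beta[i] * PF_uni[i]
print(alpha, beta)
```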
### _two equivalent models_ We now describe an equivalent grid state model for disconnected lines that will be the foundation for extended ST demonstrations. Disconnecting a line leading to Topology \(T_{O}\) is actually equivalent of virtually injecting a cancelling flow \(cf_{l}^{T_{O}}\) (or \(Bcf_{l}^{T_{O}}\) in nodal injection vector form) at that line \(l\)[7, 10] while keeping this line, as well as other lines to disconnect, virtually connected as in Figure 2. Given Fig. 1: Extended ST example on IEEE14, starting from a meshed topology (top left) to which you apply 2 node splitting actions at substations 4 and 5 (bottom right). Displayed ST coefficients are derived from initial and unitary action states. \begin{table} \begin{tabular}{l l} **symbols** & \\ \(\downarrow\) & line \(l\) disconnected \\ \(=l\) & line \(l\) reconnected \\ \(\odot\) & composition symbol \\ **Sets** & \\ \(C\) & set of lines \(l\) to reconnect \\ \(O\) & set of lines \(l\) in outage \\ \(C_{\setminus o}\) & set of lines \(l\) to reconnect except line \(o\) \\ \(C_{1,2}\) or \(C_{1,2,3}\) & set of 2 or 3 lines \(l\) to reconnect \\ \(O_{1,2}\) or \(O_{1,2,3}\) & set of 2 or 3 lines \(l\) in outage \\ \end{tabular} \end{table} TABLE I: Notation \(N_{O}\) disconnected lines from \(T^{ref}\), this equivalence can be represented by: \[PF(Bbus^{ref},T_{O})=PF(Bbus^{ref}+\sum_{l\in O}Bcf_{l}^{T_{O}},T^{ref}) \tag{3}\] The grid topology in the virtual injection model is hence \(T^{ref}\) as the lines remain virtually connected. These models are equivalent in that they result in the same power flows. Phases at nodes also remain the same under the same reference phase node. Indeed, as a quick check, all connected line flows remains the same, hence the difference of phases \(\Delta\theta_{l}\) at their extremity remain the same. From the reference node, applying identical phase difference hops by hops to neighboring line extremities gives you the same phases for each nodes eventually. In this equivalent model, there exist a ohm's law induced virtual power flow \(vf_{l}^{T_{O}}\) at line \(l\) still virtually connected: \[vt_{l}^{T_{O}}=\sigma_{l}\Delta\theta_{l}^{T_{O}} \tag{4}\] This is based on the fact that phases remain the same in the disconnected case and virtually connected case. The related flows to virtual injections \(Bcf^{T_{O}}\) cancel \(vf_{l}^{T_{O}}\) out in the end. ### _Line disconnections_ Let's first consider the restricted topological changes case of line disconnections \(o\in O\). We depart from a reference topology in which those lines are connected \(T_{C}^{ref}\) and reach a topology \(T_{O}^{tgt}\) with all lines disconnected. The proposed extended ST in this case has been already indirectly demonstrated in Generalize LODF paper [7]. Indeed they compute multi-outage power flow solving a linear system of LODF, which is composed of reference and unitary outage power flow states. However our new demonstration from scratch here will pave the way for the next demonstrations too. To derive the linear system of equations to solve, that will further (re)demonstrate the extended ST theorem, we work with the equivalent virtual injection models. In the one-line disconnection case, the cancelling flow in this model is \(cf_{l}^{T^{r^{l}}}=-pf_{l}^{T^{ref}}=-vt_{l}^{T^{r^{l}}}\). 
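The equivalence (3) can be checked numerically on a toy DC network. The sketch below uses made-up data for a 4-bus grid: it solves the grid with one line actually removed, then solves the full grid again with that line kept but with its Ohm's-law virtual flow injected at one extremity and withdrawn at the other, and verifies that both solutions have the same phases, hence the same flows on every other line; the flow the virtually connected line still carries is exactly \(vf_l\), which the injected pair absorbs.

```python
import numpy as np

# (from_bus, to_bus, susceptance) of a toy 4-bus grid; bus 0 is the slack.
lines = [(0, 1, 10.0), (0, 2, 8.0), (1, 2, 5.0), (1, 3, 4.0), (2, 3, 6.0)]
P = np.array([0.0, 0.9, -0.4, -0.5])   # nodal injections (sum to zero)

def dc_solve(line_set, injections):
    """DC power flow: bus phases (slack phase = 0) for the given set of lines."""
    n = len(injections)
    B = np.zeros((n, n))
    for f, t, b in line_set:
        B[f, f] += b
        B[t, t] += b
        B[f, t] -= b
        B[t, f] -= b
    theta = np.zeros(n)
    theta[1:] = np.linalg.solve(B[1:, 1:], injections[1:])
    return theta

# Actual outage of line 2 (the 1-2 line).
out = 2
theta_out = dc_solve([l for i, l in enumerate(lines) if i != out], P)

# Equivalent model: keep line 2 connected and inject its virtual Ohm's-law flow
# vf at its from-bus while withdrawing it at its to-bus (the cancelling injection).
f, t, b = lines[out]
vf = b * (theta_out[f] - theta_out[t])
P_eq = P.copy()
P_eq[f] += vf
P_eq[t] -= vf
theta_eq = dc_solve(lines, P_eq)

print(np.allclose(theta_out, theta_eq))   # True: identical phases, as in (3)-(4)
```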
In the multiple line disconnection case, virtually injected cancelling flows on other lines to disconnect (but virtually connected in the equivalent model) induce additional virtual flows on the line of interest as can be derived from standard ST(1): \[PF(B_{bus}^{ref},T_{O}^{tgt}) =PF(B_{bus}^{ref}+\sum_{l\in O}Bcf_{l}^{T_{O}^{tgt}},T^{ref})\] \[=PF(B_{bus}^{ref},T^{ref})+\sum_{l\in O}PF(Bcf_{l}^{T_{O}^{tgt}},T^{ref}) \tag{5}\] To derive the new powerflow terms, we will make use of line outage distributions factors aka LODFs: \[LODF_{o,l}=\frac{pf_{l}^{T^{r^{\setminus o}}}-pf_{l}^{ref}}{pf_{l}^{ref}} \tag{6}\] Note that LODF remain constant for a given topology such as \(T^{ref}\). In the equivalent modeling, as its topology is \(T^{ref}\), we can hence reuse the same LODF computed in the reference topology. Using \(LODF\), \(PF(Bcf_{l}^{T_{O}^{tgt}},T^{ref})\) is simply: \[PF(Bcf_{l}^{T_{O}^{tgt}},T^{ref})=LODF_{l}\times cf_{l}^{T_{O}^{tgt}} \tag{7}\] Which eventually leads by substitution to the equation: \[PF(B_{bus}^{ref},T_{O}^{tgt})=PF(B_{bus}^{ref},T_{C}^{ref})+\sum_{l\in O} LODF_{l}\times cf_{l}^{T_{O}^{tgt}} \tag{8}\] As \(pf_{l}^{T_{O}}=0\) for disconnected lines, this lead to the system of \(N_{o}\) independant cancelling flow equations: \[pf_{l}^{ref}+\sum_{o\in O}LODF_{o,l}\times cf_{o}^{T_{O}}=0\text{ for all }l\in O \tag{9}\] The virtual induced flow \(vt_{l}\) on line \(l\) is then: \[vf_{l}^{T_{O}}=pf_{l}^{ref}+\sum_{o\in O,o\neq l}LODF_{o,l}\times cf_{o}^{T_{ O}} \tag{10}\] From the initial flow \(pf_{l}^{ref}\), the contribution of other cancelling flows are added up to result in this induced flow. Given LODF definition from (6) the extended ST is: \[PF(T_{O}^{tgt}) =PF(T_{C}^{ref})+\sum_{l\in O}(PF(T^{r_{\setminus l}})-PF(T_{C}^ {ref}))\frac{cf_{l}^{T_{O}^{tgt}}}{pf_{l}^{T_{C}^{ref}}q}\] \[=\alpha\times PF(T_{C}^{ref})+\sum_{l\in O}\beta^{l}\times PF(T^{ \setminus_{l}})\] \[\text{with }\alpha=(1-\sum_{l\in O}\frac{cf_{l}^{T_{O}^{tgt}}}{pf_{l}^{ T_{O}^{ref}}})\text{ and }\beta^{l}=\frac{cf_{l}^{T_{O}^{tgt}}}{pf_{l}^{T_{C}^{ref}}} \tag{11}\] Fig. 2: Left, Power Flow in the reference topology for the two lines to disconnect. Right, the two lines disconnected with two equivalent models: the usual one with physical line disconnections on top, and the cancelling flow model at the bottom with lines still virtually connected where we find that: \[\alpha=1-\sum_{l\in O}\beta^{l} \tag{12}\] Substituting \(\beta^{l}\) for \(cf_{l}^{T_{O}^{gt}}\) in (9), we have: \[pf_{l}^{ref}+\sum_{o\in O}LODF_{o,l}\times\beta^{o}pf_{o}^{ref}=0\text{ for all }l\in O \tag{13}\] And reusing LODF definition from (6) we recover (2): \[pf_{l}^{ref}=\beta^{l}\times pf_{l}^{ref}+\sum_{o\in O,o\neq l}( \frac{pf_{l}^{ref}-pf_{l}^{T^{\setminus o}}}{pf_{o}^{ref}})\beta^{o}pf_{o}^{ ref}\] \[1=\beta^{l}\times 1+\sum_{o\in O,o\neq l}(1-\frac{pf_{l}^{T^{ \setminus o}}}{pf_{l}^{ref}})\beta^{o}\text{ for all }l\in O \tag{14}\] We see from (8) or (11) that it solely relies here on reusing known quantities \(PF(T^{ref})\), \(PF(T^{\setminus o})\) (or \(LODF\) equivalently), and requires to solve an additional linear system of equations of the size of the number of topological changes, that is the number of line disconnections here. ### _Line reconnections_ In this section, we are considering line reconnections \(\tau_{\neg l}\) topological changes from a reference topology \(\mathbf{T}_{O}^{ref}\) in which those lines were initially not connected. 
So we are changing state in a reverse order from Figure 2: the initial state is the one with disconnected lines as on the right and the final state is with connected lines as on the left. Here we cannot really reuse the linear system (9) as is, as there is not equivalent of \(pf_{l}^{ref}\) for initially disconnected lines \(l\) in this case. However we can reuse the extended ST for line disconnections from previous section II-C, to prove it for reconnections such that: \[PF(T_{C}^{gt})=\alpha_{C}PF(T_{O}^{ref})+\sum_{l_{i}\in C}\beta _{C}^{l_{i}}PF(T_{O}^{\tau_{\neg l_{i}}})\\ \text{with }\alpha_{C}=1-\sum_{l_{i}\in C}\beta_{C}^{l_{i}} \tag{15}\] #### Iii-D1 Demonstration We first demonstrate that if the ST exists, there is a unique decomposition, and them show its existence. UnicityFrom (15), we can deduce that the \(\beta\) coefficient would have a unique value as simple as: \[\beta_{C}^{l_{i}}=\frac{pf_{f_{o}^{gt}}^{l_{i}}}{pf_{f_{o}^{l_{i}}}^{l_{i}}} \tag{16}\] This stems from the fact that flow at line \(l_{i}\) is non null only in topologies \(\mathbf{T}_{C}^{gt}\) and \(\mathbf{T}_{O}^{\tau_{\neg l_{i}}}\) in which the line is connected. For all the other topologies, \(l_{i}\) is disconnected with null flow. Hence all other terms for \(l_{i}\) index are null in extended ST for line reconnections. Even if we have not yet computed \(pf_{f_{o}^{gt},2,3}^{l_{i}}\), this proves the uniqueness of \(\beta\) coefficients, as well as \(\alpha\) by transitivity, given the uniqueness of powerflows. Existence two line reconnections caseWe can start by reusing (11) with reverse roles of reference and target topologies: \[PF(\mathbf{T}_{O_{1,2}}^{ref})=\alpha_{O_{1,2}}\times PF(\mathbf{T}_{O_{1,2}}^{gt})+ \sum_{i\in\{1,2\}}\beta_{O_{1,2}}^{l_{i}}\times PF(\mathbf{T}_{O_{1,2}}^{\tau_{ \neg l_{i}}}) \tag{17}\] This time \(PF(\mathbf{T}_{O_{1,2}}^{gt})\) is the superposed state we are looking for. In the 2 lines reconnection case, the unitary reconnection grid topologies \(\mathbf{T}_{O_{1,2}}^{\tau_{\neg l_{i}}}\) are the same as the unitary disconnection \(T_{O_{1,2}}^{\tau_{\neg l_{i}}}\) as \(\mathbf{T}_{C_{1,2}}^{ref}\circ\tau_{\neg l}=\mathbf{T}_{O_{1,2}}^{ref}\circ\tau_{ \neg l}\). Hence \(PF(T_{O_{1,2}}^{\tau_{\neg l_{i}}})=PF(T_{O_{1,2}}^{\tau_{\neg l_{i}}})\). Rearranging (17) leads to: \[PF(T_{C_{1,2}})=\frac{1}{\alpha_{O_{1,2}}}(PF(\mathbf{T}_{O_{1,2}}^{ ref})-\sum_{i\in\{1,2\}}\beta_{O_{1,2}}^{l_{i}}\times PF(\mathbf{T}_{O_{1,2}}^{ \tau_{\neg l_{i}}}))\\ PF(T_{C_{1,2}})=\alpha_{C_{1,2}}\times PF(\mathbf{T}_{O_{1,2}}^{ ref})+\sum_{i\in\{1,2\}}\beta_{C_{1,2}}^{l_{i}}\times PF(\mathbf{T}_{O_{1,2}}^{ \tau_{\neg l_{i}}})\\ \text{with }\alpha_{C_{1,2}}=\frac{1}{\alpha_{O_{1,2}}}\text{ and }\beta_{C_{1,2}}^{l_{i}}=\frac{-\beta_{O_{1,2}}^{l_{i}}}{\alpha_{O_{1,2}}} \tag{18}\] From (12), we have: \[\frac{1}{\alpha_{O_{1,2}}}=1+\sum_{i\in\{1,2\}}\frac{-\beta_{O_{1,2}}^{l_{i}} }{\alpha_{O_{1,2}}} \tag{19}\] So we recover: \[\alpha_{C_{1,2}}=1-\sum_{i\in\{1,2\}}\beta_{C_{1,2}}^{l_{i}} \tag{20}\] This works out when \(\alpha_{O_{1,2}}\) is non null. As \(\alpha_{C}\) and \(\beta^{l_{i}}\) are all properly defined from (16) and cannot be infinite, \(\alpha_{O_{1,2}}\) is indeed never null. For more than two lines reconnections, we can recursively apply (18). 
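As a small numerical illustration of (18)-(20), with made-up but mutually consistent coefficients for the two-line disconnection decomposition, the conversion to reconnection weights indeed preserves the relation \(\alpha=1-\sum_i\beta^{l_i}\):

```python
# Illustrative two-line case: (alpha_O, beta_O) would come from the system (14).
alpha_O = 0.4
beta_O = {"l1": 0.35, "l2": 0.25}          # note alpha_O = 1 - sum(beta_O)

alpha_C = 1.0 / alpha_O                     # first relation in (18)
beta_C = {l: -b / alpha_O for l, b in beta_O.items()}   # second relation in (18)

assert abs(alpha_C - (1.0 - sum(beta_C.values()))) < 1e-12   # recovers (20)
```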
Three line reconnections case and more with recursionWe start by reusing ST for line disconnections as previously: \[PF(T_{O_{1,2,3}}^{ref})=\alpha_{O_{1,2,3}}PF(T_{O_{1,2,3}}^{gt})+\sum_{i}\beta _{O_{1,2,3}}^{l_{i}}PF(T_{O_{1,2,3}}^{\tau_{\neg l_{i}}}) \tag{21}\] Applying (18) for two line reconnections, we have for instance here: \[PF(T_{C_{1,2,3}}^{\tau_{\neg l_{1}}}) =PF(T_{O_{1,2,3}}^{\tau_{\neg(l_{2},l_{3})}})\] \[=\alpha_{C_{2,3}}PF(T_{O_{1,2,3}}^{ref})+\sum_{i\in\{2,3\}}\beta_{C _{2,3}}^{l_{i}}PF(\mathbf{T}_{O_{1,2,3}}^{\tau_{\neg l_{i}}}) \tag{22}\] Substituting (22) in (21) we obtain: \[PF(T_{O_{1,2,3}}^{ref})=\alpha_{O_{1,2,3}}PF(T_{C_{1,2,3}}^{gtgt})\\ +\sum_{i\in\{1,2,3\}}\beta_{O_{1,2,3}}^{l_{i}}(\alpha_{C_{\backslash i }}PF(T_{O_{1,2,3}}^{ref})+\sum_{j\in\{1,2,3\}\backslash i}\beta_{C_{\backslash i }}^{l_{j}}PF(\mathbf{T}_{O_{1,2,3}}^{\tau_{\neg l_{i}}})) \tag{23}\] To finally reach the ST equation: \[\begin{split} PF(\mathbf{T}_{C_{1,2,3}}^{tgt})=\alpha_{C_{1,2,3}}PF(T_{ O_{1,2,3}}^{ref})+\sum_{l\in\{1,2,3\}}\beta_{C}^{l_{1}}pF(T_{O_{1,2,3}}^{\tau=l_{i}}) \\ \text{with }\alpha_{C_{1,2,3}}=\frac{1-\sum_{i\in\{1,2,3\}}\beta_{ O_{1,2,3}}^{l_{i}}\alpha_{C_{\backslash i}}}{\alpha_{O_{1,2,3}}}\\ \text{and }\beta_{C_{1,2,3}}^{l_{i}}=\frac{-\beta_{O_{1,2,3}}^{l_{i}} \sum_{j\in\{1,2,3\}\backslash i}\beta_{C_{\backslash i}}^{l_{j}}}{\alpha_{O_{1,2,3}}}\end{split} \tag{24}\] Reusing (20) we have: \[\alpha_{C_{\backslash i}}=1-\sum_{j\in\{1,2,3\}\backslash i}\beta_{C_{ \backslash i}}^{j} \tag{25}\] Which we substitutes in (24) to recover: \[\alpha_{C_{1,2,3}}=1-\sum_{i\in\{1,2,3\}}\beta_{C_{1,2,3}}^{l_{i}} \tag{26}\] By applying the same recursion, we find for any number of line reconnection the existence of a extended ST: \[\begin{split} PF(B_{bus},T_{C})=\alpha_{C}PF(B_{bus},T_{O}^{ref})+ \sum_{l_{i}\in C}\beta_{C}^{l_{i}}PF(B_{bus},T^{\tau=l_{i}})\end{split} \tag{27}\] #### Iii-B2 Linear system of equations to solve We will use the extended ST we just demonstrated in (27) to actually determine the linear system of equations to solve. It will be based on phases at line extremities of interest. The quantity we indeed know in the reference state and unitary reconnection states are virtual induced flows such as \(vt_{l}^{T_{O}}=\sigma_{l}\Delta\theta_{l}^{T_{O}}\). Our objective is to derive equations based on these terms. Let's hence consider once again the equivalent model of working with the fully connected topology all along. 
Lines switched off remains virtually connected which induces a flow \(vt_{l}\), while cancelling flows are virtually injected on them to result in a null flow as in Figure 2: \[pf_{I}^{T_{O}}=vt_{l}^{T_{O}}+cf_{I}^{T_{O}}=0 \tag{28}\] From the ST Theorem in (27), we have \(N_{C}=card(C)\) equations for each line \(l\in C\) to reconnect: \[\begin{split} pf_{I}^{T_{C}}-\beta_{C}^{l}pf_{I}^{T^{\tau=l_{i}}}& =0\\ &=\alpha_{C}pf_{I}^{T_{O}}+\sum_{l_{i}\in C,l_{i}\neq l}\beta_{C} ^{l_{i}}pf_{I}^{T^{\tau=l_{i}}_{O}}\end{split} \tag{29}\] To make \(vt_{l}\) terms appear, we substitute (28) leading to: \[\alpha_{C}(cf_{I}^{T_{O}}+vf_{I}^{T_{O}})+\sum_{l_{i}\in C,l_{i}\neq l}\beta_{ C}^{l_{i}}(cf_{I}^{T^{\tau=l_{i}}_{O}}+vf_{I}^{T^{\tau=l_{i}}_{O}})=0 \tag{30}\] Rearranging it as a sum of cancelling flows and a sum of virtual flows, we get: \[(\alpha_{C}cf_{I}^{T_{O}}+\sum_{l_{i}\in C,l_{i}\neq l}\beta_{C}^{l_{i}}ef_{I }^{T^{\tau=l_{i}}_{O}})+(\alpha_{C}vf_{I}^{T_{O}}+\sum_{l_{i}\in C,l_{i}\neq l} \beta_{C}^{l_{i}}vf_{I}^{T^{\tau=l_{i}}_{O}})=0 \tag{31}\] We now demonstrate that unknown virtually injected flows \(cf_{I}\) cancels out in the ST equation, so that we remain with only shown \(vt_{l}\) terms to derive our linear system of equations. Equation 27 can be rewritten as: \[\begin{split} PF(B_{bus},T_{C})=\alpha_{C}PF(B_{bus}+Bcf^{T_{O}},T_{C})\\ +\sum_{l_{i}\in C}\beta_{C}^{l_{i}}PF(B_{bus}+Bcf^{T^{\tau=l_{i} }_{O}},T_{C})\end{split} \tag{32}\] Using the standard ST Theorem, we have for instance: \[\begin{split} PF(B_{bus}+Bcf^{T_{O}})=PF(B_{bus})+PF(Bcf^{T_{O}} )\end{split} \tag{33}\] So we can rearrange (32), with left hand side null given (26): \[(1-\alpha_{C}-\sum_{l\in C}\beta_{C}^{l_{i}})PF(B_{bus})=0 \tag{34}\] So right hand side is also null: \[\begin{split} PF(\alpha_{C}Bcf^{T_{O}})+\sum_{l\in C}PF(\beta_{ C}^{l_{i}}Bcf^{T^{\tau=l_{i}}_{O}})=0\\ PF(\alpha_{C}Bcf^{T_{O}}+\sum_{l\in C}\beta_{C}^{l_{i}}Bcf^{T^{ \tau=l_{i}}_{O}}_{O})=0\end{split} \tag{35}\] Null power flows all over the grid is only possible if all nodal injections are null. This leads to: \[\begin{split}\alpha_{C}Bcf^{T_{O}}+\sum_{l\in C}\beta_{C}^{l_{i} }Bcf^{T^{\tau=l_{i}}_{O}}=0\\ \alpha_{C}cf_{I}^{T_{O}}+\sum_{l_{i}\in C,l_{i}\neq l}\beta_{C}^{l _{i}}cf_{I}^{T^{\tau=l_{i}}_{O}}=0\text{ for all 1}\end{split} \tag{36}\] This ends our demonstration that virtually injected cancelling flows at each line to reconnect cancel out through the superposition of grid states in ST. From (31) we are left with our linear system of equations: \[\begin{split}\alpha_{C}vf_{I}^{T_{O}}+\sum_{l_{i}\in C,l_{i}\neq l }\beta_{C}^{l_{i}}vf_{I}^{T^{\tau=l_{i}}_{O}}=0\\ \alpha_{C}\Delta\theta_{l}^{T_{O}}+\sum_{l_{i}\in C,l_{i}\neq l} \beta_{C}^{l_{i}}\Delta\theta_{l}^{T^{\tau=l_{i}}_{O}}=0\\ \alpha_{C}+\sum_{l_{i}\in C,l_{i}\neq l}\beta_{C}^{l_{i}}\frac{ \Delta\theta_{l}^{T^{\tau=l_{i}}}_{O}}{\Delta\theta_{l}^{T_{O}}}=0\end{split} \tag{37}\] assuming \(\Delta\theta_{l}^{T_{O}}\) is always non null at extremity of disconnected lines. By resusing (26), we have \[(1-\sum_{l_{i}\in C}\beta_{C}^{l_{i}})+\sum_{l_{i}\in C,l_{i}\neq l}\beta_{C}^{l _{i}}\frac{\Delta\Theta_{l}^{\tau=l_{i}}}{\Delta\Theta_{l}^{T_{O}}}=0 \tag{38}\] \[\beta_{C}^{l}\times 1+\sum_{l_{i}\in C,l_{i}\neq l}\beta_{C}^{l_{i}}(1-\frac{ \Delta\Theta_{l}^{\tau=l_{i}}}{\Delta\Theta_{l}^{T_{O}}})=1 \tag{39}\] where we recover a system of equation in the form of (2). 
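To make the shape of these small systems concrete, the following is a minimal NumPy sketch (an illustration only, not the released Grid2Op-based implementation mentioned in Section III) that assembles and solves a system of the form (2) and then recombines the full-grid flows as in (11) or (27). For line disconnections the per-line quantity is the reference flow \(pf_{l}^{ref}\) as in (14); for line reconnections it is the virtual phase difference \(\Delta\theta_{l}\) as in (39). The input arrays are assumed to come from any DC power flow tool.

```python
import numpy as np

def solve_st_coefficients(q_ref, q_unit):
    """Assemble and solve the linear system of form (2).

    q_ref[l]     : quantity on changed line l in the reference topology
                   (pf_l^ref for a disconnection, delta_theta_l for a reconnection).
    q_unit[l, i] : same quantity on line l in the unitary-change topology where
                   only change i is applied.
    Returns (alpha, beta) with alpha = 1 - sum(beta).
    """
    n = len(q_ref)
    A = np.eye(n)
    for l in range(n):
        for i in range(n):
            if i != l:
                A[l, i] = 1.0 - q_unit[l, i] / q_ref[l]
    beta = np.linalg.solve(A, np.ones(n))
    return 1.0 - beta.sum(), beta

def superpose_flows(alpha, beta, pf_ref, pf_unit):
    """Recombine full-grid flows as in (11)/(27):
    PF(target) = alpha * PF(ref) + sum_i beta_i * PF(unitary change i).
    pf_ref has shape (n_branches,), pf_unit has shape (n_branches, n_changes)."""
    return alpha * pf_ref + pf_unit @ beta
```

The system has only as many unknowns as there are unitary changes, which is what yields the speed-up discussed in the experiments.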
### _Combined line disconnections and reconnections_ We finally demonstrate that combined line disconnection and reconnections fall under a single linear system to solve. At first sight, the two kind of linear systems seem indeed compatible as they both fall under the same linear system form (2) as from equations (14) and (39). #### Iv-E1 ST Existence Let's first deal with the single combination case and generalize it to the multi-combinatorial case. **Combined one line disconnection one line reconnection case** As for the two lines reconnection case, we can start by reusing (11) with reverse roles: reference and target topologies become unitary change topologies and conversely. Here we start with \(l_{1}\) connected and disconnect it, and \(l_{2}\) disconnected that we reconnect. So the topology with both lines disconnected is \(T^{\gamma_{\downarrow_{1}}}\) and the topology with both lines connected is \(T^{\pi_{l_{2}}}\) such that: \[PF(T^{\gamma_{\downarrow_{1}}})=\alpha_{O_{1,2}}PF(T^{\pi_{l_{2}}})+\beta_{O_ {1,2}}^{l_{1}}PF(T^{tgt})+\beta_{O_{1,2}}^{l_{2}}PF(T^{ref}) \tag{40}\] The topology corresponding to \(\beta_{O_{1,2}}^{l_{1}}\), \(T^{tgt}\), is the one for which \(l_{1}\) only is disconnected, and similarly for \(\beta_{O_{1,2}}^{l_{2}}\). By rearranging equation (40), we retrieve the extended ST theorem: \[\begin{split} PF(T^{tgt})&=\frac{1}{\beta_{O_{1,2} }^{l_{1}}}PF(T^{\gamma_{\downarrow_{1}}})\frac{-\alpha_{O_{1,2}}}{\beta_{O_{1,2}}^{l_{2}}}PF(T^{\pi_{l_{2}}})\frac{-\beta_{O_{1,2}}^{l_{2}}}{\beta_{O_{1,2 }}^{l_{1}}}PF(T^{ref})\\ &=\alpha PF(T^{ref})+\beta^{l_{1}}PF(T^{\gamma_{\downarrow_{1}}} )+\beta^{l_{2}}PF(T^{\pi_{l_{2}}})\\ \text{with }\alpha=\frac{-\beta_{O_{1,2}}^{l_{2}}}{\beta_{O_{1,2} }^{l_{1}}}\text{, }\beta_{{}^{l_{1}}}^{l_{1}}=\frac{1}{\beta_{O_{1,2}}^{l_{1}}}\text{, }\beta_{{}^{l_{2}}}^{l_{2}}=\frac{-\alpha_{O_{1,2}}}{\beta_{O_{1,2}}^{l_{2}}} \end{split} \tag{41}\] As \(\alpha_{O_{1,2}}=1-\beta_{O_{1,2}}^{l_{1}}-\beta_{O_{1,2}}^{l_{2}}\)), we recover: \[\begin{split}\alpha&=\frac{(\alpha_{O_{1,2}}+\beta _{O_{1,2}}^{l_{1}}-1)}{\beta_{O_{1,2}}^{l_{1}}}\text{ }=\frac{\alpha_{O_{1,2}}}{\beta_{O_{1,2}}^{l_{1}}}+1-\frac{1}{\beta_{O_{1,2}} ^{l_{1}}}\\ &=1-\beta^{l_{1}}-\beta^{l_{2}}\end{split} \tag{42}\] **Multi combinationatorial case** By sucessive combinatorial recursion using extended ST for line disconnections (11), reconnections (15) and one reconnection - one disconnection combined (41), extended ST is derived similarly to the demonstration for multi line reconnections: \[\begin{split} PF(T^{tgt})=\alpha PF(T^{ref})+\sum_{l_{i}\in C} \beta_{C}^{l_{i}}PF(T^{\pi_{l_{i}}})\\ +\sum_{l_{j}\in O}\beta_{O}^{l_{j}}PF(T^{\gamma_{\downarrow_{j }}})\\ \text{with }\alpha=1-\sum_{l_{i}\in C}\beta_{C}^{l_{i}}-\sum_{l_{j} \in O}\beta_{O}^{l_{j}}\end{split} \tag{43}\] #### Iv-E2 Linear System to solve We reuse (43), first along line reconnection indices, and then along line disconnection indices, to derive our set of equations. As in (29) for line reconnections only, we have here: \[\begin{split}\text{for all }1\in\text{C, }pf_{l}^{T^{tgt}}+\sum_{l_{i}\in C,l_{i}\neq l}\beta_{C}^{l_{i}}pf_{l}^{T^{\pi_{l}}}&=0\\ \alpha pf_{l}^{T^{ref}}+\sum_{l_{i}\in C,l_{i}\neq l}\beta_{C}^{l _{i}}pf_{l}^{T^{\pi_{l}}l_{i}}+\sum_{l_{j}\in O}\beta_{O}^{l_{j}}pf_{l}^{T^{ \pi_{l}}l_{j}}&=0\end{split} \tag{44}\] In the second equation, all power flows for line \(l\) are null as the line is disconnected in all those states. 
But by reapplying the reasoning of cancelling flow equivalent model from equation (31) up to equation (39) we derive a first set of \(N_{C}\) independant equations: \[\begin{split}\beta_{C}^{l}\times 1+\sum_{l_{i}\in C,l_{i}\neq l} \beta_{C}^{l_{i}}(1-\frac{\Delta\Theta_{l}^{T^{\pi_{l}}=l_{i}}}{\Delta\Theta_{l }^{T^{ref}}})+\sum_{l_{j}\in O}\beta_{O}^{l_{j}}(1-\frac{\Delta\Theta_{l}^{T^{ \gamma_{l}}l_{j}}}{\Delta\Theta_{l}^{T^{ref}}})=1\end{split} \tag{45}\] Now for equations related to each line disconnections we also have: \[\begin{split}\text{for all }1\in\text{O, }pf_{l}^{T^{tgt}}&=0\\ \alpha pf_{l}^{T^{ref}}+\sum_{l_{i}\in C}\beta_{C}^{l_{i}}pf_{l}^ {T^{\pi_{l}}l_{i}}+\sum_{l_{j}\in O}\beta_{O}^{l_{j}}pf_{l}^{T^{\gamma_{l}}l_{ j}}&=0\\ \alpha pf_{l}^{T^{ref}}+\sum_{l_{i}\in C}\beta_{C}^{l_{i}}pf_{l}^ {T^{\pi_{l}}l_{i}}+\sum_{l_{j}\in O,l_{j}\neq l}\beta_{O}^{l_{j}}pf_{l}^{T^{ \gamma_{l}}l_{j}}&=0\end{split} \tag{46}\] The last equation stems from the fact that \(pf_{l}^{T^{\gamma_{l}}l}=0\). Those power flow quantities are known as the line is connected in these states. We hence derive a last set of \(N_{O}\) independant equations by replacing \(\alpha\) with \(\beta\)s: \[\begin{split}\beta_{O}^{l}\times 1+\sum_{l_{i}\in C}\beta_{C}^{l_{i}}(1- \frac{pf_{l}^{T^{\pi_{l_{i}}}}}{pf_{l}^{T^{ref}}})+\sum_{l_{j}\in O,l_{j} \neq l}\beta_{O}^{l_{j}}(1-\frac{pf_{l}^{T^{\gamma_{l}}l_{j}}}{pf_{l}^{T^{ ref}}})=1\end{split} \tag{47}\] Note than when \(pf_{l}^{ref}\) is non null, \(\frac{pf_{l}^{T^{\gamma_{j}}}}{pf_{l}^{T^{ref}}}\) can alternatively be used instead of \(\frac{\Delta\Theta_{l}^{\gamma_{j}}}{\Delta\Theta_{l}^{T^{ref}}}\) and conversely when \(\Delta\Theta_{l}^{T^{ref}}\) is non null. The overall linear system to solve is again of the form of (2). ### _Node splitting and merging topological changes_ A node splitting change can be modelled through a non-impedent virtual line disconnection [11] in between the two target nodes and conversely a node merging as a virtual line reconnection. Physically, you could represent this virtual line as a coupling breaker open or close between 2 bus bars that represents the nodes. Previous ST demonstrations and systems of linear equations directly apply to those changes as no hypothesis or usage of grid properties such as line impedance were made, with only reliance on grid state knowledge. For node merging, \(\Delta\theta_{nodes}\) between the two nodes to be merged can be used in the equations. For node splitting, the flow \(pf_{nodes}^{T}\) through the non-impedent line virtually connecting the two virtual nodes, not yet split, needs to be computed. It can be done based on line flows at the substation which results in a residual power flow at each virtual node: \[pf_{nodes}^{T}=\sum_{l_{i}\in node_{1}}pf_{l_{i}}^{T} \tag{48}\] ## III Experiments & Analysis In this section, we validate the accuracy of extended ST implementation and discuss its interests in practice through experiments. The source code for extended ST is publicly available in Github 1 and uses of Grid2Op framework [12]. Footnote 1: [https://github.com/marota/Topology_Superposition_Theorem.git](https://github.com/marota/Topology_Superposition_Theorem.git) ### _ST Numerical validation_ To evaluate the accuracy of the extended ST method, we select the combined actions of disconnecting and connecting lines, and splitting and merging buses, for the simple IEEE 14 grid as shown in table II (and run in getting started notebook). Configuration n\({}^{\circ}\)3 is the same as for Figure 1. 
For combinations of a single action type or of multiple action types, we solve the linear systems of the extended ST and find the displayed beta coefficients. Using the ST equation, we further retrieve the same flow values as the usual DC power flow in all those cases, with at least 4-decimal accuracy. ### _Interpretability of Combined Actions_ Given the complexity of interactions between power grid structures, it is important to understand the behaviour of a topological action. Such understanding makes the selection of corrective or preventive actions easier, which can facilitate and accelerate the operator's daily work. To understand how the extended superposition theorem helps in this problem, we select two disconnection use cases: * Two lines from separate clicks (\(l_{1-2}\) and \(l_{11-12}\)), which are electrically distant from each other. * Two lines from the same click (\(l_{1-2}\) and \(l_{1-3}\)), which are electrically close to each other. As mentioned previously, the power flow through the remaining transmission lines can be calculated using the \(\beta\) coefficients. The values of \(\beta\) for each case are given in Table III. Note that when the disconnected lines come from different clicks, the \(\beta\)s are close to unity (\(\beta\simeq 1\)), which means that the flow redistribution onto any remaining line amounts to disconnecting each line independently from the other. Consequently, when the values of \(\beta\) are close to unity, the actions performed are electrically distant and can be considered as independent actions. In the second case, where the actions are applied within the same click, the corresponding \(\beta\) values deviate from unity. This is due to the proximity of the topological changes, which interact with one another. Therefore, the interpretation of the \(\beta\)s can help clarify the independence of the applied actions or the predominance of some actions. The same interpretation can be used for line reconnections, bus splits or merges, and even for mixed actions between all these elements. ### _Remedial Action search_ When looking for remedial actions, one topological change might not be enough and a few of them may need to be combined. The topological changes in Table II could be candidate remedial actions. On this small grid, we can hence measure the speed-up factors obtained when computing their combined effect, compared to the state-of-the-art power flow solver LightSim2Grid [13]. As the grid gets bigger, the speed-up factor increases, since the computation time remains similar for the ST whereas solving the power flow requires a new adjacency matrix factorization. The speed-up however decreases as the number of unitary topological changes increases, as can be seen for configuration n\({}^{\circ}\)4. ### _Topological Action Security Analysis_ Security analysis is an application at the core of power system operations. It has been heavily optimized over the years, for instance by reusing the same matrix factorization for all N-1 contingency computations. However, when applying a topological change, one might need to assess its robustness and recompute the security analysis quickly. The extended ST can help reduce the computation time required for a security analysis combined with topological actions. Solving a smaller system of equations than the full power flow analysis yields quicker results than power-flow-based alternatives.
Therefore, for each grid under test, we select two random topological actions that do not break the grid. We then calculate the line outage security analysis for these two actions. Figure 3 compares the computation time for the resulting security analysis. LightSim solver, once the topological change is computed and applied once, resuses the corresponding matrix factorization to optimize computation. This is hence a challenging baseline. From a grid size of 100 buses or more, the proposed approach exhibits a higher computational speed compared to \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{n\({}^{\circ}\)} & \multicolumn{6}{c|}{topological change \(\tau\)} & \multirow{2}{*}{\(\begin{bmatrix}\text{speed}\\ \text{-up}\end{bmatrix}\)} \\ & \(l_{1-3}\) & & \(l_{9-10}\) & \(sub_{4}\) & \(sub_{5}\) & \multirow{2}{*}{\(\begin{bmatrix}\text{-up}\\ \text{-up}\end{bmatrix}\)} \\ & O & C & O & C & S & M & S & M \\ \hline 0 & 1.02 & & 1.09 & & & & & 5.0 \\ \hline 1 & 0.98 & & 0.92 & & & & 0 & 3.2 \\ \hline 2 & & & & & 0.62 & & 0.92 & 0 & 4.0 \\ \hline 3 & & & & & 1.69 & & 1.14 & 6.6 \\ \hline 5 & & 1.19 & 0.45 & & 0.69 & & & 1.59 & 2.2 \\ \hline \end{tabular} \end{table} TABLE II: For each combined topological change of single type (line outage or reconnection, bus splitting or merging) or multi-types on IEEE14, the ST coefficients and speedup factor compared to lightsim PF are computed numerically. \begin{table} \begin{tabular}{|c|c|} \hline \(\beta\) & Lines from different clicks & Lines from the same click \\ \hline \(\beta_{1}\) & 1.02 & 1.52 \\ \(\beta_{2}\) & 1.00 & 1.63 \\ \hline \end{tabular} \end{table} TABLE III: The coefficient of \(\beta\)s for disconnected lines in different and the same clicks. power-flow simulators and is scaling linearly in the number of contingencies after this stage. We can observe that when considering a similar number of topological changes (a contingency plus topological action), the proposed technique solves an identical equation system regardless of the grid's size, explaining this scalability. It becomes at least an order of magnitude faster for grids of 1000 buses and more. The computation time for security analysis can be improved through parallelization. This is achievable due to the independence of equation systems that need to be solved for each line outage simulation. Additionally, the interpretability of \(\beta\) parameters allows for further enhancements in the computation time of security analysis. Topological actions locally change the power flow distribution. If full grid topology information is available, a straightforward heuristic can leverage this aspect by commencing the computation of security analysis for the nearest electrical lines where the topological action was applied and subsequently ending the computation when all \(\beta\) values equal 1. One can then reduce the range of resimulating outage lines to consider when a topological action is applied. We could expect to probably reduce the refresh of security analysis to about 50 contingencies in general, and not the entire grid. On the IEEE 118 grid, this could account for an increased speed-up of a factor 3 for instance, by leveraging all aspect of extended ST. This here ends our short tour of first application examples of extended ST. ## IV Conclusion In this paper, we have demonstrated the existence and unicity of an extended superposition theorem for all type of unitary topological changes and their mixture. 
We have seen the speed-up and interpretability it can bring to power flow computations and their analysis. In particular, applications such as remedial action search and security analysis can already benefit from it. We believe it can be of very generic use and serve as a foundation for improvements in many applications, as well as be integrated into optimization formulations such as optimal power flow. Future work will aim at revisiting some recent applications that involve topological changes in light of the extended ST, such as grid segmentation [15, 16], the leap net power flow proxy [17], or topological expert systems [18]. This could result in a better interpretation of choices or results, and possibly help improve the respective implementations.
2309.15459
GAMMA: Graspability-Aware Mobile MAnipulation Policy Learning based on Online Grasping Pose Fusion
Mobile manipulation constitutes a fundamental task for robotic assistants and garners significant attention within the robotics community. A critical challenge inherent in mobile manipulation is the effective observation of the target while approaching it for grasping. In this work, we propose a graspability-aware mobile manipulation approach powered by an online grasping pose fusion framework that enables a temporally consistent grasping observation. Specifically, the predicted grasping poses are online organized to eliminate the redundant, outlier grasping poses, which can be encoded as a grasping pose observation state for reinforcement learning. Moreover, on-the-fly fusing the grasping poses enables a direct assessment of graspability, encompassing both the quantity and quality of grasping poses.
Jiazhao Zhang, Nandiraju Gireesh, Jilong Wang, Xiaomeng Fang, Chaoyi Xu, Weiguang Chen, Liu Dai, He Wang
2023-09-27T07:52:53Z
http://arxiv.org/abs/2309.15459v2
# GAMMA: Graspability-Aware Mobile MAnipulation ###### Abstract Mobile manipulation constitutes a fundamental task for robotic assistants and garners significant attention within the robotics community. A critical challenge inherent in mobile manipulation is the effective observation of the target while approaching it for grasping. In this work, we propose a graspability-aware mobile manipulation approach powered by an online grasping pose fusion framework that enables a temporally consistent grasping observation. Specifically, the predicted grasping poses are online organized to eliminate the redundant, outlier grasping poses, which can be encoded as a grasping pose observation state for reinforcement learning. Moreover, on-the-fly fusing the grasping poses enables a direct assessment of graspability, encompassing both the quantity and quality of grasping poses. This assessment can subsequently serve as an observe-to-grasp reward, motivating the agent to prioritize actions that yield detailed observations while approaching the target object for grasping. Through extensive experiments conducted on the Habitat and Isaac Gym simulators, we find that our method attains a good balance between observation and manipulation, yielding high performance under various grasping metrics. Furthermore, we discover that the incorporation of temporal information from grasping poses aids in mitigating the sim-to-real gap, leading to robust performance in challenging real-world experiments. ## I Introduction Autonomous mobile manipulation has been an essential research area in robotics [1, 2], leading to diverse applications, _e.g._, manufacturing, warehousing, construction, and household assistance [3, 4, 5, 6]. For research on mobile manipulation, a challenging but popular task setting is to require the agent to actively observe and explore an unseen environment with the goal of manipulating a target object. Originated from the unseen nature of the environment, the agent can't directly plan a trajectory to reach and grasp the object. Instead, its has to rely on online observations and scene priors to make a success, posing many research questions to the area. Many existing works tackle this problem via combining 3D reconstruction and scene geometry analysis with motion planning [7, 8, 9]. However, such approaches usually suffer from huge computational costs for modeling the scene geometry and entail complicated heuristic designs tailored to specific robots. Recently, reinforcement learning (RL) based approaches [10, 11, 12, 13] have gained more attention due to their simplicity and efficiency. Exemplar works [11, 3] use visibility and reachability as scene priors, which can drive the agent to observe and approach the target object. They propose to learn and use them in policy's input states as well as reward, which has significantly improved the policy performance. In this work, we focus on an RL-based approach and propose a novel scene prior, _graspability_, to advance mobile manipulation policy learning. We define graspability as a complete set of valid grasping poses of the target object. Compared with reachability and visibility, which provide circuitous guidance to the agent for graspings, graspability offers more direct and informative guidance for effective grasping guidance. 
Note that graspability contains the full information of the valid target object graspings, online estimating graspability in an unseen environment is highly non-trivial, _e.g._, online observations during mobile manipulation often include many occlusions as well as large overlaps, leading to noises and redundancy in grasping pose predictions [14]. We thus propose Fig. 1: We present a graspability-aware mobile manipulation approach powered by an online grasping pose fusion framework that enables a temporally consistent grasping observation and efficient grasping. an online grasping pose fusion module, which dynamically fuses the redundant grasping poses and removes the outlier poses. This fusion process yields high-quality graspability estimation that achieves high precision and recall of valid grasping poses. To facilitate agent learning, we propose the following two ways to fully utilize the estimated graspability. First, we propose to encode graspability into states and use it in the policy input, endowing the agent with the awareness of grasping goals. We find that our graspability-aware agent can thus learn to move its base and arm more intelligently. Second, we propose to use the number of grasps and the distance-to-grasp information in graspability as RL reward, encouraging the agent to gain more observations of valid grasping poses. We also introduce a weight schedule that combines these two rewards to balance the observation goal and the grasping goal. This reward motivates the agent to prioritize extensive observations in the initial stages, subsequently shifting its focus to grasp the target object. Through extensive experiments on two mainstream simulators, Habitat [15] and Isaac Gym [16], which include a diverse range of environments and objects, we demonstrate that our method outperforms mainstream methods on both abstract grasping metric and realistic grasping pose metric. Moreover, real-world evaluations of our approach further showcase the robustness and effectiveness of our methodology. _The code will be released to benefit the community._ In summary, the contributions of our work include: * We propose an online grasping fusion module to fuse predicted grasping poses to obtain temporally consistent grasping poses for erasability observation. * We design an observe-to-grasp reward to effectively encourage agents to execute actions that balance both observation and grasping. * We present a graspability-aware mobile manipulation RL system, achieving robust performance on both simulators and real-world environments. ## II Related work **Traditional mobile manipulation methods.** For decades, the field of robotics has experienced significant growth in the advancement of mobile manipulation methods [1, 2]. Traditional researches [17, 7, 18] leverage scene analysis and motion planning, aiming to devise strategies for efficient task execution. However, these approaches assume access to explicit secure information regarding the environments, such as detailed maps with obstacle locations [17, 9, 19], precise object coordinates [18, 7, 8, 20]. **Learning-based mobile manipulation methods.** Mobile manipulation agents are trained to possess the capability to observe and interact within various scenes. One of the primary capabilities of these agents is to observe the target [21, 22], achieved by encouraging the agent to obtain multiple observations of the target object. 
Another crucial capability is to maneuver its arm to approach the target object, often referred to as reachability [10, 11, 23]. Regarding graspingability, advanced methods do not rely on predicted grasping poses [24] or object pose estimation [25, 26], as these approaches lack temporal perception of graspability. In this paper, we introduce an online grasping pose fusion module to fuse the predicted grasping poses for encoding graspability states, enabling our method to be graspability-aware and showcasing enhanced performance. ## III Problem Statement and Method Overview **Mobile manipulation task.** Given a target object location \(p_{\text{goal}}\), the robot is tasked with navigating through an unknown environment to effectively approach and grasp the target object. We follow the mainstream setup presented in [11, 10]. The robot is equipped with a mobile base, an arm, and a parallel gripper. Two RGB-D cameras are mounted: one to the head of mobile base (\(D_{\text{head}},I_{\text{head}}\)) and the other to the gripper (\(D_{\text{grip}}\),\(f_{\text{grip}}\)), where \(d\) and \(c\) represent the depth image and color image, respectively. The robot utilizes a 3-DoF configuration for its mobile base in SE(3), coupled with an \((x+1)\)-DoF arm. In detail, \(x=6\) for the Spot arm and the Unitree Z1 arm, and \(x=7\) for the Fetch robots, further augmented by a 1-DoF gripper for object grasping. **Overview.** Figure 2 provides an overview of our proposed graspability-aware mobile manipulation approach. To facilitate graspability, our method processes the depth image \(I_{\text{grip}}^{d}\) and leverages an off-the-shelf grasping module, GSNet [14], to predict grasping poses (Section IV-A). These predicted grasping poses are then fused online (Section IV-B) and encoded as the graspability state \(\mathcal{S}_{\text{grasp}}\). Subsequently, our method can learn the graspability-aware mobile manipulation policy \(\pi(\mathcal{A}_{\text{base}},\mathcal{A}_{\text{arm}},\mathcal{A}_{\text{ grip}}|\mathcal{S}_{\text{grasp}},\mathcal{S}_{\text{visual}},\mathcal{S}_{\text{ state}})\) through reinforcement learning, incorporating visual information \(\mathcal{S}_{\text{visual}}\) and state information \(\mathcal{S}_{\text{state}}\) (Section V-A). In our method, the policy generates a 3-DoF SE(3) velocity for mobile base control \(\mathcal{A}_{\text{base}}\), a 6-DoF residual adjustment for current arm joints \(\mathcal{A}_{\text{arm}}\), and a 1-DoF switch to control the gripper \(\mathcal{A}_{\text{grip}}\). During the RL training process, a composite observe-to-grasp reward is employed, incorporating both the grasping observation reward, \(g_{\text{go}}\), and the gripper-to-grgraping poses reward, \(g_{\text{gg}}\). This reward system motivates the robot to prioritize actions based on meticulous observations, guiding it towards more optimal grasping poses (Section V-B). For ease of description, the notations used in this paper default to the world coordinate system. ## IV Grasapability Estimation To obtain accurate and complete graspability of an object, we propose to predicts grasping poses at each timesteps (see IV-A) and online fuses them together while eliminating invalid ones (see IV-B). ### _Grasping pose prediction_ During mobile manipulation, our graspability-aware agent constantly captures observations from RGB-D cameras and performs online predictions of grasping poses. 
At each time step \(t\), the agent moves according to the policy, and then online obtains new observations of the scene. To obtain grasping poses \(\mathcal{G}^{(t)}\) based on the agent's observation, we leverage GSNet [14], which is trained on a billion-scale real-world dataset [27] and demonstrates robust performance in novel scenes [28, 29]. Given the target object's location \(p_{\text{goal}}^{(t)}\), we can extract a sphere-shaped region-of-interest (RoI) point cloud \(P_{\text{roi}}^{(t)}\) from the 3D points \(P_{\text{grip}}^{(t)}\). These points are obtained by back-projecting the depth map \(D_{\text{grip}}^{(t)}\) and then be transformed to the world coordinate system. Therefore, the \(P_{\text{roi}}^{(t)}\) can be formulated as follows: \[P_{\text{roi}}^{(t)}=\{P_{\text{grip}}^{(t)}\in\mathbb{R}^{3}\mid\|P_{\text{grip }}^{(t)}-p_{\text{goal}}\|_{2}<\tau\}, \tag{1}\] where \(\tau\) represents the maximum distance from the target object's location. Empirically, we set \(\tau=10\) cm. Then we can utilize GSNet \(G(\cdot)\) to predict the grasping poses as follows: \[\mathcal{G}_{\text{pred}}^{(t)}=G(P_{\text{roi}}^{(t)})=\{q_{i}^{(t)},p_{i}^{( t)},s_{i}^{(t)}\}_{i=1:n}, \tag{2}\] \(q\), \(p\), and \(s\) correspond to the orientation (represented by quaternion), position, and score of the predicted poses, respectively. Additionally, \(n\) denotes the number of grasping poses, which may be zero if the quality of \(P_{\text{grip}}^{(t)}\) is low. ### _Online Grasping Fusion Module._ Note that these predicted grasping poses \(\mathcal{G}_{\text{pred}}^{(t)}\) from each timestep can be noisy due to occlusions and overlapping significantly with previous ones. As a result, directly combining all predicted grasping poses \(\mathcal{G}_{\text{pred}}^{(0)}\cup...\cup\mathcal{G}_{\text{pred}}^{(t)}\) to obtain graspability may lead to many errors and a high degree of redundancy, posing further challenges to policy learning. To address this, we introduce an online grasping fusion module, which is designed to maintain temporally consistent grasping pose observations. An illustration of the fusion module is presented in Figure 3. To store and track the predicted grasping poses, our fusion module first partitions the 3D space into a uniform grid in the center of the target object location \(p_{goal}\). Here, we utilize a \(64\times 64\times 64\) cube grid (3cm voxel). For efficient memory usage, we use an off-the-shelf indexing table algorithm [30] to dynamically allocate the voxels. For each grasping pose \(\mathcal{G}_{\text{fused}}^{(t)}=\{o_{i}^{(t)},p_{i}^{(t)},s_{i}^{(t)}\}_{i=1:n}\), its corresponding voxel \(v\) can be found by \(v_{x/y/z}^{min}<p_{i,x/y/z}^{(t)}<v_{x/y/z}^{max}\). where \(x/y/z\) represents the comparisons across the three axes. With this approach, each voxel stores a collection of grasping poses, allowing for easy identification of neighboring grasping poses within a specified 3D range. However, these grasping poses are unorganized (lacking spatial information to one another) and redundant, requiring further refinement by merging grasping poses. Recognizing that grasping poses within a voxel are more sensitive to orientation than to translation [29], our method retains only those grasping poses that exhibit a considerable angular difference compared to other existing poses within the same voxel. 
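For concreteness, a minimal NumPy sketch of the RoI extraction in Eq. (1) and of the voxel indexing just described is given below. The helper names and data layout are our own illustrative assumptions, and GSNet itself is treated as a black box; this is not the authors' released code.

```python
import numpy as np

TAU_ROI = 0.10   # 10 cm RoI radius around the target (tau in Eq. (1))
VOXEL = 0.03     # 3 cm voxels of the 64x64x64 grid centered on the goal

def roi_points(points_world, p_goal, tau=TAU_ROI):
    """Eq. (1): keep back-projected gripper-camera points within tau of the goal."""
    mask = np.linalg.norm(points_world - p_goal, axis=1) < tau
    return points_world[mask]

def voxel_key(p, p_goal, voxel=VOXEL, half=32):
    """Integer index of the grid cell (centered on the goal) that owns a grasp at position p."""
    return tuple((np.floor((p - p_goal) / voxel).astype(int) + half).tolist())

# grasp_bank maps a voxel key to the list of (position, quaternion, score) grasps
# currently kept in that cell; neighbouring cells give cheap range queries.
grasp_bank = {}
```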
Specifically, our method iteratively calculates the angle between new grasping poses \(\mathcal{G}_{\text{pred}}^{(t)}\) and fused grasping poses \(\mathcal{G}_{\text{fused}}^{(t-1)}\) belonging to the same voxel. If the angle exceeds \(\tau_{\text{angle}}\), the new grasping pose will be added to the voxel grasping set. Note that, we empirically set the \(\tau_{\text{angle}}=\pi/4\) through experiments. Moreover, grasping poses are mainly located within voxels that are close to the target objects, making the grasping pose query highly efficient. Given such a fused grasping grid, we can effi ciently traverse saved grasping poses to identify the \(\{q_{\text{fused}}^{(t-1)},p_{\text{fused}}^{(t-1)},s_{\text{fused}}^{(t-1)}\}\) with the smallest angle differences. If the angle is less than \(\tau_{\text{angle}}\), we utilize a weighted average with the weight determined by the score of grasping poses, represented as \(w=s_{\text{new}}/(s_{\text{fused}}+s_{\text{new}})\): \[p_{\text{fused}}^{(t)} =(1-w)p_{\text{fused}}^{(t-1)}+wp_{\text{pred}}^{(t)}, \tag{3}\] \[q_{\text{fused}}^{(t)} =\frac{\sin((1-w)\theta)}{\sin(\theta)}q_{\text{fused}}^{(t-1)}+ \frac{\sin(w\theta)}{\sin(\theta)}q_{\text{pred}}^{(t)},\] \[s_{\text{fused}}^{(t)} =s_{\text{fused}}^{(t-1)}+s_{\text{pred}}^{(t)},\] where the \(q_{\text{fused}}\) is renormalized to \(1\) to satisfy the quaternion constraint. Note that, the updated orientation \(q_{\text{fused}}\) may break the angle distance constraints, therefore new fused grasping pose will then be compared with other grasping poses within the same voxel until all the grasping poses satisfy the angle distance threshold. We find that the recursive grasping pose fusion is a rare occurrence due to the sparse distribution of grasping poses (typically containing approximately \(4\) grasping poses) in our implementation. This fusion operation leads to complete and accurate fused grasping pose results. **Valid grasping pose identification.** Due to occlusions arising from observational viewpoints, the prediction outcomes may include invalid grasping poses. These grasping poses could be distant from or oriented away from the target object, as depicted in Figure 3. To remove the outlier grasping poses, we follow the basic fact that the grasping pose should be densely and tangentially distributed along the target objects. Hence, we design a grasping pose consistency verification algorithm to evaluate the density of the grasping cluster. Specifically, our method connects neighboring grasping poses to form a 'grasping cluster' based on two criteria: (1) _Distance_: The grasping poses should locate in adjacent voxels. (2) _Orientation_: The angular difference between orientations should be less than \(1.5\tau_{\text{angle}}\). Upon establishing these connections, clusters containing fewer grasping poses than \(\tau_{\text{count}}\) are eliminated. These smaller clusters typically consist of outlier or error-prone poses. The resulting set preserves only the high-quality grasping poses for graspability observation. ## V Grasapability-aware policy learning. ### _Graspability states_ Taking the output grasping poses (a.k.a. graspablity) from the online grasping fusion module, we propose to encode them as a part of the state for RL. Note that this encoding is highly non-trivial because the grasping poses are unordered high-dimensional vectors. 
As a part of the pose, the quaternion \(q\) that represents grasping pose orientation is discontinuous among SO(3) manifold [31, 32], further creating difficulties. To tackle these challenges, we first convert quaternions into continuous 6D rotation representation \(F(\cdot)\)[31]; and then leverage an order-invariant neural network PointNet [33] (composed by three MLP layers \(M\) and a maxpooling layer) for state encoding: \[S_{\text{grasp}}=maxpooling\{M(p_{\text{fused}},F(q_{\text{fused}}),s_{\text{fused }})\}, \tag{4}\] A corner case we need to handle: at the beginning of mobile manipulation, the camera hasn't observed the target object yet, there is no grasping pose available. In this case, we need our graspability-aware approach to degenerate into being reachability-aware, directing the agent towards the target object's location. We thus leverage the same encoding method of graspability states (Equation 4), and substitute the grasping pose with the target object location \(p_{\text{goal}}\), uniform sampled orientation \(q_{\text{sample}}\) within SO(3), and a constant low score \(s_{\text{reach}}\) (set to \(0.1\)) to form the reachability observation \(S_{\text{reach}}\approx S_{\text{grasp}}\): \[S_{\text{reach}}=maxpooling\{M(p_{\text{goal}},F(q_{\text{sample}}^{(k)}),s_{ \text{reach}})\}_{k=1:K}. \tag{5}\] The uniformly sampled orientation \(q_{\text{sample}}^{(k)}\) (\(K=128\)) encourages the arm to reach the target object from any direction until the online grasping pose fusion module supplies valid grasping poses. Besides the graspability state, we also encoded the visual information \(\mathcal{S}_{\text{visual}}\) from both the front camera and gripper cameras and \(\mathcal{S}_{\text{state}}\) from joints state encoding. These follow the same methodology as described in [15]. Consequently, the graspability-aware policy can be expressed as \(\pi(\mathcal{A}_{\text{base}},\mathcal{A}_{\text{arm}},\mathcal{A}_{\text{ grip}}|\mathcal{S}_{\text{grasp}},\mathcal{S}_{\text{visual}},\mathcal{S}_{\text{ state}})\). ### _Observe-to-grasp reward for RL training._ With the graspability observation, the agent is required to make a balance between observing and grasping during mobile manipulation. To this end, we design an observe-to-grasp reward mechanism, consisting of a grasping observation reward \(r_{\text{go}}\) and a gripper-to-grasping pose reward \(r_{\text{gg}}\): \[r_{\text{go}}^{(t)} =\sum s_{\text{fused}}^{(t)}-\sum s_{\text{fused}}^{(t-1)}, \tag{6}\] \[r_{\text{gg}}^{(t)} =D_{\text{gg}}^{(t-1)}-D_{\text{gg}}^{(t)},\] where \(s_{\text{fused}}^{(t)}\in\mathcal{G}_{\text{fused}}^{(t)}\), and \(D_{\text{gg}}\) is the gripper to grasping pose evaluation function. For \(r_{\text{go}}\), we leverage the online grasping fusion module, directly assessing the enhancement of graspability observation. And for the gripper-to-grasping Fig. 3: An illustration of grasping pose fusion and valid grasping pose identification. New added grasps and previous fused grasping poses are indicated in green and blue, respectively. reward \(r_{\text{gg}}\), the gripper is encouraged to approach to fused grasping pose with a high score: \[D_{\text{gg}}=\min\{\ e^{-s_{\text{fused}}}(\beta_{1}\|p_{\text{ grip}}-p_{\text{fused}}\|_{2}+\beta_{2}\theta(q_{\text{grip}},q_{\text{fused}}))\}, \tag{7}\] where \(\theta(\cdot)\) compute the interval angle between two rotations (radius). And \(\beta_{1}\) and \(\beta_{2}\) are set to \(0.3\) and \(0.2\), respectively. 
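As an illustration of how the two terms in (6) and (7) can be evaluated from the fused grasp set, a minimal Python sketch is given below; the helper names are ours and the released implementation may differ.

```python
import numpy as np

BETA1, BETA2 = 0.3, 0.2   # weights used in Eq. (7)

def quat_angle(q1, q2):
    """Interval angle (radians) between two unit quaternions."""
    return 2.0 * np.arccos(np.clip(abs(np.dot(q1, q2)), 0.0, 1.0))

def gripper_to_grasp_distance(p_grip, q_grip, fused):
    """Eq. (7): score-discounted distance to the closest fused grasp.
    `fused` is an iterable of (position, quaternion, score) triples."""
    return min(
        np.exp(-s) * (BETA1 * np.linalg.norm(p_grip - p) + BETA2 * quat_angle(q_grip, q))
        for p, q, s in fused
    )

def reward_terms(fused_t, fused_prev, d_gg_t, d_gg_prev):
    """Eq. (6): observation gain r_go and approach gain r_gg at step t."""
    r_go = sum(s for _, _, s in fused_t) - sum(s for _, _, s in fused_prev)
    r_gg = d_gg_prev - d_gg_t
    return r_go, r_gg
```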
Finally, we can formulate our reward as: \[r_{\text{o2g}}^{(t)} =(1-\sigma)r_{\text{go}}^{(t)}+\sigma r_{\text{gg}}^{(t)}, \tag{8}\] \[\sigma =\frac{1}{1+e^{0.5-t/t_{\text{max}}}},\] and the \(\sigma\) is a logistic sigmoid function related to the execution steps. This approach ensures that the observe-to-grasp reward initially encourages the agent to observe, and as more steps are taken, gradually shifts to promoting grasping actions. Such a reward is dense and adaptive, guiding the agent's learning more effectively. In addition to the observe-to-grasp reward, we also leverage a sparse success reward \(r_{\text{success}}=10\) (\(D_{\text{ee}}<15\)cm), a slack penalty \(r_{\text{slack}}=10^{-2}\) and a force penalty \(r_{\text{force}}=10^{-4}\). These additional rewards enhance the stability of the learning process. **Implementation details.** We use Habitat 2.0 as our training simulator. Our method predicts sample actions every 20 steps and uses Proximal Policy Optimization (PPO) [34] for agent training. Given the substantial overlap between consecutive frames for increased efficiency, our method predicts grasping poses every \(10\) frames. We uniformly sample \(128\) fused grasping poses (allowing repetition if the number of fused grasping poses is fewer than \(128\)) for \(\mathcal{S}_{\text{grasp}}\) (Equ. 4). In the absence of grasping poses, we resort to using \(\mathcal{S}_{\text{reach}}\) (Equ. 5). Any parameters not detailed in this paper are adopted from [15]. ## VI Experiments ### _Experimental setup_ **Synthetic environment setup.** We evaluate our method on the Habitat 2.0 simulator [15] and Isaac Gym [16]. The Habitat 2.0 features photo-realistic reconstructions of apartment scenes from ReplicaCAD [15], stuffed by objects from YCB dataset [36]. We use both two datasets on Habitat, including 1000 Habitat episodes and a self-build 1,000 challenging episode dataset with cluttered object layouts (approximately 10 objects on each receptacle). For Isaac Gym, we create a mobile manipulation environment similar to [11]. _The episode data will be released to the public._ **Real-world environment setup.** We deploy the B1+Z1 robot in the real world to grasp the target object from a cluttered receptacle (with 4-6 objects) while performing obstacle avoidance. In detail, We mount an Azure Kinect DK on the head of the B1 robot dog and a Realsense D415 on the gripper of the Z1 arm. During the experiments, we make use of ORB-SLAM3 [37] to obtain 6D grasping poses based on the gripper camera observations. Note that the extrinsic parameters between the cameras and the arm, as well as the robot base, are pre-calibrated. **Baselines.** Given the intricacy of the task, ensuring a fair comparison among all mainstream methods is a formidable challenge. Hence, we focus to compare our method (GAMMA) with other methods that are most pertinent to our approach and have been assessed within the same simulator environment. Specifically, we consider: 1. Multi-skill Mobile Manipulation **(M3)**[35]: A modular method which incorporates mobility for enhanced flexibility in object interactions. 2. Habitat-Baselines **(HB)**: A standard baseline method provided by Habitat 2.0. 3. Reachability-aware policy **(ReachMM)**: An approach that leverages the reachability encoded state \(\mathcal{S}_{\text{reach}}\) as defined in Equ. 5. 4. Non-fusion graspability-aware policy **(GAMMA without fusion)**: A method which doesn't make use of our proposed OGFM module and predicts grasp poses for every frame. 
**Metrics.** We measure the episode when the grasp action is called. In simulator, we consider two following metrics: (1) _Gaze Success Rate_ (GazeSR), an episode is deemed successful if the distance between the arm camera position and the target object position is within the \(15\) cm and the angle between the camera ray and the object-to-camera ray is less than \(10^{\circ}\). (2) _Grasp Success Rate_ (GraspSR), Success is achieved if the gripper's pose closely matches any densely annotated grasping pose, with deviations less than \(10\) cm in distance and \(10^{\circ}\) in angle. For real-world experiments, we evaluate the _Success Rate_ (SR) based on whether the object is grasped without being dropped. ### _Results_ **Comparison on Habitat simulator.** To comprehensively evaluate our method, we conduct extensive experiments in the Habitat simulator. These results are demonstrated in Fig. 4: Simulation setup. First row: Unitree B1 + Z1 robot dog in Isaac Gym. Second row: Fetch Robot (left) and Unitree B1 + Z1 robot dog (right) in Habitat Simulator. Table I. Here, we find that our method achieves state-of-the-art performance in all challenging settings. Moreover, we find other methods have an apparent performance drop from non-cluttered episodes to cluttered episodes because the cluttered scenes required more accurate gripper poses to avoid mistakingly grasping other objects. Our methods benefit from temporally consistent grasping poses and directly learn how to drive the gripper to the grasping pose, showing only a small performance drop. Another interesting finding is that the GazeSR and GraspSR have large performance gap in many methods that use abstract grasp (M3 [35], HB [15], ReachMM), which proved that the grasping poses perform better to motivate grasping strategy. **Comparison in Isaac Gym and real-world environments.** We deploy our policy, trained on the Isaac Gym simulator, to real-world environments (Unitree B1 + Z1 arm). The results on both Isaac Gym and the real-world environment are reported in Table II. We evaluate two widely-demanded skills grasping within cluttered objects and avoiding obstacles simultaneously. Our findings indicate that directly deploying our method yields robust performance in real-world settings. Furthermore, when comparing with the GAMMA without fusion, we observed a performance drop from GazeSR to GraspSR, a trend also identified in the Habitat [15] environment(Table I). This supports the importance of complete and accurate graspability in RL training. _Please refer to the attached video for more details of the real-world experiment._ **Mobile manipulation process analysis.** We plot two main observations related to graspability-aware mobile manipulation: the number of grasping poses and the distance from the gripper to these poses (Equ. 7). The results, showcased in Fig.6, originate from the Habitat environment. These data compellingly indicate that our policy adeptly directs the agent to identify an increased number of grasping poses while simultaneously nearing those specified positions effectively. Additionally, to elucidate the significance of the graspability state, we color-code the integrated grasping pose and employ a fine voxel size of \(1\)cm to ensure a detailed visualization. Throughout the mobile manipulation process, it's evident that the agent gains confidence while observing and approaching the grasping poses. 
**Runtime and memory analysis.** The online grasping fusion module is highly efficient, both in terms of memory usage and execution speed. It demands less than 0.1 GB of memory for a single scene and operates at real-time frame rates. For training, our method necessitates 36 GPU hours on an A100 to attain state-of-the-art performance. ## VII Conclusions We introduce a graspability-aware mobile manipulation policy, enabling agents to achieve robust and accurate mobile grasping. This capability is powered by an online grasping pose fusion module, which fuses online predicted grasping poses. This leads to temporally consistent grasping pose observations, facilitating learning graspability. Our method \begin{table} \begin{tabular}{l|c c|c} \hline \multirow{2}{*}{Methods} & Isacac Gym Simulator & Real-world Env. \\ \cline{2-3} & GazeSR & GraspSR & SR \\ \hline \hline GAMMA without fusion & 89.0\% & 29.6\% & 53.3\% \\ GAMMA (Ours) & **96.6\%** & **86.6\%** & **73.3\%** \\ \hline \end{tabular} \end{table} TABLE II: Comparisons on Isaac Gym simulator and real-world experiments under challenging conditions (with obstacles and cluttered objects). \begin{table} \begin{tabular}{l c c c c c c|c c} \hline \multirow{2}{*}{Methods} & \multicolumn{4}{c|}{Fetch Robot} & \multicolumn{4}{c}{Unitree B1+Z1 arm} \\ \cline{2-9} & \multicolumn{2}{c|}{Non-cluttered env.} & \multicolumn{2}{c|}{Cluttered env.} & \multicolumn{2}{c|}{Non-cluttered env.} & \multicolumn{2}{c}{Cluttered env.} \\ \cline{2-9} & GazeSR & GraspSR & GazeSR & GraspSR & GazeSR & GraspSR & GraspSR \\ \hline **HB[15]** & 45.8 & - & **22.0** & - & **57.0** & - & **31.9** & - \\ M[35] & 55.2 & 49.4 & 43.6 & 39.5 & - & - & - & - \\ ReachMM & 36.5 & 29.1 & 18.2 & 11.3 & 39.1 & 29.4 & 21.5 & 12.7 \\ GAMMA without fusion & 10.3 & 7.2 & 8.1 & 3.2 & 10.9 & 7.3 & 8.1 & 2.1 \\ **GAMMA (Ours)** & **67.3** & **62.4** & **60.7** & **57.1** & **71.0** & **69.2** & **66.3** & **64.5** \\ \hline \end{tabular} \end{table} TABLE I: Quantitative comparison and ablation study on the Habitat simulator on both non-cluttered and cluttered episodes. Fig. 5: Visualization of the quality of the fused grasping poses during mobile manipulation. The grasping are color-coded based on their graspability feature (to red the better). Fig. 6: Plots of two metrics: (1) number of grasping poses and (2) distance to grasping poses of our method and ReachMM while performing mobile manipulation. demonstrates superior performance on Habitat. We also deploy our approach on the Unitree B1 with Z1 arm for real-world experiments, further showcasing the robustness of our methodology. In the future, we would like to explore the potential of this framework in highly dynamic environments or for long-horizon tasks.
2303.18126
Euler's variational approach to the elastica
The history of the elastica is examined through the works of various contributors, including those of Jacob and Daniel Bernoulli, since its first appearance in a 1690 contest on finding the profile of a hanging flexible cord. Emphasis will be given to Leonhard Euler's variational approach to the elastica, laid out in his landmark 1744 book on variational techniques. Euler's variational approach based on the concept of differential value is highlighted, including the derivation of the general equation for the elastica from the differential value of the first kind, from which nine shapes adopted by a flexed lamina under different end conditions are obtained. To show the potential of Euler's variational method, the development of the unequal curvature of elastic bands based on the differential value of the second kind is also examined. We also revisited some of Euler's examples of application, including the derivation of the Euler-Bernoulli equation for the bending of a beam from the Euler-Poisson equation, the pillar critical load before buckling, and the vibration of elastic laminas, including the derivation of the equations for the mode shapes and the corresponding natural frequencies. Finally, the pervasiveness of Euler's elastica solution found in various studies over the years as given on recent reviews by third parties is highlighted, which also includes its major role in the development of the theory of elliptic functions.
Sylvio R. Bistafa
2023-03-31T15:10:56Z
http://arxiv.org/abs/2303.18126v1
###### Abstract The history of the elastica is examined through the works of various contributors, including those of Jacob and Daniel Bernoulli, since its first appearance in a 1690 contest on finding the profile of a hanging flexible cord. Emphasis will be given to Leonhard Euler's variational approach to the elastica, laid out in his landmark 1744 book on variational techniques. Euler's variational approach based on the concept of differential value is highlighted, including the derivation of the general equation for the elastica from the differential value of the first kind, from which nine shapes adopted by a flexed lamina under different end conditions are obtained. To show the potential of Euler's variational method, the development of the unequal curvature of elastic bands based on the differential value of the second kind is also examined. We also revisited some of Euler's examples of application, including the derivation of the Euler-Bernoulli equation for the bending of a beam from the Euler-Poisson equation, the pillar critical load before buckling, and the vibration of elastic laminas, including the derivation of the equations for the mode shapes and the corresponding natural frequencies. Finally, the pervasiveness of Euler's elastica solution found in various studies over the years as given on recent reviews by third parties is highlighted, which also includes its major role in the development of the theory of elliptic functions. **Euler's variational approach to the elastica** Sylvio R. Bistafa [email protected] Retired, Polytechnic School, University of São Paulo, Brazil **Keywords: elastica, elastic curves, calculus of variations, Euler-Bernoulli equation, Euler-Poisson equation, vibration of beams, elliptic integrals, elliptic functions** ## 1 Introduction In May 1690 Jacob Bernoulli (1655-1705) started a contest on finding the profile of a hanging flexible cord. By not specifying any condition which limits the problem to the nonelastic case, he challenged the mathematicians of the time to find the shape of a hanging elastic rope, on the belief that if one finds the curvature for the elastic case, then that of the nonelastic case can be obtained from it. In the same year, Gottfried Leibniz (1646-1716) replied to Jacob Bernoulli's challenge, saying that he would consider instead the shape that a thread hanging from two points takes as it bends under its own weight, on the assumption that the thread, like a chain, keeps the same length and is neither stretched nor shortened as a normal thread would do. This assumption simplified the original challenge into finding the catenary curve. If the rope is elastic in all its parts, the problem turns into finding the _elastica_. On the other hand, if the rope is as rigid as a chain, it turns into finding the catenaria. Footnote a: Latin for a thin strip of elastic material. From then on, the focus of the participating mathematicians was to find the curvature of the catenaria, all except Jacob Bernoulli, who remained with his original problem. In the June 1691 issue of the _Acta Eruditorum_, three solutions were published: one by Jacob's brother Johann
2309.13028
Minimization of energy functionals via FEM: implementation of hp-FEM
Many problems in science and engineering can be rigorously recast into minimizing a suitable energy functional. We have been developing efficient and flexible solution strategies to tackle various minimization problems by employing finite element discretization with P1 triangular elements [1,2]. An extension to rectangular hp-finite elements in 2D is introduced in this contribution.
Miroslav Frost, Alexej Moskovka, Jan Valdman
2023-09-22T17:43:09Z
http://arxiv.org/abs/2309.13028v1
# Minimization of energy functionals via FEM: implementation of hp-FEM ###### Abstract Many problems in science and engineering can be rigorously recast into minimizing a suitable energy functional. We have been developing efficient and flexible solution strategies to tackle various minimization problems by employing finite element discretization with P1 triangular elements [1, 2]. An extension to rectangular hp-finite elements in 2D is introduced in this contribution. Keywords: hp finite elements, energy functional, trust-region methods, p-Laplace equation, hyperelasticity, MATLAB code vectorization. ## 1 Introduction The finite element method (FEM) can be efficiently used to minimize energy functionals appearing in various types of problems. The simplest P1 finite elements were implemented in MATLAB for the discretization of the p-Laplace energy functional in [1]. We introduced several vectorization techniques in [2] for an efficient evaluation of the discrete energy gradient and, additionally, applied these techniques for the minimization of hyperelasticity in 2D and 3D. Recently, our approach has been successfully applied to 2D/3D problems in solid mechanics, namely the resolution of elastoplastic deformations of layered structures [3] or superelastic and pseudoplastic deformations of shape-memory alloys [4]. The hp-FEM is an advanced numerical method based on FEM dating back to the pioneering works of I. Babuska, B. A. Szabo and co-workers in the 1980s, e.g. [5] and [6]. It provides increased flexibility and convergence properties compared to the "conventional" FEM. There are recent MATLAB implementations including triangular elements [12] and rectangular elements [7]. In this paper, we combine the energy evaluation techniques of [2] and the hp-FEM implementation [7]. The trust-region (TR) method [8] is applied for the actual minimization of energies. It is available in the MATLAB Optimization Toolbox and was found to be very efficient in the comparison performed in [2]. It requires the gradient of a discrete energy functional and also allows specifying a sparsity pattern of the corresponding Hessian matrix, which is directly given by a finite element discretization. We employ two different options: * option 1: the TR method with the gradient evaluated directly via its explicit form and the specified Hessian sparsity pattern. * option 2: the TR method with the gradient evaluated approximately via central differences and the specified Hessian sparsity pattern. We demonstrate the capabilities of our implementation on two particular problems in 2D: the scalar p-Laplace problem and the vector hyperelasticity problem. The underlying MATLAB code is available at [https://www.mathworks.com/matlabcentral/fileexchange/125465](https://www.mathworks.com/matlabcentral/fileexchange/125465) for free download and testing. Running times were obtained on a MacBook Pro (M2 Pro processor, 2023) with 16 GB memory running MATLAB R2023a. ## 2 Hierarchical shape basis functions Given a reference element \(T_{ref}=[-1,1]^{2}\) and \(p\in\mathbb{N}\), we denote by \(S^{p}(T_{ref})\) the space of all (local) shape basis functions of polynomial degree less than or equal to \(p\) defined on \(T_{ref}\) and by \(n_{p,ref}\) their number. The construction of these functions is based on Legendre polynomials and is described in detail in [6]. Generally, there are three types of shape basis functions shown in Fig. 1: Figure 1: Example of nodal, edge, and bubble shape basis functions on \(T_{ref}\).
Generated by a modification of the ‘Benchmark 2’ from [7]. * the nodal (Q1) shape basis function of the first order is nonzero in one particular node and vanishes in all other nodes; * the edge shape basis function of the \(p\)-th order is nonzero on one particular edge and vanishes on all other edges; * the bubble shape basis function of the \(p\)-th order vanishes on the whole boundary of \(T_{ref}\). All shape basis functions on \(T_{ref}\) are sorted by the polynomial degree \(p\) and the type of shape function. Every local basis function is assigned a unique index \(m\). It is determined by the polynomial order \(p\) and an additional index \(s\) given by the local index of a node, edge or bubble. We assume a computational domain \(\Omega\) and its decomposition \(\mathcal{T}\) into quadrilaterals in the sense of Ciarlet [9]. We denote by \(|\mathcal{N}|\), \(|\mathcal{E}|\) and \(|\mathcal{T}|\) the number of nodes, edges and elements, respectively. The local shape basis functions are used for the construction of global ones defined on the whole \(\mathcal{T}\). We denote by \(S^{p}(\mathcal{T})\) the space of all global basis functions of polynomial degrees less or equal to \(p\) and by \(n_{p}\) their number. In [7] we introduced several key matrices providing the relation between the topology of \(\mathcal{T}\) and the corresponding global basis functions. The first matrix collects the indices of all global basis functions and their type, the second collects for every element the indices of global basis functions that are nonzero on that element, and the third stores for every element the signs of local basis functions (necessary for edge functions of an odd degree). Example 1: Quadrilaterals of the L-shape domain are shown in Fig. 2 (left) in which \(|\mathcal{N}|=21\), \(|\mathcal{E}|=32\), \(|\mathcal{T}|=12\). For \(p=2\) we have \(n_{p}=53\) global basis functions (21 nodal and 32 edge) and the Hessian sparsity pattern (right) can be extracted directly. Figure 2: A rectangular mesh (left) and the corresponding Hessian sparsity pattern for \(p=2\) hierarchical elements (right). The upper diagonal submatrix highlighted by the black color corresponds to Hessian sparsity for \(p=1\). ## 3 Models and implementation ### p-Laplace equation We are interested in a (weak) solution of the p-Laplace equation [10]: \[\begin{split}\Delta_{\alpha}u&=f\qquad\quad\text{in }\, \Omega\,,\\ u&=g\qquad\text{on }\,\partial\Omega\,,\end{split} \tag{1}\] where the p-Laplace operator is defined as \(\Delta_{\alpha}u=\nabla\cdot\left(\|\nabla u\|^{\alpha-2}\nabla u\right)\) for some power \(\alpha>1\) (the integer \(p\) denotes the polynomial degree of \(S^{p}(T_{ref})\)). The domain \(\Omega\in\mathbb{R}^{d}\) is assumed to have a Lipschitz boundary \(\partial\Omega\), \(f\in L^{2}(\Omega)\) and \(g\in W^{1-1/\alpha,\alpha}(\partial\Omega)\), where \(L\) and \(W\) denote standard Lebesque and Sobolev spaces. It is known that (1) represents an Euler-Lagrange equation corresponding to a minimization problem \[J(u)=\min_{v\in V}J(v)\,,\qquad J(v):=\frac{1}{\alpha}\int_{\Omega}\|\nabla v \|^{\alpha}\,\mathrm{d}\mathbf{x}-\int_{\Omega}f\,v\,\mathrm{d}\mathbf{x}\,, \tag{2}\] where \(V=W^{1,\alpha}_{g}(\Omega)=\{v\in W^{1,\alpha},v=g\text{ on }\partial\Omega\}\) includes Dirichlet boundary conditions on \(\partial\Omega\). The minimizer \(u\in V\) of (2) is known to be unique for \(\alpha>1\). It corresponds to the classical Laplace operator for \(\alpha=2\). 
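To make options 1 and 2 from the Introduction concrete before specializing to 2D, the following is a minimal sketch of how a discrete energy such as (2) can be handed to the trust-region method of the MATLAB Optimization Toolbox together with the Hessian sparsity pattern illustrated in Fig. 2. This is not the published code; the handles and names ('energy_and_gradient', 'hessian_sparsity_pattern', 'free_dofs', 'v0') are hypothetical placeholders.

```
% Minimal sketch (hypothetical names): trust-region minimization of a
% discrete energy with an explicit gradient and a prescribed Hessian
% sparsity pattern, in the spirit of options 1 and 2 of the Introduction.
HP   = hessian_sparsity_pattern(mesh, p);   % 0/1 sparse matrix, cf. Fig. 2
opts = optimoptions('fminunc', ...
    'Algorithm','trust-region', ...
    'SpecifyObjectiveGradient',true, ...    % objective returns [J, gradient]
    'HessPattern',HP, ...                   % pattern restricted to free dofs
    'Display','iter');

% objective restricted to the free (non-Dirichlet) degrees of freedom;
% for option 2 the gradient inside 'energy_and_gradient' would be
% assembled by central differences instead of its explicit form
obj = @(v) energy_and_gradient(v, mesh, params, free_dofs);

[v_min, J_min] = fminunc(obj, v0(free_dofs), opts);
```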
The analytical handling of (1) is difficult for general \(f\). The equation (1) in 2D (\(d=2\)) takes the form \[\nabla\cdot\left(\left((\partial_{1}u)^{2}+(\partial_{2}u)^{2}\right)^{\frac{ \alpha-2}{2}}\nabla u\right)=f\qquad\text{in }\,\Omega \tag{3}\] and the corresponding energy reads \[J(v):=\frac{1}{\alpha}\int_{\Omega}\left((\partial_{1}v)^{2}+(\partial_{2}v)^ {2}\right)^{\frac{\alpha}{2}}\,\mathrm{d}\mathbf{x}-\int_{\Omega}f\,v\, \mathrm{d}\mathbf{x}\,. \tag{4}\] The evaluation of integrals above is based on the Gaussian quadrature. We apply the number of Gauss points corresponding to the quadrature of order \(p+1\). This does not guarantee the exact quadrature, but it proved to be sufficient in our numerical tests. Figure 3 illustrates numerical solutions for the L-shape domain from Figure 2, for a constant \(f=-10\), \(\alpha=3\) and zero Dirichlet boundary condition on \(\partial\Omega\). Figure 3: Numerical solutions of the p-Laplacian with \(\alpha=3\) for a computational mesh with \(|\mathcal{T}|=192\) and polynomial bases for \(p=1\) (left) and \(p=4\) (right). Table 1 shows the performance for Q2 elements (corresponding to the choice \(p=2\)). We notice that computations using the explicitly evaluated gradient are faster than using the numerical gradient (via central differences). A comparison to Q1 elements (corresponding to the choice \(p=1\)) or P1 (triangular) elements of [2] is depicted in Fig. 4. We observe a lower number of needed degrees of freedom (dofs) for Q2 elements and slightly lower running times to achieve the same accuracy. Since the exact energy is not known in this example, we use \(J_{ref}\) as the smallest of all achieved energy values \(J(u)\) of Table 1 decreased by \(10^{-4}\). ### Hyperelasticity Boundary value problems in (non-linear) elastostatics provide examples of vector problem which can be directly dealt with our approach, see [2]. Deformation, \(\mathbf{v}(\mathbf{x})\), of a (hyper)elastic body spanning the domain \(\Omega\in\mathbb{R}^{d}\) subjected to volumetric force, \(\mathbf{f}(\mathbf{x})\), can be obtained by minimization of the corresponding energy functional, \(J\), which takes the form: \[J(\mathbf{v}(\mathbf{x}))=\int_{\Omega}W\big{(}\mathbf{F}(\mathbf{v}(\mathbf{ x}))\big{)}\,\mathrm{d}\mathbf{x}-\int_{\Omega}\mathbf{f}(\mathbf{x})\cdot \mathbf{v}(\mathbf{x})\,\mathrm{d}\mathbf{x}\,, \tag{5}\] where \(\mathbf{F}(\mathbf{v}(\mathbf{x}))=\nabla\mathbf{v}(\mathbf{x})\) is deformation gradient and \begin{table} \begin{tabular}{c r r|r r r|r r} & & \multicolumn{3}{c|}{explicit gradient} & \multicolumn{3}{c}{numerical gradient} \\ \hline level & \(|\mathcal{T}|\) & dofs & time [s] & iters & \(J(u)\) & time [s] & iters & \(J(u)\) \\ \hline 1 & 48 & 113 & 0.06 & 7 & -7.9209 & 0.07 & 7 & -7.9209 \\ 2 & 192 & 513 & 0.14 & 8 & -7.9488 & 0.18 & 8 & -7.9488 \\ 3 & 768 & 2177 & 0.49 & 10 & -7.9562 & 0.67 & 10 & -7.9562 \\ 4 & 3072 & 8961 & 1.73 & 12 & -7.9587 & 2.38 & 12 & -7.9587 \\ 5 & 12288 & 36353 & 8.31 & 13 & -7.9596 & 10.60 & 13 & -7.9596 \\ 6 & 49152 & 146433 & 80.81 & 13 & -7.9600 & 136.92 & 14 & -7.9600 \\ \end{tabular} \end{table} Table 1: Performance of p-Laplacian for \(\alpha=3\) and Q2 elements. Figure 4: Performance of p-Laplacian for \(\alpha=3\): comparison of elements. 
\[W(\mathbf{F})=C_{1}\big{(}I_{1}(\mathbf{F})-\dim-2\log(\det\mathbf{F})\big{)}+D_{1}(\det\mathbf{F}-1)^{2}\,, \tag{6}\] is the so-called compressible Neo-Hookean energy density with \(C_{1},D_{1}\) being material constants and \(I_{1}(\mathbf{F})=\|\mathbf{F}\|^{2}\) denoting the squared Frobenius norm; see [11] for details on the underlying continuum mechanics theory and its mathematically rigorous formulation. We assume the same benchmark problem as in [2]: a 2D hyperelastic domain given by a square \([0,2]\times[0,2]\) perforated by a disk with radius \(r=1/3\) in its center is subjected to a constant volumetric vector force \(\mathbf{f}=(-3.5\cdot 10^{7},-3.5\cdot 10^{7})\) acting in a bottom-left direction; zero Dirichlet boundary conditions are applied on the left and bottom edges. We assume the Young modulus \(E=2\cdot 10^{8}\) and the Poisson ratio \(\nu=0.3\). We consider arbitrary, although mutually consistent, physical units. For illustration, Fig. 5 shows examples of the corresponding deformed mesh together with the underlying Neo-Hookean density distribution. Fig. 6 depicts a comparison of P1, Q1 and Q2 elements. Similarly to Fig. 4, Q2 elements are superior to Q1 and P1 in accuracy with respect to the number of dofs; however, we observe only a small improvement with respect to the evaluation times. ### Remarks on 2D implementation As an example, we introduce the following block that describes the evaluation of the p-Laplace energy:
```
1 function e = energy(v,mesh,params)
2   v_elems = v(e2d_elems);                              % values on elements in hp basis
3   F_elems = evaluate_F_scalar_2D(v_elems,Dphi_elems);  % grads
4   densities_elems = density_pLaplace_2D(F_elems,w,alpha);
5   e = sum(areas_elems.*densities_elems) - b_full'*v;   % energy
6 end
```
Figure 5: Deformation and the corresponding Neo-Hookean density distributions for the 2D hyperelastic problem: a mesh with \(|\mathcal{T}|=512\) and Q4 elements (left) and a mesh with \(|\mathcal{T}|=2048\) and Q3 elements (right). The structure corresponds exactly to the code of [2]. The main difference is that some objects inside '**evaluate_F_scalar_2D**' and '**density_pLaplace_2D**' are higher-dimensional. This is because more Gauss points (we denote by \(n_{ip}\) their number) are needed for the integration of higher polynomial functions in the hierarchical basis. The function '**evaluate_F_scalar_2D**' is provided below
```
1 function F_elems = evaluate_F_scalar_2D(v_elems,Dphi)
2   v_elems3D = reshape(v_elems',size(v_elems,2),1,size(v_elems,1));
3   v_x_elems = Dphi{1}.*v_elems3D;             % np_ref x nip x ne
4   v_y_elems = Dphi{2}.*v_elems3D;             % np_ref x nip x ne
5   F_elems = cell(1,2);
6   F_elems{1,1} = squeeze(sum(v_x_elems,1));   % nip x ne
7   F_elems{1,2} = squeeze(sum(v_y_elems,1));   % nip x ne
8 end
```
and evaluates the following objects: * (lines 3-4) 3D matrices '**v_x_elems**' and '**v_y_elems**' of size \(n_{p,ref}\times n_{ip}\times|\mathcal{T}|\) store for every element the partial derivatives of local basis functions in all \(n_{ip}\) Gauss points multiplied by the corresponding values of \(v\). * (lines 6-7) the final matrices of size \(n_{ip}\times|\mathcal{T}|\) storing the partial derivatives of \(v\) in all Gauss points on every element. The function '**density_pLaplace_2D**' evaluating energy densities on every element is introduced below
```
1 function densities = density_pLaplace_2D(F,w,alpha)
2   densities = (1/alpha)*sqrt(F{1,1}.^2+F{1,2}.^2).^alpha;
3   densities = densities'*w;
4 end
```
and evaluates the following objects:
* (line 2) the matrix of size \(n_{ip}\times|\mathcal{T}|\) storing the p-Laplace energy densities in all Gauss points of every element. * (line 3) the vector of length \(|\mathcal{T}|\) containing energy densities on all elements. Similarly, the majority of the original P1 functions related to the evaluation of the hyperelastic energy are extended to the higher dimension in the same way; a minimal illustrative sketch for the Neo-Hookean density is given below. ## 4 Conclusions and Outlook The hp-FEM for 2D rectangular elements was successfully incorporated into our vectorized MATLAB code and its improved convergence performance was demonstrated on two examples. This work contributes to our long-term effort in developing vectorized finite element-based solvers for energy minimization problems. Since many such problems emerge in science and engineering, the code is designed in a modular way so that various modifications (e.g., in functional types or boundary conditions) can be easily accommodated. Our future research directions include implementing the hp-FEM in 3D or tuning the applied minimization algorithms.
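As a complement to the implementation remarks in Section 3.3, the following is a minimal vectorized sketch of how the compressible Neo-Hookean density (6) might be evaluated at the Gauss points, mirroring 'density_pLaplace_2D'. It is not the published code; the 2x2 cell layout of 'F' and all names are assumptions made for illustration.

```
function densities = density_NeoHookean_2D(F, w, C1, D1)
% Sketch only (not the published code): compressible Neo-Hookean density (6)
% evaluated in all Gauss points of every element, analogous to density_pLaplace_2D.
% F is assumed to be a 2x2 cell array with F{i,j} of size nip x ne.
  detF = F{1,1}.*F{2,2} - F{1,2}.*F{2,1};                % det of deformation gradient
  I1   = F{1,1}.^2 + F{1,2}.^2 + F{2,1}.^2 + F{2,2}.^2;  % squared Frobenius norm
  W    = C1*(I1 - 2 - 2*log(detF)) + D1*(detF - 1).^2;   % dim = 2 in (6)
  densities = W'*w;                                      % Gauss quadrature per element
end
```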
2309.04554
Continuous-wave second-harmonic generation in the far-UVC pumped by a blue laser diode
Far-UVC light in the wavelength range of 200-230 nm has attracted renewed interest because of its safety for human exposure and effectiveness in inactivating pathogens. Here we present a compact solid-state far-UVC laser source based on second-harmonic generation (SHG) using a low-cost commercially-available blue laser diode pump. Leveraging the high intensity of light in a nanophotonic waveguide and heterogeneous integration, our approach achieves Cherenkov phase-matching across a bonded interface consisting of a silicon nitride (SiN) waveguide and a beta barium borate (BBO) nonlinear crystal. Through systematic investigations of waveguide dimensions and pump power, we analyze the dependencies of Cherenkov emission angle, conversion efficiency, and output power. Experimental results confirm the feasibility of generating far-UVC, paving the way for mass production in a compact form factor. This solid-state far-UVC laser source shows significant potential for applications in human-safe disinfection, non-line-of-sight free-space communication, and deep-UV Raman spectroscopy.
Eric J. Stanton, Peter Tønning, Emil Z. Ulsig, Stig Calmar, Maiya A. Bourland, Simon T. Thomsen, Kevin B. Gravesen, Peter Johansen, Nicolas Volet
2023-09-08T19:07:21Z
http://arxiv.org/abs/2309.04554v1
# Continuous-wave second-harmonic generation in the far-UVC pumped by a blue laser diode ###### Abstract Far-UVC light in the wavelength range of 200-230 nm has attracted renewed interest because of its safety for human exposure and effectiveness in inactivating pathogens. Here we present a compact solid-state far-UVC laser source based on second-harmonic generation (SHG) using a low-cost commercially-available blue laser diode pump. Leveraging the high intensity of light in a nanophotonic waveguide and heterogeneous integration, our approach achieves Cherenkov phase-matching across a bonded interface consisting of a silicon nitride (SIN) waveguide and a beta barium borate (BBO) nonlinear crystal. Through systematic investigations of waveguide dimensions and pump power, we analyze the dependencies of Cherenkov emission angle, conversion efficiency, and output power. Experimental results confirm the feasibility of generating far-UVC, paving the way for mass production in a compact form factor. This solid-state far-UVC laser source shows significant potential for applications in human-safe disinfection, non-line-of-sight free-space communication, and deep-UV Raman spectroscopy. ## Introduction Over the past decade, extensive research has substantiated the safety of human exposure to far-UVC light (wavelengths from 200 nm to 230 nm) and its effectiveness in inactivating diverse pathogens, including human coronaviruses [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. These compelling findings underscore the immense potential of far-UVC sources in curtailing airborne viral transmission, thereby playing a crucial role in the containment of diseases like COVID-19 and averting future pandemics. Additionally, far-UVC may be instrumental to improve yields in farming and pharmaceutical industries [11, 12, 13]. Previous studies have primarily investigated the disinfection and interactions of food, livestock, and pathogens with longer wavelengths (250-270 nm), which is unsafe for direct human exposure. However, employing human-safe far-UVC in these applications offers a notable advantage. By helping to eliminate contaminants, far-UVC could bring about additional societal benefits by enhancing the quality of goods in food and drug supply chains and curbing disease outbreaks that stem from these industries [14, 15, 16, 17, 18]. Beyond disinfection, compact far-UVC light sources can simplify free-space communication systems by eliminating the need for pointing accuracy and stability [19]. The reduced solar background noise in this spectrum and a broad acceptance angle may enable non-line-of-sight links, potentially reducing both the cost and complexity of the overall system. Furthermore, far-UVC lasers are beneficial for Raman spectroscopy [20], as they provide greater detail and higher quality information about a sample than lasers at longer wavelength. The higher photon energy increases the Raman scattering efficiency and reduces the fluorescence background in numerous materials. Achieving widespread and strategic deployment of far-UVC sources demands substantial technological advancements in the field [21]. The current leading commercial technology for far-UVC disinfection is the krypton chloride (KrCl) excimer lamp, emitting \(\sim\)150 mW with a spectral peak at 222 nm [22]. These devices use high-voltage (\(>\)1 kV) radio-frequency discharges resulting in a large power consumption. Inherently, their footprint is large (cm size), and they require a filter to eliminate dangerous wavelengths longer than 230 nm. 
While the KrCl emission wavelength cannot be tuned, slightly shorter wavelengths are anticipated to be safer and more effective in pathogen elimination [23]. Since the intensity of the lamp depends on the excimer volume (\(\sim\)1 cm\({}^{3}\)), the emission is difficult to control and inefficiently coupled to an optical fiber. An attractive solution to reduce size, weight, operating power, and cost is direct emission of far-UVC from a semiconductor material. The performance of light-emitting diodes (LEDs) are progressing [24]. However, for laser diodes, the shortest emission wavelength to date is \(>\)270 nm [25], and fundamental obstacles remain that could potentially limit lifetime, efficiency, and output power. We propose to convert light from a continuous-wave (CW) blue laser diode to the far-UVC using second-harmonic generation (SHG) in a waveguide, as depicted in Fig. 1a. This approach leverages decades of research on GaN emitters for lighting [26, 27] and incorporates recent advances in nanophotonic waveguide manufacturing [28, 29]. The benefits of this approach include a low-voltage power supply, the low-cost and high-availability of blue laser diodes, wavelength flexibility, and the potential for low-cost fabrication with wafer-scale semiconductor manufacturing. Beta barium borate (BBO) is commonly used for second-harmonic generation to the UV [30]. This is thanks to its broad transparency range, large nonlinearity and a shorter phase-matching cut-off wavelength compared to other nonlinear crystals [31, 32]. A relatively compact demonstration with an external-cavity diode laser (ECDL) pump at 445 nm achieved far-UVC using bulk BBO, though with low conversion efficiency (\(\sim\)0.01 % for 1.4 W pump) [33]. Recent advancements using resonant cavities showed improved efficiency with 13 % conversion from a 200 mW pump and 34 % conversion using cascaded SHG from a 1.6 W pump, both producing signals at \(\sim\)229 nm [34, 35]. However, these systems require complex bulk optics for implementation. Another approach for frequency conversion to the far-UVC involves a pulsed pump source for optical parametric oscillation and SHG. Demonstrations achieved 16.2 % efficiency from a 1.2 mJ pump, generating pulses at 222 nm wavelength [36]. Compressed pulses were also used in He-filled hollow capillary fibers, achieving 3.6 % conversion from 0.2 mJ pulses [37]. Pulse-pumping offers higher conversion efficiency but requires more complex and larger pump laser systems. SHG in BBO waveguides was demonstrated at 266 nm wavelength with negligible walk-off and achieved 0.05 % conversion efficiency from a 670 mW pump [38]. However, phase-matching was limited to the bulk phase-matching angle, not utilizing the dispersion properties of the waveguide. Our approach uses Cherenkov phase-matching [39] across a bonded interface, showcased in Fig. 1a,b, with a scanning electron micrograph in Fig. 1c. Blue pump light propagates in a waveguide, creating an evanescent field in a nonlinear crystal directly bonded to the waveguide core. This heterogeneous platform enables far-UVC generation without any interaction of the signal light with the waveguide core that guides the pump. Combining the precision and scalability of Si nanophotonics with the high nonlinearity and low loss of a bulk crystal is the key to achieving high conversion efficiency and stable operation. ## Results This work showcases experimental demonstrations of a heterogeneous photonic chip generating far-UVC with SHG from a blue diode laser. 
It is complemented by novel simulation results for the frequency converter chip design. ### Numerical simulation and analysis Phase-matching is found for various SiN waveguide heights and widths (Fig. 2a), with predicted Cherenkov angle values ranging from zero to \(\sim\)16\({}^{\circ}\). In regions with large waveguide width and height, phase-matching collapses. The Cherenkov angle is close to zero along this boundary and generally increases for smaller widths and heights away from the boundary cut-off condition. While having the Cherenkov angle close to zero facilitates a low-angle emission at the BBO output facet, this work Figure 1: **a**, Schematic of chip-to-chip edge coupling concept between a laser diode and the frequency converter showing the far-UVC generated at an angle above the waveguide core. **b**, Waveguide cross-section schematic showing the BBO nonlinear crystal bonded to the SiN waveguide core of width \(w\) and thickness \(h\). The Si substrate (not shown in this cross-section) is located below the SiO\({}_{2}\) lower cladding. Geometrical axes \(\hat{x}\), \(\hat{y}\) and \(\hat{z}\) are shown, as well as crystal axes \(\hat{X}\), \(\hat{Y}\) and \(\hat{Z}\) of BBO. Contour lines show the simulated \(\hat{x}\)-polarized electric field profile of the fundamental TE mode at a wavelength of 445 nm. The SiN core guides the mode with a confinement of 58 %, and a significant portion of the evanescent field (26 %) overlaps with the BBO. **c**, Scanning electron micrograph showing the BBO crystal directly bonded to the SiN waveguide, with the SiO\({}_{2}\) lower cladding and the Si substrate. Edges of the BBO sample are chamfered for mechanical stability. focuses on optimizing the conversion efficiency, irrespective of the Cherenkov angle. The strength of the nonlinear interaction, independent of propagation loss, is observed by \(\int\!\!\int\kappa_{0}\,\mathrm{d}\theta\,\mathrm{d}\phi\) (as described in Methods), and is plotted in Fig. 2b. The results indicate that narrower and thicker waveguides provide a stronger frequency conversion. However, the realized frequency conversion takes into account the losses, resulting in an optimal point for the SiN geometry, as propagation losses are inversely proportional to the waveguide width. Figure 3a shows the simulated propagation loss contribution from interfacial scattering and the sum of scattering and absorption losses in the SiN. Notably, measurements of the propagation loss align well with the predicted total loss values for the measured widths ranging from 500 nm to 2500 nm. From these data, the signal power and the conversion efficiency are predicted for selected waveguide heights (95 nm to 115 nm). Figure 2c shows the signal power trend with waveguide width for each height, with 100 nm being the preferred height due to its large peak value and broad phase-matching range. In Fig. 2d, the conversion efficiency is presented as a percentage of the pump power for this height at various pump powers. The on-chip pump power of 100 mW achieves a maximum efficiency of \(\sim\)4 %. Figure 4a illustrates the signal power generated along the propagation of the pump waveguide, while Fig. 4b plots the signal and pump powers along \(z\). The initial section of the waveguide experiences faster pump to signal conversion (indicated by the dashed grey line in Fig. 4b) due to the attenuation of pump light by propagation loss. Additionally, the ray diagram in Fig. 4a illustrates the output beam realization during the experiment. 
The larger-than-anticipated Cherenkov angle results in the signal reflecting from the top of the chip, leading to two emission beams, both downward and upward. This unexpected behavior is partly due to fabrication tolerance on the SiN layer thickness, resulting in a 98 nm thick layer instead of the targeted 100 nm. Furthermore, the BBO chip thickness was reduced due to an additional polishing step on the top surface to aid the coupling Figure 2: **a**, Cherenkov angle simulated for the BBO/SiN hybrid waveguide for various thicknesses and widths of the SiN waveguide core. **b**, Cherenkov nonlinear coupling parameter in a BBO/SiN waveguide as a function of the SiN core height and width. **c**, Trends of the signal power with waveguide width for various waveguide heights as annotated on the plot for a pump power of 50 mW. **d**, Trends of the conversion efficiency with waveguide width for various pump powers as annotated on the plot for a waveguide height of 100 nm. procedure with a microscope view. The azimuthal radiation profile, as shown in the inset of Fig. 4b, exhibits a local minimum at \(\phi=0\) attributed to the aspect ratio of the SiN waveguide. This effect can be mitigated by engineering a waveguide with a narrower width and thicker height. While this device was designed for optimal operation with a 445 nm pump wavelength, other SiN geometries can support extremely broad-band phase-matching. For a waveguide thickness of 96 nm and a width of 2000 nm, Cherenkov phase-matching spans a continuous range of pump wavelengths from 400 nm to 1000 nm (Fig. 3b). While the trend ignores variation in propagation loss across this spectrum, it underscores the robustness of the Cherenkov approach, requiring no fine-tuning of refractive indices for phase-matching. Moreover, this discovery opens possibilities for the device's realization in other applications involving broadband frequency conversion. ### Passive transmission Firstly, we characterize devices comprising air-clad passive SiN waveguides without the bonded BBO to assess the relative contributions of scattering and absorption effects on propagation loss. The transmission of blue laser light is measured from straight waveguides to determine propagation and coupling losses. These results, along with the predicted trends, are plotted in Fig. 3a. Subsequently, we measure blue light transmission with a BBO crystal bonded to the SiN waveguide layer. The TE-polarized light at 445 nm wavelength from a 2.0 um wide waveguide exhibits a propagation loss of \(\alpha_{\text{p}}=13.4\pm 1.3\) dB/cm, while the coupling loss to a lensed fiber is 11.0 dB. The presence of observable grain residues around the waveguide facet, arising from hybrid facet polishing, negatively impacts the coupling loss. This is evidenced by the trend of lower coupling losses for wider waveguides, where the light has less interaction with the SiN sidewall interface. However, wider waveguides result in increased relative coupling to higher-order modes, reducing the portion of light contributing to frequency conversion. To balance the effects of insertion loss, propagation loss, and single-mode propagation, there is an optimal waveguide width designed for the fundamental TE mode phase-matching. This optimal width is experimentally found in waveguides ranging from 1.0 um and 2.0 um in the following section. ### SHG to the far-UVC Far-UVC generation is measured with a blue laser diode pump and detecting the emitted light at various output angles from the BBO facet. 
Figure 5a illustrates the output power dependence on the pump power using both ECDL and Fabry-Perot (FP) pump lasers at 445 nm, as shown in Fig. 5a (see Methods for details on laser sources used). Quadratic fits to both types of pump lasers result in coefficients of determination exceeding 98 %. Interestingly, the conversion efficiency from the FP laser appears unaffected by the expansion of lasing frequency modes when increasing the drive current. The observed difference in Figure 3: **a**, Scattering loss (\(\alpha_{\text{s}}\)) of TE-polarized modes in waveguides designed for Cherenkov phase-matching with 98 nm thick SiN, a sidewall roughness \(\sigma_{\text{side}}=7\) nm RMS, and a top surface roughness \(\sigma_{\text{surface}}=0.25\) nm RMS. The absorption loss \(\alpha_{\text{a}}\) is calculated from the material absorption loss of SiN scaled by the confinement factor, where \(\alpha_{\text{SiN}}=14\) dB/cm. Measured propagation losses for the BBO/SiN and the air-clad SiN waveguides are plotted with data points, and error bars indicate the standard deviations of the measurement uncertainties. **b**, Normalized conversion efficiency for SHG simulated across a broad range of pump wavelengths, plotted in blue on the left axis. The corresponding Cherenkov angle is plotted in orange on the right axis. Solid lines represent a waveguide with a 100 nm height and a 1500 nm width, which is the geometry optimized for SHG from a 445 nm wavelength pump. Dashed lines represent a waveguide with a 96 nm height and a 2000 nm width, showcasing a finite frequency conversion across the entire pump wavelength range of 400 nm to 1000 nm. conversion efficiency is attributed to differences in the coupling loss of approximately 2.2 dB. With a fixed pump power from a single-frequency laser at 445 nm wavelength, we measured the SHG output intensity for various emission angles in the subset of Fig. 5b. SHG light generated at the beginning of the waveguide can reflect at the top of the BBO chip and exit the BBO facet at the negative of the Cherenkov phase-matched angle, with refraction at the exit facet. The same measurement was repeated with input wavelengths of 405 nm and 473 nm, corresponding to larger and smaller Cherenkov angles as predicted by simulations in Fig. 3b. By changing the waveguide width, the effective index of the pump mode is varied, thereby modifying the Cherenkov angle and the conversion efficiency. In Fig. 5b the angle of maximum emission and the relative power at a fixed pump power is recorded as a function of the waveguide width. Although the far-UVC power is lower for waveguide widths of 750 nm and 1000 nm compared to wider widths, we expect this discrepancy to be attributed to facet coupling efficiency rather than a trend in the conversion efficiency. The trend in the Cherenkov angle predicted by simulations in Fig. 2 is confirmed with an additional offset. Beyond the single axis distribution obtained from the goniometer, the far-field is further explored using a UV camera. A schematic of the setup with the camera placement and projection screens (described in Methods) is displayed in Fig. 6 along with images captured of the screen positioned \(\sim\)20 cm from the chip. From Fig. 6 both upward and downward emission lobes are evident, with the lower emission significantly more intense, as explained from the geometrical considerations in Fig. 4. We also observe that Cherenkov phase-matching is achieved for range of azimuthal angles (\(\phi\)). 
### Wavelength tuning and SFG As shown in Fig. 5a, the frequency converter exhibits tolerance to significant wavelength shifts, attributed to the robustness of Cherenkov phase-matching. This characteristic enables tunability, a valuable feature for several key applications in the far-UVC domain. Figure 7a demonstrates the direct mapping from wavelength tuning of the blue pump laser to the corresponding SHG signal across the tuning range of the stabilized diode laser. The spectral tolerance of the frequency converter is further explored using different pump lasers to extend beyond the tuning range of the 445 nm laser. While the current design of the frequency converter chip is not optimized for pump wavelengths below 440 nm due to increased propagation loss in the SiN waveguides and lower conversion efficiency, slightly longer wavelengths support even higher conversion efficiency. This is demonstrated with a pump laser at 473 nm wavelength, as shown in Fig. 5b, where the Cherenkov angle is lower for the 473 nm pump as predicted in simulations, and the conversion efficiency is higher. A far-UVC signal is also detected for a pump laser at 405 nm, as described in Methods. However, due to higher losses indicating lower conversion efficiency, the output signal is very low, and thus, no spectrum was recorded at this wavelength. The output emission at 202.5 nm is observed at a steeper output angle compared to the longer far-UVC wavelengths. Most of the power lies in the upper emission lobe, as explained by Fig. 4a, where greater angles lead to double reflection of the signal Figure 4: **a**, Ray diagram of the signal generation in the BBO along the SiN waveguide. The ray width and transparency is scaled by the relative generated signal power per unit length, and the output at the right facet accounts for refraction. The SiN and SiO\({}_{2}\) layers are shown thicker than the actual layers for illustrative purposes. **b**, The signal power \(P_{2\omega}\) is generated along the propagation length \(z\) as the pump power \(P_{\omega}\) depletes. The normalized differential of the signal power with the length \(z\) is shown with a dashed gray line, corresponding to the signal generation rays in **a**. The dependence of \(P_{2\omega}\) on the azimuthal angle \(\phi\) is shown in the inset. generated in the first part of the chip. Regardless, this result is significant because the pump wavelength of 405 nm is below the range for conventional phase-matching of SHG in bulk BBO [31, 32]. In contrast, Cherenkov phase-matching can support efficient SHG well below 205 nm, which is promising for enabling unanticipated applications at wavelengths below 200 nm in the vacuum-UV. By combining two pump lasers at 445 nm and 473 nm wavelengths, we can further explore the tolerance of phase-matching through sum-frequency generation (SFG). Both SHG and SFG are observed in Fig. 7b when each laser is coupled to the same waveguide on the frequency converter chip. Owing to the broadband phase-matching, a single waveguide simultaneously supports SHG at 445 nm and 473 nm, with slightly different Cherenkov angles. Additionally, a SFG signal is produced with a wavelength of 229.5 nm. ## Discussion In conclusion, we have successfully demonstrated a solid-state far-UVC laser source using a chip-scale pump laser. This technology offers significant potential for mass production, benefiting from the extensive development of blue laser diodes. 
The broad phase-matching bandwidth is pivotal in achieving high-yield production with robust operation, ensuring the device's resilience against fabrication tolerances and environmental variations. As a result, pump sources like mass-produced FP laser Figure 5: **a**, Far-UVC power quadratic dependence on the off-chip pump power. The inset shows the far-UVC signal from an ECDL laser and an FP diode laser. **b**, Waveguide width dependence on the far-UVC output intensity and the Cherenkov angle. The inset shows the angular dependence on the far-UVC output for pump wavelengths of 445 nm and 473 nm from a 1500 nm wide waveguide. Figure 6: **a**, Setup for capturing images of the far-field showing the placement and orientation of both camera and projection screen. The microscope shown above the conversion chip aids the alignment process of the lensed fiber to the chip. **b,c**, Images of the far-UVC reflected off the projection screen, with the upper emission lobe less defined than the lower due to the multiple reflections through the BBO. A considerable spread in the azimuthal angle is observed as phase-matching is achieved with a range of Cherenkov angles. diodes can be readily employed, facilitating the development process. The frequency converter utilizes standard materials and minimizes the material processing of BBO. This combination allows for a substantial reduction in size and price, thereby enabling the targeted high-volume application of disinfection. Further, a compact far-UVC laser could drive other applications like non-line-of-sight communication, leveraging the drastic increase of scattering at shorter wavelengths [19] or Raman spectroscopy where excitation at wavelengths shorter than 250 nm alleviates autofluorescence [20]. Utilizing SFG, the spectral acceptance of pump sources enables compact realization of laser sources at a broad range of wavelengths. For example, combinations of compact diode lasers could be used to generate laser emission spanning the UV and visible spectra. We also envision that a frequency-comb pump source could be translated to the UV with this frequency converter device because of the broadband phase-matching. The main limiting factor in the presented work is the loss associated with input coupling and propagation through the SiN waveguide. As shown in Fig. 5, an increase in on-chip pump power has significant impact on the signal power. Previous works demonstrating SiN waveguides with similar geometries show lower propagation losses of 5-7 dB/cm [40, 41, 42]. Other means of improvements lie in the material platform. Some evidence suggest lower loss from PECVD SiN [40]. Also, tantala could provide lower material absorption while maintaining a similar refractive index to SiN [43]. Gradual improvements in blue laser power densities are still seen from manufacturers, and pulsed operation could generate a higher power signal relevant for disinfection. The theory and architecture presented in this study provide multiple avenues for enhancing conversion efficiency, thereby increasing the power output of this compact solid-state far-UVC laser. Together, these advancements pave the way for far-UVC laser applications in non-line-of-sight free-space communication, Raman spectroscopy, and disinfection. ## Methods ### Design SiN is used for the waveguide core layer to guide the blue pump light. 
BBO is used as the nonlinear crystal because of its large second-order nonlinearity, low loss in the far-UVC, and refractive indices compatible with Cherenkov phase-matching to SiN waveguides [30, 44]. Figure 1b shows a schematic to define the waveguide width (\(w\)), height (\(h\)), the layer compositions, and the crystal axes relative to the device coordinate system. The BBO crystal is orientated to align \(d_{16}\) of the second-order nonlinear tensor to the TE-like pump mode in the SiN [45]. A contour profile of the \(\hat{x}\)-polarized electric field of the pump at 445 nm wavelength overlays the schematic. It has high confinement in the SiN core and a substantial overlap of the evanescent field with the BBO. The first step in the design is to investigate geometries of SiN waveguides that support Cherenkov phase-matching to radiation modes in BBO bonded directly to SiN. With the propagation in the waveguide defined as the \(\hat{z}\)-direction, Cherenkov phase-matching occurs when the signal light propagates in a direction slightly elevated from the \(\hat{z}\)-direction towards the \(\hat{y}\)-direction, such that the \(\hat{z}\)-component of the phase velocity matches the pump phase velocity. The Cherenkov angle (\(\theta_{\mathrm{C}}\)) is the elevation angle \(\theta\) of the signal wavevector relative to the SiN waveguide for the azimuthal angle of \(\phi=0\). This phase-matching Figure 7: **a**, Tuning the single-frequency pump laser produces a continuous sweep of far-UVC output wavelengths. **b**, Using two pump lasers at respectively 445 nm and 473 nm, sum-frequency generation is demonstrated at 229.5 nm. The spectra, showing the normalized detected power, are recorded with two different optical spectrometers, the UV spectrometer having significantly higher resolution. As such the broader spectral appearance of the pump lasers is not their true spectral width as all spectra are under-sampled and merely reflect the resolution of the spectrometer. condition occurs when: \[n_{\omega}=n_{2\omega}\cos(\theta_{\mathrm{C}})\, \tag{1}\] where \(n_{\omega}\) is the effective refractive index of the guided pump mode and \(n_{2\omega}\) is the refractive index the radiated signal light experiences in the nonlinear material. It is illustrated in Fig. 1a and Fig. 4a, and measured in Fig. 5a. Note that Cherenkov phase-matching can occur only when \(n_{2\omega}\geqslant n_{\omega}\). This requirement dictates the materials that can be used for the core, cladding, and nonlinear regions of the Cherenkov SHG waveguide. It also creates a regime of operation for certain combinations of waveguide widths and heights such that \(n_{\omega}\) is engineered to be smaller than \(n_{2\omega}\). The derivation for the Cherenkov SHG conversion efficiency follows the method proposed in [39], with some modifications to include propagation losses. The spatially dependent electric and magnetic fields (of either the pump or the signal) are assumed to take the form of: \[\vec{\mathcal{E}}(x,y,z) =e^{ikz}\,\mathcal{Z}(z)\vec{E}(x,y), \tag{2a}\] \[\vec{\mathcal{H}}(x,y,z) =e^{ikz}\,\mathcal{Z}(z)\vec{H}(x,y), \tag{2b}\] where \(\vec{E}\) and \(\vec{H}\) are the electric and magnetic field components of the mode profile, assumed to be independent of \(z\), and \(\mathcal{Z}\) is a unitless complex function of \(z\) that accounts for nonlinear mode coupling. 
The term \(e^{ikz}\) with \(k=\beta+i\alpha/2\) describes the propagation in the \(\hat{z}\)-direction through the associated component of the wavevector \(\beta\) and the power loss coefficient \(\alpha\). The pump mode profiles are calculated by a numerical simulation method [46]. The signal consists of a continuum of radiation modes that are best expressed analytically [47]. The approach taken by [39] for embedded channel waveguides is the approximation we use for this waveguide geometry. The \(\hat{x}\)-polarized electric field profile for TE-like radiation modes is calculated for an azimuthal range of: \(-\pi/2<\phi<\pi/2\). Nonlinear coupling can be modeled with the following equation [48]: \[\partial_{z}\mathcal{Z}_{\nu}=i\frac{\omega_{\nu}}{4Q_{\nu}}e^{-ik_{\nu}z} \iint_{A}\vec{\mathcal{P}}_{\nu}^{\mathrm{NL}}\cdot\vec{E}_{\nu}^{*}\,\mathrm{ d}x\,\mathrm{d}y, \tag{3}\] with a normalization parameter (in units of power): \[Q_{\nu}=\frac{1}{2}\iint_{A}\Re\left[\left(\vec{E}_{\nu}\times\vec{H}_{\nu}^ {*}\right)\cdot\hat{z}\right]\mathrm{d}x\,\mathrm{d}y, \tag{4}\] where the integration is performed over a finite cross-sectional area \(A\) that completely encompasses the pump field. The nonlinear electrical polarizations are defined as \(\vec{\mathcal{P}}_{\omega}^{\mathrm{NL}}=2\epsilon_{0}d\vec{v}_{\omega}\) for the pump and \(\vec{\mathcal{P}}_{2\omega}^{\mathrm{NL}}=\epsilon_{0}d\vec{v}_{2\omega}\) for the signal, \(\epsilon_{0}\) is the vacuum permittivity, \(d\) is the tensor representing the second-order nonlinearity of the material, and \(\vec{v}\) is the corresponding complex vector of pump or signal electric field components assuming Kleinman symmetry. Next, the nonlinear coupling coefficients are calculated as a function of the elevation angle \(\theta\) and of the azimuthal angle \(\phi\) by: \[\kappa_{2\omega} =\frac{\omega\epsilon_{0}}{2Q_{2\omega}}\iint_{A}(d\vec{v}_{2 \omega})\cdot\vec{E}_{2\omega}^{*}\,\mathrm{d}x\,\mathrm{d}y, \tag{5a}\] \[\kappa_{\omega} =\frac{\omega\epsilon_{0}}{2Q_{\omega}}\iint_{A}(d\vec{v}_{\omega })\cdot\vec{E}_{\omega}^{*}\,\mathrm{d}x\,\mathrm{d}y. \tag{5b}\] Now the coupled-amplitude equations can be expressed as: \[\partial_{z}\mathcal{Z}_{2\omega} =i\mathcal{Z}_{2\omega}^{2}\kappa_{2\omega}e^{-iz\Delta k}, \tag{6a}\] \[\partial_{z}\mathcal{Z}_{\omega} =i\mathcal{Z}_{\omega}^{*}\iint\mathcal{Z}_{2\omega}\kappa_{\omega }e^{(iz\Delta k-\alpha_{\omega})z}\,\mathrm{d}\theta\,\mathrm{d}\phi, \tag{6b}\] where \(\Delta k=k_{2\omega}-2k_{\omega}\). Note that the signal amplitude \(\mathcal{Z}_{2\omega}\) is a function of \(\theta\) and \(\phi\). The coupled-amplitude equations in Eq. 6 are solved numerically to find the signal output power \(P_{2\omega}(z)=Q_{2\omega}e^{-\alpha_{2\omega}z}|\iint\mathcal{Z}_{2\omega} \mathrm{d}\theta\,\mathrm{d}\phi|^{2}\) in the radiated fields and the pump power \(P_{\omega}(z)=Q_{\omega}e^{-\alpha_{\omega}z}|\mathcal{Z}_{\omega}|^{2}\) in the guided pump mode. The input pump power \(P_{\omega,0}\) initializes the pump amplitude as \(|\mathcal{Z}_{\omega}(0)|^{2}=P_{\omega,0}/Q_{\omega}\), and the absence of an initial signal sets \(\mathcal{Z}_{2\omega}(0)=0\). Note that the radiation fields and \(\kappa\) must be defined over the range of \(\phi\) and a range of \(\theta\) large enough to include all contributions. ### Fabrication Device fabrication starts with an oxidized Si wafer, on which a target of 100 nm SiN is formed by low-pressure chemical vapor deposition. 
The actual thickness of this layer is measured as 98 nm. Waveguides are patterned with deep-UV lithography and etched with inductively-coupled plasma reactive ion etching. Chips are singulated by dicing to a propagation length of 10 mm. BBO crystals are then grown and cut along the \(\hat{X}\) crystal axis with a similar propagation length. The \(\hat{Z}\) and \(\hat{X}\) axes are polished. The SiN and BBO chips are directly bonded together after O\({}_{2}\) plasma activation, and the bond is strengthened with an anneal at 300 \({}^{\circ}\)C for 12 hours. Hybrid facets of Si and BBO are formed at the input and output of the waveguide by grinding and polishing to a single planar and smooth surface using a water-free polishing solution because BBO is hygroscopic. The facets are coated with MgF anti-reflective layers for the pump at the input side of the chip and for the signal at the output side. ### Measurements For all experiments a laser is coupled to the edge facet of the chip with lensed fibers. Polarization-maintaining (PM) fibers mounted on a 3-axis nanopositioning translation stage ensure close control of both polarization and spatial alignment. Detection equipment on the output side of the setup varies and is described for each measurement. The following lasers are used to pump the nonlinear processes. A stabilized 445-nm diode laser (DLPro from Toptica) is used to pump SHG and SFG, and it is referred to as the ECDL. This laser has a manual tuning range of \(\sim\)4 nm, a maximum power in fiber of 95 mW, and an instantaneous linewidth of 150 kHz. A fiber-coupled 445-nm FP diode laser (PL450B from Osram, packaged by OZ Optics) is used for SHG, and has a maximum power in fiber of 55 mW and a spectral width of \(\sim\)1.5 nm corresponding to 10 to 15 longitudinal cavity modes. A fiber-coupled FP diode laser emitting around 405 nm (NDV4316 from Nichia, packaged by OZ Optics) is used for SHG, emitting a power of 45 mW in fiber and a \(\sim\)1 nm spectral width, corresponding to 8 to 12 longitudinal cavity modes. Finally, a stabilized free-space diode laser at 473 nm with an instantaneous linewidth \(<\)1 MHz (Cobolt08 from Hubner Photonics) is used for SHG and SFG. It is coupled to a single-mode PM fiber, achieving 15 mW in the fiber. For passive transmission measurements, PM lensed fibers are used for both the input and output coupling. Two techniques are used to discriminate the coupling loss from the propagation loss. First, the total power transmission is measured through detection on a fiber-coupled photodiode (S130VC from Thorlabs); second, the scattered light intensity is measured along the waveguide from a microscope on top of the chip with a visible-light camera [49]. For characterizing the far-UVC signal from the frequency converter, the input coupling remains the same, while the output is collected using a custom-designed 1D goniometer. The goniometer allows different sensors to be mounted and to record the output at any output angle, vertically with reference to the waveguide, at a fixed distance of 10 cm. For detection of the UVC signals either an optical spectrometer (Maya2000 Pro from Ocean Insight) or a photoemission detector (UVtron from Hamamatsu, with Ni electrode) is used. The photoemission detector gives a binary detection signal with very high sensitivity, allowing for investigations outside of the main emission lobe; however, this detector has a low dynamic range. To measure the output angle \(\theta\), shown in Fig.
5b, the photoemission detector is mounted on the 1D goniometer. The limited dynamic range leads to saturation at the main emission lobe from the 445 nm pump laser. Due to its high directionality, lack of calibration, and limited dynamic range, the photoemission detector cannot be effectively mapped to a calibrated power-meter within this range. The spectrometer on the other hand has a linear intensity response, excellent dynamic range, but measures intensity in a small aperture of the input multi-mode fiber of 600 \(\upmu\)m. The spectrometer has a resolution of 0.15 nm and covers a range of 165-275 nm, leaving it blind to the pump wavelengths in the blue (405-473 nm). The signal at 202.5 nm wavelength is only detected using the highly sensitive photoemission detector, whereas the other far-UVC signals are recorded both as spectra and with the photoemission detector. Determining a calibrated measurement of the signal power has not yet been achieved. The significant coupling and propagation losses are accompanied by diffuse scattering of the blue light from the pump at intensities much higher than the far-UVC signal. Hence, a detector must have both very high sensitivity in the UV while having high suppression in the blue. By calibrating the spectrometer against a well known KrCl excimer source emitting at 222 nm, the irradiance from the frequency converter is estimated to be 0.17 \(\upmu\)W/cm\({}^{2}\). This solitary measurement gives no direct value for total output power nor conversion efficiency as it constitutes a very localized intensity in a complex radiation pattern. A projection screen made from highly UV reflective PTFE (POREX Virtek) is placed in front of the emission region, giving a diffuse reflection of the far-field at the position of the projection screen. The UV camera is highly selective in favor of the UV light over the visible. In combination with a band-pass filter centered at 220 nm, the camera setup becomes completely blind to the intense blue pump-laser, imaging only the far-UVC generated from the frequency conversion. For the SFG measurement, the ECDL at 445 nm wavelength and the free-space laser at 473 nm wavelength are combined in a polarization-maintaining fiber splitter. By ensuring the polarization is aligned to the same axis at the fiber input, both lasers are coupled to the same waveguide in the frequency converter chip in TE polarization. To balance the coupling of the three generated UV signals, the angular position of the output spectrometer probe is selected as a compromise because the Cherenkov angle varies with the wavelength.
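As a closing numerical aside (not taken from the paper), the Cherenkov condition (1) and simple energy conservation reproduce the kind of angles and wavelengths discussed above. The refractive-index values below are placeholders rather than device data, and the SFG wavelength uses the nominal 445 nm and 473 nm pump values, so it differs slightly from the measured 229.5 nm, which is set by the exact lasing wavelengths.

```
% Illustration of the Cherenkov condition (1) and of energy conservation
% for SHG/SFG. Index values are placeholders, not measured device values.
n_w  = 1.64;                     % hypothetical effective index of the pump mode
n_2w = 1.66;                     % hypothetical index of the radiated signal in BBO
if n_2w >= n_w                   % Cherenkov phase-matching requires n_2w >= n_w
    theta_C = acosd(n_w/n_2w);   % Cherenkov angle, here roughly 9 deg
else
    theta_C = NaN;               % no Cherenkov phase-matching
end

lambda1 = 445e-9;  lambda2 = 473e-9;         % nominal pump wavelengths [m]
lambda_SHG1 = lambda1/2;                     % 222.5 nm
lambda_SHG2 = lambda2/2;                     % 236.5 nm
lambda_SFG  = 1/(1/lambda1 + 1/lambda2);     % ~229.3 nm (measured: 229.5 nm)
fprintf('theta_C = %.1f deg, SFG at %.1f nm\n', theta_C, 1e9*lambda_SFG);
```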
2301.00110
Fast high-fidelity charge readout by operating the cavity-embedded Cooper pair transistor in the Kerr bistable regime
Operating the cavity-embedded Cooper pair transistor (cCPT) in the Kerr bistable regime, we demonstrate single-shot resolution between two charge states that are $0.09e$ apart. The measurement is performed with 94$\%$ fidelity in a duration of 3 $\mu$s. The drive power at which the measurement is performed corresponds to only 20 intracavity photons on average in the high oscillation amplitude state of the cCPT, which is orders-of-magnitude smaller than that in rf-SETs. We find that the limiting factor for this mode of operation of the cCPT is the spontaneous fluctuation-induced switching between the two metastable oscillation amplitude states. We present empirical data on the variation of the switching dynamics with drive parameters and cCPT DC bias.
Bhargava Thyagarajan, Sisira Kanhirathingal, Benjamin L. Brock, Juliang Li, Miles P. Blencowe, Alexander J. Rimberg
2022-12-31T03:44:10Z
http://arxiv.org/abs/2301.00110v1
Fast high-fidelity charge readout by operating the cavity-embedded Cooper pair transistor in the Kerr bistable regime ###### Abstract Operating the cavity-embedded Cooper pair transistor (cCPT) in the Kerr bistable regime, we demonstrate single-shot resolution between two charge states that are \(0.09e\) apart. The measurement is performed with \(94\%\) fidelity in a duration of \(3~{}\mu\)s. The drive power at which the measurement is performed corresponds to only \(20\) intracavity photons on average in the high oscillation amplitude state of the cCPT, which is orders-of-magnitude smaller than that in rf-SETs. We find that the limiting factor for this mode of operation of the cCPT is the spontaneous fluctuation-induced switching between the two metastable oscillation amplitude states. We present empirical data on the variation of the switching dynamics with drive parameters and cCPT DC bias. ## I Introduction Fast detection of charge on the order of a fraction of an electron has long been an important task. Versatile devices such as the quantum point contact and the single electron transistor (SET) have been used to measure electron lifetimes in a single electron trap [1], to map electric fields with \(100\) nm spatial resolution [2], to observe macroscopic charge quantization [3], and to study quasiparticle and electron tunneling events in real-time [4; 5]. More recently, they have been used in the search for Majorana zero modes in nanowires [6], and could potentially be used to detect dark matter [7; 8]. Such fast, ultrasensitive electrometers are instrumental in the readout of silicon-based spin qubits [9; 10] where the magnetic moment of a single spin is too small to detect directly, and is instead converted to a spin state dependent charge which can be read out. Dispersive charge sensors operating on the supercurrent branch of the Josephson junctions based inductive-SET (L-SET) [11] and the single Cooper-pair box [12] are not shackled by the electron shot noise which limits the operation of the rf-SETs [13]. The cavity-embedded Cooper pair transistor (cCPT) used in this work has previously been shown to achieve a charge sensitivity of \(14~{}\mu e/\sqrt{\rm Hz}\) operating as a dispersive sensor in the linear regime with a single intracavity photon on average [14], close to the theoretical quantum limit for this device [15]. The cCPT is also a rich nonlinear system whose Hamiltonian contains a Kerr nonlinearity [16], and an emergent parametric amplifier term when the flux line of the system is driven at twice the resonance frequency. The Kerr term opens up the possibility of more sensitive charge detection than was achieved in the linear regime [17]. Such a Kerr cavity coupled to a mechanical resonator was proposed [18] and demonstrated [19] to achieve an order of magnitude better cooling of the phonon mode compared to a linear cavity. The Kerr nonlinearity is well known to produce bifurcations in the system response [20]. Bifurcation amplifiers [21; 22; 23] based on a large change in the response at a bifurcation edge have been used to read out the state of superconducting qubits [24]. Nanomechanical devices based on the bifurcation under a parametric drive have been used to sense charges of \(~{}9e\) at room temperature [25]. Similar devices have demonstrated charge sensing of the order of \(70e\) by manipulating the topology of the bifurcation diagram [26]. 
Here, using the bifurcation between a bistable and a monostable region induced by the Kerr nonlinearity of the cCPT, we demonstrate single-shot readout of \(0.09e\) of charge in \(3~{}\mu\)s with \(94\%\) fidelity, using fewer than \(25\) intracavity photons. Such low power operation ensures minimal back-action on the system being measured [27], and also aids in the integration of these cCPT detectors with state-of-the-art first stage amplifiers such as the TWPAs [28] without overwhelming them beyond their compression point. Such fast high fidelity readout is comparable to the current state-of-the-art for semiconductor spin qubits [29; 30]. In Sec. II we present a semi-classical analysis of the nonlinear cCPT and propose a scheme for it to function as a sensitive charge state discriminator operating in the bistable regime. In Sec. III we experimentally study the hysteresis in the cCPT response in the bistable regime to characterize the extent of the bistability as a function of the drive detuning and strength. We then implement a charge sensing protocol, and observe the presence of fluctuation-induced spontaneous transitions between the bistable states, which we study as a function of drive parameters and cCPT DC bias. Lastly, we characterize our charge sensing protocol and demonstrate the optimum high-fidelity, fast charge state readout possible with this device. In Sec. IV we conclude by discussing some possi ble improvements to this work. Details of the heterodyne measurement scheme employed in this work and the microwave circuitry used in the dilution refrigerator are in Appendix A, and some experimental considerations for the charge sensing scheme used in Sec. III are detailed in Appendix B. ## II Theoretical description The cCPT, schematically depicted in Fig. 1(a), consists of two parts: (i) the cavity, which is a \(\lambda/4\) superconducting coplanar waveguide (CPW) with its end shorted to the ground plane, and (ii) a Cooper pair transistor (CPT) across the center line and ground plane of the CPW at its voltage anti-node. In this geometry, the cavity and CPT form a shared SQUID loop, which couples them together. When the CPT remains in its ground state, it modifies the effective potential of the cavity, such that the cCPT behaves as a nonlinear oscillator whose resonant frequency can be tuned using the effective gate charge, \(n_{g}=\frac{C_{g}V_{g}}{e}\), and the magnetic flux threading the SQUID loop, \(\Phi_{\rm ext}\). Here, \(V_{g}\) is the external DC voltage applied to the CPT island through the gate capacitance \(C_{g}\). Along with the fabrication details for the cCPT device used in this work, a detailed characterization at low drive amplitudes where the nonlinearities do not contribute substantially to the dynamics has been carried out in Ref. [16]. Notably, the Josephson energy, \(E_{J}\), and the charging energy, \(E_{C}\), were estimated to be \(E_{J}/h=14.8\) GHz and \(E_{C}/h=54.1\) GHz respectively. Finally, to drive and measure the cCPT, a probe transmission line is coupled to the CPW through a coupling capacitor \(C_{c}\). 
For an input drive close to resonance, under the rotating wave approximation, the Hamiltonian for the cCPT is given by [15; 16] \[H=\hbar\omega_{0}(n_{g},\Phi_{\rm ext})a^{\dagger}a+\frac{1}{2}\hbar K(n_{g}, \Phi_{\rm ext})a^{\dagger 2}a^{2}, \tag{1}\] where \(a(a^{\dagger})\) are the annihilation (creation) operators for the cavity mode, \(\omega_{0}(n_{g},\Phi_{\rm ext})\) and \(K(n_{g},\Phi_{\rm ext})\) are the resonant frequency of the linear cCPT Hamiltonian and the Kerr coefficient respectively. The variation of \(K(n_{g},\Phi_{\rm ext})\) over the operational range of the cCPT device used in this work is shown in Fig. 1(b). The Kerr coefficient changes sign with flux, attaining extremum values at half-integer multiples of \(\Phi_{0}\), and passing through zero close to \(\Phi_{\rm ext}=0.25\Phi_{0}\). Kerr-free cavities have been used to increase the dynamic range of parametric amplifiers [31; 32]. We use input-output theory [33] to model the dynamics of the cavity mode. The quantum Langevin equation for \(a\) gives \[\dot{a} =\frac{1}{i\hbar}\big{[}a,H\big{]}-\big{[}a,a^{\dagger}\big{]} \bigg{(}\frac{\kappa_{\rm tot}}{2}a-\sqrt{\kappa_{\rm ext}}a_{\rm in}(t)- \sqrt{\kappa_{\rm int}}b_{\rm in}(t)\bigg{)}\] \[=-\bigg{(}i\left(\omega_{0}+Ka^{\dagger}a\right)+\frac{\kappa_{ \rm tot}}{2}\bigg{)}a+\sqrt{\kappa_{\rm ext}}a_{\rm in}(t)+\sqrt{\kappa_{\rm int }}b_{\rm in}(t), \tag{2}\] and a corresponding equation for \(a^{\dagger}\), where \(\kappa_{\rm ext}\) is the external damping rate due to the coupling of the resonator to the probe transmission line with the input bath operator \(a_{\rm in}(t)\), and \(\kappa_{\rm int}\) is the internal damping rate associated with the coupling of the resonator to an internal loss channel with input operator \(b_{\rm in}(t)\). The total Figure 1: (a) Schematic of the cCPT. (b) Variation of the Kerr coefficient \(K\) as a function of gate, \(n_{g}\), and flux, \(\Phi_{\rm ext}\), over the operational bias range of the cCPT simulated using the extracted values of \(E_{J}\) and \(E_{C}\)[16]. (c) Simulated response of the cCPT for different drive powers, \(P_{\rm in}\). The red curve is for a drive strength \(P_{\rm in}\ll P_{\rm in}^{\rm(c)}\). The blue curve is for \(P_{\rm in}>P_{\rm in}^{\rm(c)}\) and the green is for \(P_{\rm in}\gg P_{\rm in}^{\rm(c)}\). Above \(P_{\rm in}^{\rm(c)}\), we see bistability across a range of detunings indicated by the corresponding shaded region. The \(\triangle\)’s represent the stable high oscillation amplitude state, the \(\triangledown\)’s represent the stable low oscillation amplitude states, and the \(\Circle\)’s represent the unstable state. The solid lines indicate monostability. (d) Simulated magnitude of the reflection coefficient for the drive powers in (c). (e) Simulated phase of the reflection coefficient for the drive powers in (c). All simulations in (c), (d) and (e) were for a cCPT DC bias \((n_{g},\Phi_{\rm ext})=(0,0)\) with \(K/2\pi=-470\) kHz, and nominal damping rates for this bias point [16]. damping rate of the cavity is \(\kappa_{\rm tot}=\kappa_{\rm ext}+\kappa_{\rm int}\). When the input tone is a pure sine wave at frequency \(\omega_{d}\) of the form \(\left\langle a_{\rm in}\right\rangle=\alpha_{\rm in}e^{-i\omega_{d}t}\), the steady state response of the cavity is at this drive frequency. 
For this coherent drive, using the semi-classical approximation, we make the ansatz \(\left\langle a\right\rangle=\alpha e^{-i\omega_{d}t}\), with \(\left\langle\dot{a}\right\rangle=-i\omega_{d}\alpha e^{-i\omega_{d}t}\) and the average intracavity occupation number \(n=\left|\alpha\right|^{2}=\left\langle a^{\dagger}a\right\rangle\). Plugging this ansatz into the expectation value of Eq.(2) we obtain \[\left[-i\left(\Delta-K|\alpha|^{2}\right)+\frac{\kappa_{\rm tot}}{2}\right]\alpha=\sqrt{\kappa_{\rm ext}}\alpha_{\rm in}, \tag{3}\] where we have defined the detuning \(\Delta=\omega_{d}-\omega_{0}\). Using Eq.(3) and the input-output relation \(a_{\rm out}(t)=a_{\rm in}(t)-\sqrt{\kappa_{\rm ext}}a(t)\) [33; 34] we find the reflection coefficient \(S_{11}(\Delta)\) to be \[S_{11}(\Delta)=\left(\frac{\alpha_{\rm out}}{\alpha_{\rm in}}\right)^{*}=\frac{(\Delta-K|\alpha|^{2})-i(\kappa_{\rm int}-\kappa_{\rm ext})/2}{(\Delta-K|\alpha|^{2})-i(\kappa_{\rm int}+\kappa_{\rm ext})/2}, \tag{4}\] where \(a_{\rm out}(t)\) is the transmission output bath operator. Also, from Eq.(3) and the corresponding equation for \(\alpha^{*}\), \(n=\left|\alpha\right|^{2}\) satisfies the cubic equation \[K^{2}n^{3}-2K\Delta n^{2}+\left(\Delta^{2}+\frac{\kappa_{\rm tot}^{2}}{4}\right)n=\kappa_{\rm ext}\frac{P_{\rm in}}{\hbar\omega_{d}}, \tag{5}\] where \(P_{\rm in}=n_{\rm in}\hbar\omega_{d}\) is the power of the input drive tone incident on the cCPT, and \(n_{\rm in}=\left|\alpha_{\rm in}\right|^{2}\) is the input photon flux. As illustrated in Figs. 1(c-e), at very low drive strengths this cubic equation has only one real root and the oscillator exhibits only monostable behaviour across all detunings. As the drive strength is increased beyond a critical power \(P_{\rm in}^{\rm(c)}=\frac{\sqrt{3}}{9}\frac{\kappa_{\rm tot}^{3}}{|K|\kappa_{\rm ext}}\hbar\omega_{d}^{(c)}\), the oscillator undergoes a bifurcation and exhibits bistability for a range of detunings. Here, \(\omega_{d}^{(c)}\) is the drive frequency corresponding to a detuning of \(\Delta_{c}={\rm sgn}(K)\frac{\sqrt{3}}{2}\kappa_{\rm tot}\), and \((\Delta_{c},P_{\rm in}^{\rm(c)})\) corresponds to a spinode point in the parameter space of the input drive [35]. In the bistable region, two of the three real solutions of the cubic Eq.(5) correspond to high and low oscillation amplitude states with corresponding values of \(S_{11}\) from Eq.(4), while the third is an unstable, experimentally inaccessible state [36]. The variation of the resonant frequency of the cCPT with the gate charge, \(n_{g}\), as illustrated in Fig. 2(a), forms the basis for a sensitive dispersive charge detector. Operating in the single-photon, linear regime, this device was demonstrated to have a charge sensitivity of \(14~{}\mu e/\sqrt{\rm Hz}\) [14]. Fig. 2(b) uses Eq.(4) to simulate the reflected phase as a function of drive frequency, \(\omega_{d}\), for two gate values separated by \(\delta n_{g}\), corresponding to a resonant frequency shift \(\delta\omega_{0}\). The \(S_{11}\) for the two gate values are denoted by pink and green curves respectively, both in the linear (\(P_{\rm in}\ll P_{\rm in}^{\rm(c)}\)), single-photon regime (dashed lines), and at a drive power \(P_{\rm in}>P_{\rm in}^{\rm(c)}\) (solid lines). At a given \(\omega_{d}\), \(n_{g}\) can hence be inferred from the measured \(S_{11}\).
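To make the bifurcation structure implied by Eqs. (4) and (5) concrete, the short Python sketch below numerically solves the cubic for the intracavity occupation \(n\) across a range of detunings and evaluates the corresponding reflected phase. Only \(K/2\pi=-470\) kHz is taken from the text (the \((n_{g},\Phi_{\rm ext})=(0,0)\) bias of Fig. 1); the damping rates and the drive power are assumed, representative values, so the printed numbers are illustrative rather than a reproduction of Fig. 1(c-e).

```python
import numpy as np

# Representative parameters: K/2pi = -470 kHz is quoted for (n_g, Phi_ext) = (0, 0);
# the damping rates and the drive power below are assumptions for illustration.
two_pi = 2 * np.pi
K = -two_pi * 470e3            # Kerr coefficient [rad/s]
kappa_ext = two_pi * 1.0e6     # assumed external damping rate [rad/s]
kappa_int = two_pi * 0.5e6     # assumed internal damping rate [rad/s]
kappa_tot = kappa_ext + kappa_int
omega_d = two_pi * 5.8e9       # drive frequency [rad/s]
hbar = 1.054571817e-34

# Critical power quoted in the text: P_c = (sqrt(3)/9) kappa_tot^3/(|K| kappa_ext) hbar*omega_d
P_c = (np.sqrt(3) / 9) * kappa_tot**3 / (abs(K) * kappa_ext) * hbar * omega_d
n_in = 3 * P_c / (hbar * omega_d)      # drive at 3x the critical power (input photon flux)

def occupation_branches(delta):
    """Real, non-negative roots n of the cubic Eq. (5) at detuning delta [rad/s]."""
    coeffs = [K**2, -2 * K * delta, delta**2 + kappa_tot**2 / 4, -kappa_ext * n_in]
    roots = np.roots(coeffs)
    return sorted(r.real for r in roots
                  if abs(r.imag) < 1e-6 * max(1.0, abs(r.real)) and r.real >= 0)

def s11(delta, n):
    """Reflection coefficient of Eq. (4) on the branch with occupation n."""
    shift = delta - K * n
    return (shift - 1j * (kappa_int - kappa_ext) / 2) / (shift - 1j * (kappa_int + kappa_ext) / 2)

print(f"P_c = {P_c * 1e18:.1f} aW  (about {10 * np.log10(P_c / 1e-3):.0f} dBm)")
for delta in two_pi * np.linspace(-10e6, 2e6, 13):          # scan the drive detuning
    ns = occupation_branches(delta)
    phases = [np.degrees(np.angle(s11(delta, n))) for n in ns]
    regime = "bistable" if len(ns) == 3 else "monostable"
    print(f"Delta/2pi = {delta / two_pi / 1e6:6.1f} MHz  n = {np.round(ns, 2)}  "
          f"phase(S11) [deg] = {np.round(phases, 1)}  [{regime}]")
```

Scanning the detuning in this way exposes the three-root (bistable) window and the associated jump in \({\rm Phase}(S_{11})\) between the low- and high-amplitude branches, which is the contrast exploited for charge sensing in the remainder of the paper.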
For \(P_{\rm in}>P_{\rm in}^{\rm(c)}\), the simulations of Fig. 2(b) describe what we would observe in the absence of spontaneous transitions between the high and the low oscillation amplitude states in the bistable region. In the absence of these transitions, while ramping the drive detuning from a large blue-detuned value (with respect to the linear resonance, \(\omega_{0}\), \(\Delta>0\)) to a red-detuned value (\(\Delta<0\)), we expect to stay in the high oscillation amplitude state until we reach the bifurcation detuning further from \(\omega_{0}\) for the green curve in Fig. 1(c-e). We refer to this as the lower bifurcation point, while referring to the bifurcation detuning closer to \(\omega_{0}\) as the upper bifurcation point. Upon crossing the lower bifurcation point, an abrupt jump from the high to the low oscillation amplitude state is expected, with a corresponding large change in \(S_{11}\), as illustrated in Fig. 2(b). For an appropriate drive frequency denoted by the dashed black line, the same separation in gate charge, \(\delta n_{g}\), produces a larger difference in the reflected phase between the pink and green curves, \(\delta S_{11}^{\rm(Kerr)}\), than the \(\delta S_{11}\) obtained while operating in the linear regime. Conversely, \(\delta S_{11}^{\rm(Kerr)}\) continues to remain large as the green and pink curves are brought together by reducing \(\delta n_{g}\), whereas \(\delta S_{11}\) undergoes a substantial reduction while doing so. The sensitivity of the charge detector is the smallest \(\delta n_{g}\) that produces a \(\delta S_{11}\) which can be detected with a signal-to-noise ratio (SNR) of 1 [14]. Given that the noise in the measurement is limited by the amplifier chain in the experimental setup [14], the larger \(\delta S_{11}^{\rm(Kerr)}\) for smaller \(\delta n_{g}\) promises a lower, much improved charge sensitivity for the device operating in this regime. Figure 2: (a) Experimentally measured resonant frequency of the linear cCPT (\(P_{\rm in}\ll P_{\rm in}^{\rm(c)}\)), \(\omega_{0}\), as a function of the gate charge on the cCPT, \(n_{g}\), at a fixed flux bias \(\Phi_{\rm ext}=0.15\Phi_{0}\) (triangles). The line is the theoretically expected resonant frequency for the junction parameters of this device. (b) Simulations illustrating the larger separation in reflected phase, \(\delta S_{11}\) (\(\delta S_{11}^{\rm(Kerr)}\)), when operating at \(P_{\rm in}>P_{\rm in}^{\rm(c)}\) (solid lines) compared to \(P_{\rm in}\ll P_{\rm in}^{\rm(c)}\) (dashed lines). (c) Phase of the reflected signal for a forward (solid line) and reverse (dashed line) triangular ramp of the drive amplitude, \(P_{\rm in}\). The input power is ramped between -140 dBm and -109 dBm in increasingly longer times from 2 \(\mu\)s to 28 \(\mu\)s from red to blue curves. The cCPT was biased at \((n_{g},\Phi_{\rm ext})=(0,0)\) and the detuning was \(\Delta/2\pi=-9.5\) MHz. ## III Experiments In this section, we first describe the results of a triangular input power ramp in order to understand the extent of the bistable region with respect to the cCPT drive parameters at a given bias point. We then outline the protocol we use in order to perform charge sensing based on the bifurcation described above. Contrary to the sharp jump in \(S_{11}^{\rm(Kerr)}\) at a precise value of the detuning described in Sec. II, we see a non-zero probability of obtaining a value on either end of the step illustrated in Fig. 2(b) for a range of detunings.
We discuss the results of this protocol for a range of cCPT bias points and drive parameters. From this, we glean the optimal conditions for charge sensing and finally perform an optimized single-shot measurement. In order to study the extent of the bistability, we bias the cCPT at \((n_{g},\Phi_{\rm ext})=(0,\,0)\), and drive it at a fixed detuning \(\Delta/2\pi=-9.5\) MHz with a triangular amplitude ramp in the forward and the reverse direction to check for hysteresis. This is the bias point at which we expect minimum fluctuation in the resonant frequency of the cCPT due to charge and flux noise [16; 37]. We perform a heterodyne measurement to obtain the phase of the reflected signal over the course of the ramp. The RF circuitry used in the experiments described here is detailed in Appendix A. Fig. 2(c) plots the observed hysteresis in the phase of \(S_{11}\) for different ramp rates, each averaged over 5000 repetitions of the ramp. For fast ramps, we see that we obtain a value for the reflected phase corresponding to the low oscillation amplitude state for the forward ramp, and a value that corresponds to the high oscillation amplitude state during the reverse ramp. However, as the ramp time is increased, we observe that the spacing between the observed phase during the forward and the reverse ramps reduces, and for this cCPT bias point, saturates to the values represented by the blue curves, corresponding to ramp times of \(\sim 25\,\mu\)s. This is because, when given enough time to do so, the oscillator system undergoes fluctuation-induced spontaneous switching between the high and low oscillation amplitude states over the course of a ramp. This yields a weighted average value for the phase at each \(P_{\rm in}\) value over 5000 repetitions of the pulse sequence. The weights depend on the average lifetimes of the high and the low oscillation amplitude states at the chosen cCPT bias and the drive parameters. We see less variation in the shape of the forward and reverse ramp curves for the larger ramp times in Fig. 2(c). This provides a rough estimate of \(\sim 25\,\mu\)s for the average lifetimes of these bistable states. This is a sign of spontaneous transitions between the high and low oscillation amplitude states for a range of input drive strengths, and will be detrimental to the charge sensing scheme described above which counts on the sharp jump from one oscillation amplitude state to the other at precisely a bifurcation point. A similar reduction in the area enclosed between the curves corresponding to the forward and reverse ramps for longer ramps was recently observed for a nonlinear semiconductor microcavity [38]. While performing the charge sensing measurement, we choose an input drive strength which gives rise to a region of bistability (\(P_{\rm in}>P_{\rm in}^{\rm(c)}\)) for the chosen cCPT DC bias (\(n_{g},\Phi_{\rm ext}\)) with a corresponding \(K<0\). In order to deterministically initialize the oscillator in the high oscillation amplitude state, we perform a linear ramp on the detuning of the drive tone from a blue- to a red-detuning as illustrated in Fig. 3(a). More details on the initialization section (shaded pink) of this protocol are provided in Appendix B. Once initialized, we measure and average the phase of the reflected signal for a time \(t_{\rm acq}\). Performing this measurement \(N_{\rm tot}=20,000\) times, we obtain a double Gaussian histogram as illustrated in the inset of Fig. 
3(b), and extract the probability of the high oscillation amplitude state, \(P(\omega_{d})\), as the ratio of the area of the left Gaussian to the total area of the histogram. We perform this measurement for different detunings at the end of the initialization step of the pulse in Fig. 3(a), and plot the obtained probability of being in the high oscillation amplitude state for each detuning, obtaining the S-curves in Fig. 3(c). We fit sigmoids of the form \[P(\omega_{d})=\frac{1}{1+\exp\Bigl{\{}-\frac{4.3944(\omega_{d}-\Delta_{0})}{ \gamma}\Bigr{\}}\Bigr{\}}, \tag{6}\] where \(\Delta_{0}\) is the center of the sigmoid, and the numerical factor in the exponential ensures \(\gamma\) is its width between \(P(\omega_{d})=0.1\) and \(P(\omega_{d})=0.9\). As described earlier in Sec. II, we ideally expect an abrupt step in \(P(\omega_{d})\) from \(1\to 0\) at the lower bifurcation point for our ramp protocol in Fig. 3(a). However, from Fig. 3(c), we clearly do not see an abrupt step in \(P(\omega_{d})\) at just the bifurcation point, but a gradual change in its value across a range of detunings, whose behavior for different cCPT bias points and drive parameters we will now study. For systems where the ratio \(\frac{|K|}{\kappa_{\mathrm{tot}}}<1\)[40], close to a bifurcation, the switching between these two metastable oscillation amplitude states is described by a quantum activation model which predicts fluctuation-induced escape over a metapotential barrier [35; 41]. This has been demonstrated to accurately model the switching between these states in nanomechanical systems [42; 43], Josephson bifurcation amplifiers [23], and in Josephson junction array devices [44; 45]. For systems with Kerr strengths comparable to the cavity linewidth, a quantum calculation is required to accurately model this switching [40]. We discuss some of the possible sources of these fluctuations in Sec. IV. From a charge sensing point of view, we want the S-curves for two cCPT gate biases separated by a given \(\delta n_{g}\) to have a large separation between their centers, \(\Delta_{0}\), while the widths of these sigmoids, \(\gamma\), should remain small. Additionally, in order to perform single-shot measurements separating the oscillation state using a threshold phase value at the middle of the two Gaussian peaks in the inset in Fig. 3(b), we need to minimize the overlap between the Gaussians. Fig. 4(a) shows the variation of the centers of the sigmoid, \(\Delta_{0}\), vs cCPT gate bias, \(n_{g}\), for different cCPT flux biases, \(\Phi_{\mathrm{ext}}\). For each flux bias, the largest separations between the centers of two S-curves are observed for large gate biases. Given that we work close to \(n_{g}=0.71\), the largest separation is for flux values close to \(\Phi_{\mathrm{ext}}=0\). This Figure 3: (a) Charge sensing protocol. Detuning of the input pulse tone used to initiate and readout the oscillation state in the charge sensing experiment described in the text with representative values for the durations and detunings of the different sections. The pink area depicts the initialization segment to initialize the oscillator in the high oscillation amplitude state. The phase is measured and averaged during the green segment, for a time \(t_{\mathrm{acq}}\). We wait a time \(t_{\mathrm{down}}=5\,\mu\mathrm{s}\) between consecutive pulses and set \(f_{\mathrm{hat}}=0\). 
(b) Schematic illustrating sigmoid S-curves for two different cCPT gate biases illustrated in pink and green, with the black arrow denoting the maximum contrast between the two. The inset shows a representative histogram of the reflected phase, \(\phi\), upon running the above pulse sequence \(N_{\mathrm{tot}}=20,000\) times. The two Gaussian distributions correspond to the oscillator being in the high (left) and low (right) oscillation amplitude states respectively, with the solid lines representing a double Gaussian fit. (c) Obtained S-curves (\(\circ\)’s) for different \(n_{g}\) values at \(\Phi_{\mathrm{ext}}=0.06\Phi_{0}\) and an input drive power \(P_{\mathrm{In}}=-128\) dBm. The averaging time, \(t_{\mathrm{acq}}=3\,\mu\mathrm{s}\). The solid lines are sigmoid fits to Eq. (6). The horizontal error bars represent the standard deviation of the resonant frequency fluctuations due to charge and flux noise at the cCPT DC bias point [16; 39]. Figure 4: (a) S-curve centers, \(\Delta_{0}\), vs gate charge, \(n_{g}\), for different different flux biases, \(\Phi_{\mathrm{ext}}\), for \(P_{\mathrm{In}}=-128\) dBm. The error bars are smaller than the markers. (b) S-curve widths, \(\gamma\), vs \(K\) at the cCPT DC bias points in (a), for different drive strengths. The error bars are the 95% confidence intervals to the sigmoid fits. is related to both the large variation in the ground state energy of the cCPT at these DC bias points, and consequently the linear resonance frequency, \(\omega_{0}\), as in Fig. 2(a) [16], as well as the variation of the metapotential landscape in the bistable region that limits the extent of switching between the two oscillation amplitude states at a given set of drive parameters. The separation between the \(\Delta_{0}\) for two distinct cCPT bias points is also found to be largest at low input powers, \(P_{\rm in}\) (data not shown). Fig. 4(b) shows the variation of the width of the sigmoids, \(\gamma\), plotted against the theoretically computed value of the Kerr coefficient, \(K\), from Fig. 1(b) at different cCPT bias points (\(n_{g},\Phi_{\rm ext}\)), for different \(P_{\rm in}\). The cCPT DC bias points we are interested in based on the separation of the centers of the sigmoids in Fig. 4(a) correspond to \(K/2\pi=-600\) to -800 kHz, and we see \(\gamma\) is much smaller for lower \(P_{\rm in}\) at these bias points. This is related both to the reduction in width of the bistable region with decreasing \(P_{\rm in}\), as seen from Eq.(5), and to the reduction in barrier height of the metapotential with increasing \(P_{\rm in}\)[40; 46]. This reduced barrier height enables switching between the two oscillation amplitude states within experimental timescales over a wider range of detunings. The other consideration in demonstrating single-shot readout is the resolution of the two Gaussians in the inset of Fig. 3(b). Fig. 5(a) shows the separation between the centers of the two Gaussians for all the cCPT DC bias points in Fig. 4(a) for a range of \(P_{\rm in}\). The smallest Kerr strengths, \(|K|\), and the lowest drive strengths, \(P_{\rm in}\), give us maximum separation. We understand this as follows. The detuning corresponding to the maximum oscillator response is either negative (spring softening) or positive (spring hardening), based on \({\rm sgn}(K)\). The degree of softening of the oscillation response curve (Fig. 1(c)) and hence the reflection coefficient, \(S_{11}\), illustrated in Fig. 1(d-e) depends on \(|K|\). 
The low oscillation amplitude response is close to zero at all detunings in the bistable region, with the corresponding \(S_{11}\) close to 1. Meanwhile, the high oscillation amplitude increases from close to zero at the upper bifurcation point to a maximum value at the lower bifurcation point, with a slope inversely proportional to \(|K|\), with similar behavior for \(S_{11}\). At detunings close to the upper bifurcation point around which we see non-zero probability for both high and low oscillation amplitude states within experimental lifetimes, the high amplitude oscillation response and the corresponding \({\rm Phase}(S_{11})\) at a given detuning assume a finite non-zero value whose magnitude depends inversely on \(|K|\), as seen in Fig. 5(a), such that the difference in \({\rm Phase}(S_{11})\) between the high and the zero phase low amplitude state also depends inversely on \(|K|\). To understand the variation with \(P_{\rm in}\), we compare the blue and the green curves in Figs. 1(c-e), which are both for the same \(K\). We see that the slope of the amplitude response of both bistable states is nearly independent of \(P_{\rm in}\), but close to the upper bifurcation point, the corresponding \(S_{11}\) yields smaller separation between the high and low oscillation amplitude states for higher \(P_{\rm in}\). Fig. 5(a) hence suggests that we work at low \(|K|\) and at low \(P_{\rm in}\) in order to observe maximum separation between the reflected phase of the high and the low oscillation amplitude states. While we saw that the latter condition also yields S-curves with the smallest widths (Fig. 4(b)), Fig. 4(a) shows that the \(\Phi_{\rm ext}\) corresponding to small \(|K|\) values correspond to poor separation between \(\Delta_{0}\) for two cCPT \(n_{g}\) values, which is contradictory to our goal. The pursuit of low \(|K|\) and low \(P_{\rm in}\) suggests operating the cCPT on the cusp of bistability where a large gain in the dispersive readout is expected at a certain detuning [17]. However, operation of the device studied here is not possible in that regime as discussed further in Sec. IV. Figure 5: (a) Measured separation between peaks of Gaussians as in inset of Fig. 3(b) vs \(K\) for different \(P_{\rm in}\). (b) Histogram count \(N^{(0.62)}\) (\(N^{(0.71)}\)) of the phase of the reflected signal for optimal charge sensing. The data is for \(N_{\rm tot}=20,000\) trials each at \((n_{g},\Phi_{\rm ext})=(0.62,0)\) (blue) and \((n_{g},\Phi_{\rm ext})=(0.71,0)\) (red) respectively, while driving the cCPT with an input tone at \(\omega_{d}/2\pi=5.8013\) GHz with \(P_{\rm in}=-128\) dBm. The dashed line denotes the threshold phase, \(\phi_{\rm th}\), used to discriminate the charge state in a single-shot. (c) Measurement fidelity as a function of averaging time \(t_{\rm acq}\) for the drive conditions in (b). With these considerations, we obtain a maximum contrast of \(96.61\%\) in the S-curves between when the cCPT is biased at \((n_{g},\Phi_{\rm ext})=(0.62,0)\) and at \((0.71,0)\), and driven at \(\omega_{d}/2\pi=5.8013\) GHz with \(P_{\rm in}=-128\) dBm. For this drive strength, using an averaging time, \(t_{\rm acq}=3\,\mu\)s, Fig. 5(b) shows the obtained histograms with counts \(N^{(0.62)}(\phi)\) and \(N^{(0.71)}(\phi)\) respectively. The separation between the Gaussian peak centers is \(36^{\circ}\) for this bias point and \(P_{\rm in}\), and the width of each Gaussian is \(12^{\circ}\) for this \(t_{\rm acq}\). 
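As a concrete illustration of the analysis chain described above, the following minimal Python sketch (on synthetic stand-in data) walks through the two fits involved: the double-Gaussian fit of the reflected-phase histogram that yields the high-amplitude-state probability \(P(\omega_{d})\), and the sigmoid fit of Eq. (6) that extracts the S-curve center \(\Delta_{0}\) and width \(\gamma\). The 36\(^{\circ}\) separation and 12\(^{\circ}\) width quoted above are used as assumed Gaussian parameters, and the S-curve center and width are likewise arbitrary illustrative values.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# --- Double-Gaussian fit of a phase histogram -> P(high state), as in the Fig. 3(b) inset.
# Synthetic stand-in: 20,000 shots, peaks separated by 36 deg with 12 deg widths (assumed).
phases = np.concatenate([rng.normal(-18, 12, 13000),    # high-amplitude state (left peak)
                         rng.normal(+18, 12, 7000)])    # low-amplitude state (right peak)
counts, edges = np.histogram(phases, bins=90)
centers = 0.5 * (edges[:-1] + edges[1:])

def double_gauss(phi, a1, mu1, s1, a2, mu2, s2):
    return (a1 * np.exp(-(phi - mu1)**2 / (2 * s1**2)) +
            a2 * np.exp(-(phi - mu2)**2 / (2 * s2**2)))

popt, _ = curve_fit(double_gauss, centers, counts,
                    p0=[counts.max(), -18, 12, counts.max(), 18, 12])
a1, mu1, s1, a2, mu2, s2 = popt
areas = [abs(a1 * s1), abs(a2 * s2)]          # Gaussian area is proportional to amplitude*width
left = 0 if mu1 < mu2 else 1                  # left peak = high oscillation amplitude state
print(f"P(high) = {areas[left] / sum(areas):.3f}")      # expect ~0.65 for this synthetic data

# --- Sigmoid fit of Eq. (6) to an S-curve, giving the center Delta_0 and width gamma.
# The factor 4.3944 = 2*ln(9), so gamma is the detuning span between P = 0.1 and P = 0.9.
def s_curve(w, delta0, gamma):
    return 1.0 / (1.0 + np.exp(-4.3944 * (w - delta0) / gamma))

detunings = np.linspace(-10, -4, 15)                    # MHz (illustrative values)
P_meas = s_curve(detunings, -7.0, 1.2) + rng.normal(0, 0.01, detunings.size)
popt_s, _ = curve_fit(s_curve, detunings, np.clip(P_meas, 0, 1), p0=[-7.0, 1.0])
print(f"Delta_0 = {popt_s[0]:.2f} MHz, gamma = {popt_s[1]:.2f} MHz")
```

In the actual experiment, the second step is repeated for the two gate biases of interest, and the contrast between the resulting S-curves at the chosen drive frequency sets the single-shot fidelity discussed below.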
Using a threshold value \(\phi_{\rm th}\) at the center of the two Gaussian peaks as denoted by the dashed line, we assign a charge state to each histogram data point. Defining the fidelity \(F^{(0.62)}=1-\frac{1}{N_{\rm tot}}\sum_{\phi=\phi_{\rm th}}^{180}N^{(0.62)}(\phi)\) and \(F^{(0.71)}=1-\frac{1}{N_{\rm tot}}\sum_{\phi=-180}^{\phi_{\rm th}}N^{(0.71)}( \phi)\), we obtain an average fidelity \(F=94.59\,\%\). The similarity between the obtained fidelity and the measured maximum contrast which is agnostic to the overlap of the Gaussians caused by the amplifier noise shows that for this \(t_{\rm acq}\), the limiting factor of our measurement is not the signal-to-noise ratio of the amplifier chain, but is the broadening of the S-curves caused by fluctuation-induced switching between the metastable oscillation states. Finally, it is worth noting that using the above DC bias and drive parameters in Eq. (5) along with damping rates \(\kappa_{\rm int}\), \(\kappa_{\rm ext}\) extracted as described in [16; 39], we find that the intracavity photon number, \(n=8.1\) at \(n_{g}=0.62\), and \(n=20.94\) at \(n_{g}=0.71\) respectively in the high oscillation amplitude state. At \(n_{g}=0.71\), for the optimal drive tone, the oscillator resides predominantly in the low oscillation amplitude state with an intracavity occupation on the order of 0.2 photons. These are orders of magnitude fewer photons than used by devices such as the rf-SETs [13; 47]. ## IV Discussion The cCPT operating in the Kerr bistable regime is thus sensitive to changes in its electrostatic environment that produce a shift of \(\delta n_{g}=0.09e\) and we have demonstrated real-time single-shot high fidelity detection of this charge difference in \(3\,\mu\)s. This corresponds to a charge sensitivity per unit bandwidth \(=\delta n_{g}\sqrt{t_{\rm acq}}=155.89\,\mu e/\sqrt{\rm Hz}\). The bandwidth of this electrometer is set by \(\kappa_{\rm tot}/2\pi\approx 1.5\) MHz. This readout is performed with only a few tens of intracavity photons, which is several orders of magnitude smaller than in other state-of-the-art devices [13; 48]. An application of the cCPT as a charge sensor is to detect the state of a quantum-dot spin qubit using spin-to-charge conversion [9; 10; 29]. The backaction by the charge sensor on the system being measured is proportional to the number of intracavity photons [49], making such low cavity number operation desirable [27]. Using techniques such as defining the SET in the Si substrate [50; 51], and extending the cCPT island [5] in order to increase the \(\delta n_{g}\) induced on the cCPT island in the event of a spin tunneling out of a quantum dot, we could work at larger \(P_{\rm in}\) for the same \(\Phi_{\rm ext}\) and corresponding \(K\), while still achieving sufficient contrast and comparable fidelity for much smaller \(t_{\rm acq}\). For the more relaxed \(\delta n_{g}\) requirement, low power operation with smaller \(t_{\rm acq}\) without compromising fidelity would be possible at other \(\Phi_{\rm ext}\) corresponding to smaller \(|K|\) and larger phase separation between the high and the low oscillation amplitude states as in Fig. 5(a), while still retaining a large contrast value. The major limitation to this mode of operation of the cCPT as a charge sensor are the spontaneous fluctuation-induced transitions between the high and low oscillation amplitude states in the bistable regime. The metapotential landscape governing these transitions depends on \(P_{\rm in}\) and \(K\)[40; 46]. 
The \(|K|/\kappa_{\rm tot}\) range of the cCPT lies in the interesting'mesoscopic' region where quantum effects begin to become important [40]. Mapping the metapotential and corresponding switching rates between the high and low oscillation amplitude state for such a device could guide understanding of single-photon Kerr devices [52] which have been proposed as single-photon sources [53], to generate ultra-fast pulses [54] and to be used to implement quantum non-demolition measurements [55]. For a given metapotential, the intensity of the fluctuations present in the system is the other factor that affects the switching rates and hence the width of the S-curves, \(\gamma\). Given that thermal activation is unlikely since \(\hbar\omega_{d}>k_{B}T\), one commonly studied source of fluctuations is the dephasing of the oscillator caused by the modulation of the resonant frequency [36], or equivalently, of the detuning of the drive. The phase noise of the signal generator is typically \(1/f\) in nature and is quite small in the frequency range relevant for escape dynamics such as observed in Fig. 2(c). The resonant frequency fluctuations for systems such as the cCPT has been studied in detail [39], and characterized [16]. The resonant frequency fluctuations due to charge noise arising from fluctuating two-level systems close to the cCPT island, and magnetic flux noise arising due to unpaired surface spins are both \(1/f\) in character, and should not be relevant to the switching dynamics either. However, the frequency modulation due to white photon shot noise [49] induced Kerr shift considered in Ref. [39] would explain the increase in the width of the S-curves at larger \(|K|\) as in Fig. 4(b), working in tandem with the reduced barrier metapotential barrier at larger \(|K|\). A careful study showing a direct correlation between the switching rates of the cCPT and \(P_{\rm in}\) would confirm this hypothesis, since the frequency independent power spectral density of the photon shot noise depends on the average cavity occupation, \(n\). Another avenue to increase the sensitivity of the device would be to reduce the quasiparticle poisoning (QP) in the device. We observe substantial switching out of the even to the odd manifold of the CPT (\(n_{g}\to 1-n_{g}\)) [56] for \(|n_{g}|>0.71\)[16]. As illustrated in Fig. 2(a), the cCPT resonance frequency, \(\omega_{0}\), is most sensitive to \(n_{g}\) close to \(|n_{g}|=1\), and employing techniques such as effective shielding from Cooper-pair-breaking, quasiparticle generating radiation [57; 58] could greatly enhance the performance of the cCPT. Still using this inherent Kerr nonlinearity, but by driving the cCPT with a \(P_{\text{in}}\sim P_{\text{c}}^{\text{(c)}}\) close to but before the onset of bistability where \(\frac{d\Omega_{\text{S1}}}{d\Delta}\rightarrow\infty\) for some \(\Delta\), we should be able to realize a large gain in the charge sensitivity [17]. The presence of gate and flux noise induced resonance frequency fluctuations [39, 16] make it hard to operate at the precise \(\Delta\) where this enhancement is expected, but using the resonance frequency stabilizing feedback scheme demonstrated in [37] should enable such operation. Enhanced cooling of a nanomechanical resonator coupled to a nonlinear cavity operating in this regime has been shown [19]. 
The cCPT Hamiltonian also has other nonlinear terms such as those of a degenerate parametric amplifier which can be driven into resonance using an appropriate flux pump at close to \(2\omega_{0}\). The amplitude of the parametric oscillations induced [59, 60, 61] depends on the gate bias of the cCPT [62], and can be used as a charge sensor, similar to the qubit state detector operating on this principle [63]. ###### Acknowledgements. We thank Ethan Williams, Hui Wang, Archana Kamal, Chandrasekhar Ramanathan, William Braasch and Hory Mohammadpour for helpful discussions and useful feedback on the manuscript. This work was supported by the NSF under Grants No. DMR-1807785 (S. K., B. T., B. L. B., and A. R) and DMR-1507383 (M. P. B.), and by a Google research award (S. K.). ## Appendix A Experimental setup Fig. 6 shows the rf circuitry used in the experiments described in this work. The input tone from a Keysight N5183B signal generator is mixed with an intermediate frequency tone from a Tektronix arbitrary waveform generator whose amplitude envelope can be ramped, or whose frequency can be chirped for the charge sensing protocol (see Fig. 3(a)). This signal passes through various stages of attenuation in the dilution refrigerator before driving the cCPT, which is mounted inside a magnetic shield at the mixing chamber stage of the refrigerator. The reflected signal goes through a circulator to a traveling wave parametric amplifier (TWPA) [28] which serves as the first stage amplifier. The signal is then amplified by a Low Noise Factory LNF LNC4_8C high electron mobility transistor (HEMT) and a room temperature low noise field effect transistor (FET). This signal is then mixed down to an intermediate frequency of 21 MHz, filtered, digitally sampled, and demodulated to extract the phase. The TWPA has an average gain of 18 dB over the operating bandwidth of the cCPT, ensuring that the added noise of the amplifier chain is dominated by the noise added by the TWPA. The added noise density referred to the input of the amplifier chain is separately measured to be \(\sim\)4.67 photons/Hz (noise temperature of 1.28 K), close to the quantum limit of 1 photon/Hz [49] for the phase insensitive TWPA. ## Appendix B Charge sensing protocol initialization Here, we elaborate upon the initialization section of the charge sensing protocol illustrated in Fig. 3(a). In order to initialize the oscillator in the high oscillation amplitude state, we start from a detuning in the monostable region on the positive detuning side in Fig. 1(c-e), and ramp the detuning by \(f_{\text{ramp}}/2\pi=-41\) MHz in a time \(t_{\tau}=530\) ns. The final detuning is close to a bifurcation edge, denoted by the black dashed line in Fig. 2(b). The oscillator is driven with this constant tone for a time \(t_{\text{stab}}=4.9\,\mu\)s, during which time fluctuations could cause a transition to the low oscillation amplitude state. These values for \(t_{\tau}\) and \(t_{\text{stab}}\) were settled upon after performing QuTiP [64] simulations using a master equation solver for the exact input tone in Fig. 3(a), and seeing the system through a transient evolution period to the steady state. Figure 6: Microwave circuitry used in Sec. III. 
This value of \(t_{\rm stab}\) is also close to the nominal value of \(5/(\kappa_{\rm tot}/2\pi)\) over which transients of oscillating systems are expected to decay, even in the region where switching between high and low oscillation amplitude states is observed, where the oscillator dynamics are considerably slowed [40]. The detuning could then be ramped to a slightly larger blue-detuning, \(f_{\rm latch}\), to reduce the probability of a switching event during the measurement time, hence 'latching' the oscillator in the oscillation amplitude state attained at the end of the stabilization time [23]. However, unlike the systems studied in Refs. [43; 23], the low oscillation amplitude state is often not a long-lived state in our system, making the latching a little less likely, and causing the switching statistics to depend on the additional parameters \(f_{\rm latch}\) and \(t_{\rm acq}\). We thus set \(f_{\rm latch}=0\).
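For readers who want to reproduce the kind of master-equation check mentioned above, the sketch below sets up a chirped-drive simulation of the Kerr cavity in QuTiP. It treats the chirp as a slowly varying detuning in the frame of the drive, which is an approximation, and everything apart from \(K\), \(f_{\rm ramp}\), \(t_{\tau}\) and \(t_{\rm stab}\) (the damping rates, drive strength, initial detuning and Fock-space truncation) is an assumed, representative value rather than the parameters actually used by the authors.

```python
import numpy as np
import qutip as qt

# Angular frequencies in rad/us, times in us, hbar = 1.
two_pi = 2 * np.pi
K = -two_pi * 0.47                                   # Kerr coefficient (K/2pi = -470 kHz)
kappa_ext, kappa_int = two_pi * 1.0, two_pi * 0.5    # assumed damping rates
kappa_tot = kappa_ext + kappa_int

N = 40                                               # Fock-space truncation (kept small for speed)
a = qt.destroy(N)

# Detuning ramp: start blue-detuned, ramp by f_ramp/2pi = -41 MHz in t_r = 0.53 us,
# then hold for t_stab = 4.9 us. The final detuning is chosen (assumption) so that the
# ramp ends inside the bistable window for these assumed rates.
t_r, t_stab = 0.53, 4.9
delta_f = -two_pi * 3.5                              # assumed final detuning
delta_i = delta_f + two_pi * 41.0                    # implied initial (blue) detuning

n_in = 30.0                                          # assumed input photon flux [photons/us]
eps = np.sqrt(kappa_ext * n_in)                      # corresponding drive amplitude

H0 = 0.5 * K * a.dag() * a.dag() * a * a + 1j * eps * (a.dag() - a)

def neg_detuning(t, args=None):                      # coefficient of a†a: -Delta(t)
    return -(delta_i + (delta_f - delta_i) * min(t / t_r, 1.0))

H = [H0, [a.dag() * a, neg_detuning]]                # H(t) = -Delta(t) a†a + (K/2) a†²a² + drive
c_ops = [np.sqrt(kappa_tot) * a]

tlist = np.linspace(0.0, t_r + t_stab, 400)
result = qt.mesolve(H, qt.fock_dm(N, 0), tlist, c_ops, e_ops=[a.dag() * a])
print(f"<n> at end of stabilization: {result.expect[0][-1]:.2f}")
```

Because the master equation returns an ensemble average, the final occupation reflects both the initialization into the high oscillation amplitude state and any fluctuation-induced switching during the stabilization time, which is exactly the trade-off the latching step is meant to mitigate.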
2309.14747
Analysis of Vibration and Thermal of a Modeled Circuit Board of Automated External Defibrillator (AED) Medical Device
In this research, the AED was modeled with the Ansys 2020 workbench and calibrated based on static and dynamic loading to verify the static displacement with the first set of five frequencies obtained based on the un-prestressed conditions. With modification, using the prestressed analysis, the next set of frequencies obtained gives an improved result with 0.0003 percent error difference between each frequency. The modeled Circuit board was used to examine the vibration and dynamic analysis for the rigid board. Likewise, the thermal analysis was conducted on the modeled Circuit board with the heat source as the battery and the rate of dissipation of heat around the board and its effect on the circuit components.
Saidi Olayinka Olalere
2023-09-26T08:15:21Z
http://arxiv.org/abs/2309.14747v1
Analysis of Vibration and Thermal of a Modeled Circuit Board of Automated External Defibrillator (AED) Medical Device ###### Abstract This study was performed to analysis the vibration and thermal changes experienced by an Automated External Defibrillator (AED) Medical Device which is exposed to shocks from patient's reactions, vibration from mobile ambulances and air ambulance, and the heat changes due to the battery component on the circuit board. Basically, AED is made of plastic with the external parts containing majorly the display, button, pad socket, speaker and more while the internal part entails the circuit boards which housed the components like resistors, capacitors, inductors, integrated circuits and more. In this research, the AED was modeled with the Ansys 2020 workbench and calibrated based on static and dynamic loading to verify the static displacement with the first set of five frequencies obtained based on the un-prestressed conditions. With modification, using the prestressed analysis, the next set of frequencies obtained gives an improved result with 0.0003% error difference between each frequency. The modeled Circuit board was used to examine the vibration and dynamic analysis for the rigid board. Likewise, the thermal analysis was conducted on the modeled Circuit board with the heat source as the battery and the rate of dissipation of heat around the board and its effect on the circuit components. Thermal Analysis, Vibration Analysis, Modeled Circuit Board, Automated External Defibrillator (AED), Medical Device, Ansys workbench, Dynamic Analysis, Bending effect, Damping effect. ## I Introduction The Automated External Defibrillator (AED) is a life safer to assist those experiencing sudden and life-threatening cardiac arrest (arrhythmias) of ventricular fibrillation and pulseless ventricular tachycardia to analyze the heart's rhythm by delivery electrical shock to reverberate the heart. The AED is made of plastic, and it has external and internal parts. The external part contains the Pad Expiration Window, Latch, Status Indicator, Battery Compartment, and Battery, Electrode holder, Color display, Manual Override button, Shock button, Pad/Electrode Socket, speaker, and IR port. The internal parts include Main board, Display board, Speaker, Display, Shock Discharge Capacitor and Beeper Speaker. The effect of sudden cardiac arrest is the cause of more than 350,000 deaths in the United States and the way to resuscitate a heart is through the AED. On average, the response time for the first responder is between 8 - 12 minutes while for every delay, the survival rate reduces by 10% approximately (The American Red Cross, 2021). The AED is an easy-to-use instrument which doesn't require any special training. It is placed in public places for easy access in case of emergency or sudden cardiac arrest. During Chicago's Heart Start program for a period of two years, among the 22 persons, 18 were in a cardiac arrhythmia which were treated with AEDs. Of these 18, 11 survived. Of these 11 patients, 6 were treated by bystanders with no previous training in AED use (Sherry L. Caffrey, 2002). The AED circuit board for the analysis will be a material made of Epoxy FR-4 with length 254mm and width 216mm while the thickness is 0.5mm. The components of the circuit board include Capacitor, Microcontroller, Flash memory, Analog Digital converter, Field Programmable Gate Array (FPGA), Processor, Audio controller, Inductor and more. 
Based on the design specification and components, a Finite Element Analysis model is established. The governing equation for the experiment is \[M\ddot{x}+C\dot{x}+Kx=F(t), \tag{1}\] where \(M\), \(C\) and \(K\) are the mass, damping and stiffness matrices respectively. The goal is to analyze the effect of the vibration and thermal loads experienced by the AED during its operation. The model undergoes various static and dynamic tests of the modeled circuit board. The dynamic and vibration properties are analyzed for the rigid modeled circuit board based on the number and position of the supports. This research will assist in obtaining reliable results when the Defibrillator is used in mobile ambulances that experience vibration and road bumps. The Automated External Defibrillator is a lifesaving device that is used after Cardiopulmonary Resuscitation (CPR) is performed, and the results from the AED are important for deciding whether CPR should be continued or stopped. The methodology used for the analysis is Finite Element Analysis (FEA), with the circuit board model, including its integrated circuits, designed in the Ansys Workbench. The static and dynamic tests are conducted on the model to determine the bending and damping effects respectively. ## II Literature Review Deformation failures experienced in electronic components are classified as vibration, shock, and thermal failures. The AED circuit board is a device whose failure can impact the result obtained when resuscitating an individual experiencing cardiac arrest. Generally, most electronic circuit boards experience random vibration rather than ordinary (deterministic) vibration due to external factors within the vibration environment. Most research on electronics is based on high-cycle fatigue to predict the fatigue life of components experiencing sinusoidal vibration. Fatigue failure of components under sinusoidal vibration loading has been studied by comparing vibration failure tests, FEA, and theoretical predictions (Y.S. Chen, 2008). FEA modeling of a PCB's vibration with rigid boundary conditions has been compared with test results obtained using a rigid fixture to identify the PCB dynamic properties, such as its natural frequencies (Jingshu Wu, 2002). For random vibration fatigue, circuit board research has been extended to the solder joints by predicting their fatigue life when subjected to random excitation through vibration loading (Pitarresi J.M, 1993). An experimentally validated vibration fatigue damage model of a plastic ball grid array solder joint assembly was developed by (Mei-Ling, 2009) to calculate strain and solder joint survival using a three-band technique. ## III Materials and Methods ### Model preparation The modeled circuit board components are Capacitors, Microcontroller, Flash Memory, Analog-Digital Converter, FPGA, Processor, Audio Controller, Battery, and Transistor. The Automated External Defibrillator is modeled with these different components for analysis. The base plane, which is the board, has a measurement of 254 mm by 216 mm. The capacitors are cylinders with a length of 40 mm and a diameter of 35 mm. The microcontroller is 10 mm by 10 mm by 1.4 mm. The battery is 34.5 mm in length with a diameter of 17 mm. The component materials, together with their Young's modulus, Poisson's ratio, and thermal conductivity, are listed in Table 1. The modeled AED was developed using the Ansys 2020 Workbench, and the model circuit board was designed based on the specific dimensions of the board and its components.
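As a toy illustration of what Eq. (1) implies for the modal analysis reported below, the following Python sketch builds a small lumped 3-degree-of-freedom stand-in for the board (the matrices are invented for illustration, not taken from the actual FEA model), solves the generalized eigenvalue problem \(K\phi=\omega^{2}M\phi\) for the natural frequencies, and shows how a Rayleigh (proportional) damping matrix \(C=\alpha M+\beta K\) translates into modal damping ratios.

```python
import numpy as np
from scipy.linalg import eigh

# Toy 3-DOF lumped stand-in for Eq. (1): M x'' + C x' + K x = F(t).
# These matrices are illustrative only, not the actual FEA model of the board.
M = np.diag([0.020, 0.015, 0.010])               # kg
k = 1.0e5                                        # N/m, assumed spring stiffness
K = k * np.array([[ 2, -1,  0],
                  [-1,  2, -1],
                  [ 0, -1,  1]], dtype=float)

# Undamped modal analysis: K phi = omega^2 M phi
omega2, phi = eigh(K, M)                         # generalized symmetric eigenproblem
freqs_hz = np.sqrt(omega2) / (2 * np.pi)
print("natural frequencies [Hz]:", np.round(freqs_hz, 1))

# Rayleigh (proportional) damping C = alpha*M + beta*K gives modal damping ratios
alpha, beta = 5.0, 2.0e-6                        # assumed coefficients
zeta = 0.5 * (alpha / np.sqrt(omega2) + beta * np.sqrt(omega2))
print("modal damping ratios:", np.round(zeta, 4))
```

The full Ansys model performs the same operations on the assembled matrices of the meshed board, which is why the computed natural frequencies depend on the component materials and on the number and position of the supports.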
Also, there are four fixed supports for the AED board, which are attached to the plastic casing of the Automated External Defibrillator. The FEA is used for the deformation analysis and for the thermal effect on the circuit board and its components. The FEA model is presented in Fig 1. The boundary condition was set at the four edges of the modeled circuit board, which are fixed as rigid bodies as seen in a typical Automated External Defibrillator. ### Mesh Selection The modeled circuit board was meshed to verify the stress discontinuity of the member components attached to the board. The mesh model produces 90371 nodes and 59671 elements using the program-controlled element order. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline SN & Component & Materials & Young & Poisson & Thermal \\ & & & Modulus & ratio & Conductivity \\ \hline 1 & Board & FR4 epoxy & 24 GPa & 0.118 & 0.81 W/mK \\ \hline 2 & Capacitor & Tantalum & 175 GPa & 0.34 & 54.4 W/mK \\ \hline 3 & Microcontroller & Copper & 117 GPa & 0.34 & 305 W/mK \\ \hline 4 & Flash memory & Polystyrene & 3260 MPa & 0.34 & 0.033 W/mK \\ \hline 5 & Analog Digital Converter & Silicon & 140 GPa & 0.275 & 150 W/mK \\ \hline 6 & FPGA & Silicon & 140 GPa & 0.275 & 150 W/mK \\ \hline 7 & Processor & Silicon & 140 GPa & 0.275 & 150 W/mK \\ \hline 8 & Audio controller & Copper & 117 GPa & 0.34 & 385 W/mK \\ \hline 9 & Battery & Lithium & 31715.884 & 0.355 & 5.4 W/mK \\ \hline 10 & Transistor & Silicon & 140 GPa & 0.275 & 150 W/mK \\ \hline \end{tabular} \end{table} Table 1: Numerical Constants of the Components Figure 1: Model of the Circuit Board ## IV Result and Discussion ### Deformation Analysis for 4-Member Support The circuit board undergoes different deformations at different parts of the board. The maximum deformation occurs at the middle of the board, with a peak of 33.141 mm, and the board also bends along its edges, as shown in Fig 3. The taller components deform faster, which will lead to damage of the circuit board. The rotational deformation about the Z axis is the largest, varying from about 10% of the time series before converging at 60%, while about the X axis the deformation is experienced between 10% and 60% of the time series. The Y-axis deformation occurs between 10% and 80% of the time series before finally converging, as seen in Fig 3. From Table 2 and Fig 4, slightly above 30% of the effective mass contributed to the mode in the X direction, while 50% and 55% of the effective mass contributed in the Y and Z directions respectively. The fatigue experienced by the structure shows that more deformation was visible at the center of the board, which has less support compared to the four edges of the modeled circuit board. Fig 5 shows the deformation rate increasing over time, which indicates the time to failure for the components during the vibration test.
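The "mass participation" values reported in Table 2 (and later in Table 3) are effective modal masses. A minimal sketch of how they follow from the mode shapes, continuing the toy 3-DOF example above (again with illustrative matrices, not the actual board model), is given below; it is shown for a translational direction, whereas the rotational entries in the tables use the corresponding rotational influence vector about each axis.

```python
import numpy as np
from scipy.linalg import eigh

# Continues the toy 3-DOF example above: effective modal mass ("mass participation"),
# computed from the mode shapes of the undamped eigenproblem.
M = np.diag([0.020, 0.015, 0.010])
K = 1.0e5 * np.array([[2, -1, 0], [-1, 2, -1], [0, -1, 1]], dtype=float)
omega2, phi = eigh(K, M)

r = np.ones(3)                                   # influence vector for a unit translation
m_eff = []
for i in range(phi.shape[1]):
    v = phi[:, i]
    gamma = v @ M @ r                            # participation factor (eigh mass-normalizes v)
    m_eff.append(gamma**2 / (v @ M @ v))         # effective mass of mode i
m_eff = np.array(m_eff)
print("effective modal mass [kg]:", np.round(m_eff, 4))
print("fraction of total mass captured:", m_eff.sum() / M.trace())   # approaches 1 with all modes
```

Modes whose effective mass adds up to most of the total mass in a given direction dominate the response to base excitation in that direction, which is how the X, Y and Z columns of Tables 2 and 3 are read.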
\begin{table} \begin{tabular}{|l|l|l|l|} \hline Mode & Rotation X & Rotation Y & Rotation Z \\ \hline 1 & 33.669 & 15.71 & 21.22 \\ \hline 2 & 3.20E+01 & 1.21E+01 & 4.72E+01 \\ \hline 3 & 3.18E+01 & 3.71E+01 & 8.91E+01 \\ \hline 4 & 1.61E-01 & 7.13E+00 & 1.33E-01 \\ \hline 5 & 4.98E+00 & 9.73E-02 & 7.10E-03 \\ \hline 6 & 2.18E-03 & 3.70E-03 & 4.09E-03 \\ \hline & 102.6497 & 72.17238 & 157.6688 \\ \hline \end{tabular} \end{table} Table 2: Mass Participation for 4-member Figure 4: Effective mass to Frequency for 4-member Support Figure 5: The total deformation rate of the Circuit Board Figure 3: Static Structural Deformation for 4-member Support Figure 2: The Mesh of the Circuit Board Figure 8: Moment Convergence Figure 6: Force Convergence Figure 7: Displacement Divergence For the force convergence in Fig 6, the sub-step converged towards the iteration end point. The convergence experience longer iteration so the load is evenly distributed as seen by the sub-step. At 20% of the loading, normal stiffness experienced is lowered to improve the analysis results. From the displacement convergence in Fig 7, the normal stiffness was maintained towards the tail end of the analysis signifying that the deformation rate was uniform from 15% iteration. From the moment convergence in Fig 8, with the mesh refinement which lead to nodes increment shows that the analysis converged with uniformity. Fig 9 shows the deformation for an un-prestressed condition. The deformation was experienced more at the location of the capacitors which leads to distortion in the circuit board shape. The modal analysis which is used to investigate the vibration on the circuit board is used to evaluate the natural frequencies as shown in Fig 10. The prestressed analysis assists in improving the results obtained in the un-prestressed process by modifying the stiffness to reduce the natural frequencies inadequacies. This gives better and improve results for the simulation as the analysis was refined to give better frequencies as against the un-prestressed as seen in Fig 11. The modeled circuit board was reinforced with more support members to improve its deformation effect. The deformation peaked at 33.172mm with minimum deformation at the edge of the board as shown in Fig 12. From Fig 12 and Table 3, over 95% effective mass was the participating mode in the Z direction, 60% in the X direction and slightly below 60% of effective masses of the mode in the Y direction. Figure 11: Prestressed Modal Analysis Figure 12: Static Structural Deformation for 8-member Support Figure 10: Un-Prestressed Modal Analysis \begin{table} \begin{tabular}{|l|l|l|l|} \hline Mode & Rotation X & Rotation Y & Rotation Z \\ \hline 1 & 33.442 & 25.723 & 5.2324 \\ \hline 2 & 2.67E+00 & 4.11E-01 & 1.53E+02 \\ \hline 3 & 6.17E+01 & 3.95E+01 & 4.72E-01 \\ \hline 4 & 6.09E-03 & 7.47E+00 & 2.01E-01 \\ \hline 5 & 5.24E+00 & 1.06E-02 & 1.01E-03 \\ \hline 6 & 3.00E-01 & 3.94E-03 & 8.40E-03 \\ \hline & 103.3538 & 73.10313 & 158.784 \\ \hline \end{tabular} \end{table} Table 3: Mass Participation for 8-member Figure 9: Un-Prestresses Modal Total Deformation The blue part of the circuit board shown in Fig 14 is the battery of the modeled circuit board which is used to power the board. The temperature is distributed at this point which leads to heat transfer to the entire circuit board. The heat dissipation to the circuit was high, over 40000\({}^{\circ}\)c of the initial temperature of 23\({}^{\circ}\)c based on the surface area and dissipation rate. 
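To make the thermal picture concrete, a minimal in-plane conduction sketch for the board is given below. The FR-4 conductivity (0.81 W/mK) follows Table 1, while the board's density and specific heat, the battery footprint and its surface temperature are assumptions for illustration; the actual Ansys analysis also includes the component materials and three-dimensional conduction, so the numbers produced here are not comparable to Fig 14.

```python
import numpy as np

# 2D transient in-plane conduction on the 254 mm x 216 mm board.
# k follows Table 1 (FR-4); rho, cp, battery footprint and temperature are assumed.
Lx, Ly = 0.254, 0.216                     # board size [m]
k_fr4, rho, cp = 0.81, 1850.0, 1100.0     # W/mK, kg/m^3, J/(kg K)
alpha = k_fr4 / (rho * cp)                # thermal diffusivity [m^2/s]

nx, ny = 127, 108                         # ~2 mm grid
dx, dy = Lx / nx, Ly / ny
T = np.full((ny, nx), 23.0)               # initial board temperature [deg C]
battery = (slice(50, 67), slice(55, 64))  # assumed battery footprint (~34 mm x 18 mm)
T_batt = 60.0                             # assumed battery surface temperature [deg C]

dt = 0.2 * min(dx, dy) ** 2 / alpha       # stable explicit time step
steps = 5000
for _ in range(steps):
    Tp = np.pad(T, 1, mode="edge")        # zero-flux (insulated) board edges
    lap = ((Tp[2:, 1:-1] + Tp[:-2, 1:-1] - 2 * T) / dy ** 2 +
           (Tp[1:-1, 2:] + Tp[1:-1, :-2] - 2 * T) / dx ** 2)
    T = T + alpha * dt * lap
    T[battery] = T_batt                   # battery region held at fixed temperature

print(f"after {steps * dt / 3600:.1f} h: max {T.max():.1f} C, mean {T.mean():.1f} C")
```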
The thermal error of the battery, as shown in Fig 15, is high, which is due to the temperature dissipated into the components surrounding it on the circuit board. The temperature effect gives a higher thermal error over a small time interval, which will lead to a faster rate of deformation of the circuit board. Figure 13: Effective mass to Frequency for 8-member Support Figure 14: Thermal State of the Circuit Board Figure 15: Thermal Error of the Circuit Board ## Discussions The analysis was used to study the effect of modal (vibration) and thermal changes on the medical device called the Automated External Defibrillator. The experiment was carried out using Finite Element Analysis, in which the model was developed with the Ansys Workbench and subjected to static structural, modal, and steady-state thermal analyses. From other research results, it can be verified that the natural frequency ranges of the FEA model and the experimental model are comparable, with deviations that may be due to the smeared property approach and the boundary conditions. In this research, the detailed joints between the components and the circuit board are not considered; the emphasis is instead on the face-to-face contact of the components with the circuit board. Also, the parametric iteration method used to determine the damping varies it from 0.001 to 0.005 in a step of 0.005 s. ## V Conclusion The zero frequencies experienced are due to the superfluous rigid-body modes, and a weak spring was used to reduce this superfluous effect by minimizing the zero frequencies. The natural frequencies improved when the prestressed analysis was used, and better results were obtained. The stiffness of the modeled circuit board is not uniformly distributed because it depends on the components attached to the board and on the component materials used. The study shows an improved natural frequency, with a percentage error difference of 0.0003% obtained from the prestressed analysis. The deformation was more pronounced on components with greater heights, such as the capacitors, so it is suggested that flat capacitors of lower height would be more suitable in the design. Likewise, the heat dissipated by the battery is large, and a better dissipation path is required. Also, the lithium battery has a high specific heat capacity of 3582 J/(kg.K), which has a strong temperature effect on the circuit board. It is suggested that the battery be placed on a different board or that a cooling fan be built into the Automated External Defibrillator equipment. ## Acknowledgment The author wishes to thank Professor Rahman Mosfequr.
2309.14753
Advanced Volleyball Stats for All Levels: Automatic Setting Tactic Detection and Classification with a Single Camera
This paper presents PathFinder and PathFinderPlus, two novel end-to-end computer vision frameworks designed specifically for advanced setting strategy classification in volleyball matches from a single camera view. Our frameworks combine setting ball trajectory recognition with a novel set trajectory classifier to generate comprehensive and advanced statistical data. This approach offers a fresh perspective for in-game analysis and surpasses the current level of granularity in volleyball statistics. In comparison to existing methods used in our baseline PathFinder framework, our proposed ball trajectory detection methodology in PathFinderPlus exhibits superior performance for classifying setting tactics under various game conditions. This robustness is particularly advantageous in handling complex game situations and accommodating different camera angles. Additionally, our study introduces an innovative algorithm for automatic identification of the opposing team's right-side (opposite) hitter's current row (front or back) during gameplay, providing critical insights for tactical analysis. The successful demonstration of our single-camera system's feasibility and benefits makes high-level technical analysis accessible to volleyball enthusiasts of all skill levels and resource availability. Furthermore, the computational efficiency of our system allows for real-time deployment, enabling in-game strategy analysis and on-the-spot gameplan adjustments.
Haotian Xia, Rhys Tracy, Yun Zhao, Yuqing Wang, Yuan-Fang Wang, Weining Shen
2023-09-26T08:29:02Z
http://arxiv.org/abs/2309.14753v1
# Advanced Volleyball Stats for All Levels: ###### Abstract This paper presents PathFinder and PathFinderPlus, two novel end-to-end computer vision frameworks designed specifically for advanced setting strategy classification in volleyball matches from a single camera view. Our frameworks combine setting ball trajectory recognition with a novel set trajectory classifier to generate comprehensive and advanced statistical data. This approach offers a fresh perspective for in-game analysis and surpasses the current level of granularity in volleyball statistics. In comparison to existing methods used in our baseline PathFinder framework, our proposed ball trajectory detection methodology in PathFinderPlus exhibits superior performance for classifying setting tactics under various game conditions. This robustness is particularly advantageous in handling complex game situations and accommodating different camera angles. Additionally, our study introduces an innovative algorithm for automatic identification of the opposing team's right-side (opposite) hitter's current row (front or back) during gameplay, providing critical insights for tactical analysis. The successful demonstration of our single-camera system's feasibility and benefits makes high-level technical analysis accessible to volleyball enthusiasis of all skill levels and resource availability. Furthermore, the computational efficiency of our system allows for real-time deployment, enabling in-game strategy analysis and on-the-spot gameplan adjustments. The source code of our framework is publicly available1. Footnote 1: [https://github.com/volleyballEEE/VolleyStats](https://github.com/volleyballEEE/VolleyStats) sports analytics, setting trajectory extraction, setting tactics classification, deep learning, volleyball statistics ## I Introduction Volleyball, one of the most popular team sports worldwide, is renowned for its dynamic strategies and complex tactics. Analyzing volleyball requires a detailed approach that considers many factors influencing each play. Over the past decade, there have been significant strides in integrating computing technology with tactical analysis in sports such as basketball [8, 10, 11] and soccer [2, 9, 12]. In contrast to basketball and soccer, the incorporation of computational assistance for volleyball tactical analysis is still in its early stages, holding considerable untapped potential. Current approaches to recording volleyball technical statistics, such as service errors, attack points, attack efficiency, and reception efficiency, offer only a limited perspective of the game dynamics. These metrics are often manually recorded during matches and lack comprehensive support for in-game decision-making. Scoring in a volleyball match primarily revolves around attacking, which mostly occurs at the end of a rally. Teams employ specific tactics and strategies to determine the optimal player, target location, and technique for ball hitting. Thus, understanding a team's setting tactics and distribution during the game is critical for both coaches and players. By understanding these tactics and distributions, players can adapt their blocking and defensive strategies accordingly. Advanced technical statistics [3], which include detailed setting patterns and tactics, have been proposed to enhance tactics and in-game analysis. These statistics differentiate between front-row and back-row attack setting patterns (where players jump from either in front of or behind the 3-meter line as per volleyball rules [20]). 
Front-row and back-row attacks require vastly different defensive strategies to counter. Thus being able to differentiate between a front- and a back-row set is highly important for volleyball analysis. Additionally, with the evolving athletic capabilities of players, the frequency of back-row sets has increased over the years, making it even more relevant to accurately analyze these types of sets. Despite their utility, these advanced statistics must be manually labelled from post-game videos. Although current stats are relatively straightforward, they are not universally adopted due to the manual input process and the cost of training a recorder. Therefore, the implementation of these advanced setting pattern statistics in real games, especially in non-top professional level games, is nearly impossible as it requires recorders to possess extensive knowledge of volleyball. Other sports such as basketball have paved the way by successfully incorporating automatic methods to generate advanced stats that fulfill the demands of professional games and expert analysts. Companies including STATS (Sports VU) [4], Second Spectrum [5], and NBN23 [6] have commercialized advanced game statistics using multi-camera detection. Given the high cost of multi-camera usage, affordable alternatives have also been introduced [7]. In line with basketball, volleyball also needs an automatic method to produce advanced statistics that meet the needs of modern professional coaches and players, aiding them to make on-court decisions. While Chen [13] and Chakraborty [14] proposed a framework to automatically extract ball trajectory and classify/detect the setting pattern, their methods do not differentiate between front-row and back-row attacks. This distinction is becoming increasingly critical as modern volleyball accelerates and places more emphasis on three-dimensional attacks. Commercial solutions such as PlayfulVision [15] provide tactical statistics for top-level volleyball games using multi-view cameras. Some studies also demonstrate the superior performance of multi-view cameras [16, 17, 18] in ball trajectory extraction. However, it is important to acknowledge that volleyball, compared to football and basketball, receives less investment, particularly in non-professional level matches. Thus, it remains impractical to employ multi-angle cameras for extracting volleyball information in regions with general levels of play. Nevertheless, the popularization and advancement of volleyball remain achievable goals. To bridge this gap and empower players of average skill and the general public with these advanced volleyball setting tactics and pattern statistics, we propose a setting tactics pattern recognition framework, PathFinder. PathFinder is a low-cost, end-to-end Computer Vision framework that takes raw videos of volleyball rounds as inputs and outputs detected advanced statistics, including set tactic classification. Our PathFinder framework, along with the improved PathFinderPlus framework, serves as a promising foundation for showcasing the current possibilities of automated advanced volleyball analysis, yielding satisfactory results across various video angles and qualities. Notably, our framework is designed to work with a single camera and incorporates back-row attack recognition. This not only promotes a deeper understanding of volleyball but also enhances the overall enjoyment of the sport, ultimately driving the further growth and popularity of volleyball. 
Our framework offers three significant advantages. Firstly, it aligns well with current match recording practices, as many coaches save game recordings round-by-round during the match. Our framework is capable of directly analyzing these recordings and generating an advanced version of real-time setting tactics and pattern statistics, aiding coaches in making informed decisions during the game. Secondly, our solution eases the burden on statistic recording personnel, removing the need for a background in volleyball or an understanding of complex data recording tasks. Lastly, our framework exhibits extensive applicability, catering to various levels of play, including university, high school, and other non-professional games. Irrespective of the specific setting, as long as a camera is available, our algorithm can be easily deployed for analysis and statistics generation. In summary, our contributions are primarily four-fold: * We propose the first end-to-end computer vision framework capable of detecting and classifying volleyball setting patterns, including distinguishing between back-row and front-row setting patterns. * We introduce PathFinderPlus, a modified version of our PathFinder framework with a novel ball extraction method that improves the performance of our setting pattern classifier by 4%-5% under various game conditions. * We are the first to propose an algorithm that leverages early scoring information to determine whether the opposing team's opposite hitter is in the back row or the front row, which can be extended to track the rotations of all players throughout the game. * Finally, our system's high computational efficiency allows for real-time deployment. This can enable coaches and players to analyze in-game strategies and make on-the-spot adjustments to their game plans based on the statistics generated by our system. In addition, the system's capability for after-game film study could provide an additional tool for teams to review their performance and strategize for future matches. The remainder of this paper is organized as follows. Section 2 discusses related work. The formal problem description is in Section 3. The proposed framework is described in Section 4. Experiments and results are discussed in Section 5. A discussion of future work is presented in Section 6. ## II Related Work In this section, we review related work on automated setting pattern classification, as well as volleyball trajectory extraction methodologies. ### _Automatic setting pattern detection and classification_ With the modernization of volleyball, athletes' physical fitness is continually improving, leading to increased speed and height in attacks. As the game progresses, acquiring real-time tactical statistics of the opposing team has become more important than ever. Existing statistical methods, however, are inadequate for monitoring the distribution of the opponent's tactics. Volleyball experts constantly emphasize the importance of real-time access to detailed tactical distribution statistics of opponents, as this information enables coaches to analyze potential setting routes used by the opponents in critical moments, allowing them to establish suitable blocking and defensive systems. This is crucial for securing key points and, ultimately, winning the game. Therefore, introducing a more advanced method for setting distribution statistics is essential. In a 2012 article [13] by Chen, the author proposed a classification concept for setting tactics. 
However, this classification has become less significant in the modern volleyball era because the importance of back-row attacks has significantly increased over the past decade, and the blocking defense formation corresponding to back-row attacks is completely different from that for front-row attacks. Failure to distinguish between these setting patterns can result in ineffective defense and negatively impact the team as a whole. Xia [3] introduced a technical statistical model in his article that better addresses the requirements of volleyball experts, incorporating separate concepts for front and back-row tactics. However, the implementation of their methods for recognizing setting tactics relied on manual labeling, which is challenging to achieve in real-time gameplay. This is because accurately identifying and categorizing each tactic would necessitate the recorder possessing a moderate level of volleyball knowledge, which is often unrealistic for most locations and levels of matches. ### _Ball extraction methodologies_ To achieve the objective of automatically recognizing and classifying set patterns, the initial step involves extracting the ball trajectory from video footage. PlayfulVision [15], Takahashi [16], Chen [17], and Cheng [18] have introduced methods for trajectory recognition using 3-D multi-view cameras. However, these methods do not align with our goal of providing players at all levels with access to the advanced technical statistics offered by our framework, since it is challenging to ensure the availability of multi-view cameras in most games, especially at lower levels. For trajectory recognition with a single camera, Chen [13] and Chakraborty [14] combined physical methods with traditional computer vision (CV) techniques for trajectory recognition in 2011 and 2012, respectively. However, as volleyball's speed continues to increase, these methods face greater challenges. In 2020, Toporov [19] proposed a fusion of deep learning and traditional computer vision for trajectory recognition, which provides improved performance and speed. We have based the ball extraction methodology in our PathFinder framework on Toporov's method [19], and improved upon it in our PathFinderPlus framework. ## III Problem Formulation In modern volleyball games, single-camera in-game videos are commonly used to facilitate coaches' technical analysis post-game. These recordings are typically captured from a camera placed behind one of the teams, allowing for a round-by-round documentation of the game for meticulous examination. Since our input data follows a similar round-based structure, we frame our problem formulation accordingly: ### _Inputs_ * \(G=\{g_{1},g_{2},...,g_{n}\}\): A volleyball game is represented as a series of rallies, where each rally \(g_{i}\) is a sequence of rounds \(g_{i}=\{r_{1},r_{2},...,r_{m}\}\). * \(R=\{r_{1},r_{2},...,r_{m}\}\): Each round consists of a series of video frames \(r_{i}=\{v_{1},v_{2},...,v_{k}\}\) depicting the volleyball play during that round. * \(B=\{b_{1},b_{2},...,b_{q}\}\): Within each round, a sequence of the ball's positions is detected from the video frames, represented as the trajectory of the ball \(B\). ### _Outputs_ \(T\): a set of volleyball tactics, \(T=\{t_{1},t_{2},...,t_{p}\}\), identified for each round in each rally of the volleyball game. 
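To make the round-based formulation concrete, a minimal sketch of how these inputs and outputs might be represented in code is shown below; the container choices and names are purely illustrative and are not taken from the released framework.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

import numpy as np

Frame = np.ndarray                  # an H x W x 3 image, one of the v_j
BallPosition = Tuple[float, float]  # a detected 2D ball position, one of the b_j


@dataclass
class Round:
    """One round r_i: its video frames, the detected ball trajectory B, and the tactic label."""
    frames: List[Frame] = field(default_factory=list)
    trajectory: List[BallPosition] = field(default_factory=list)
    tactic: str = "unknown"  # an element of T, filled in by the classifier


@dataclass
class Rally:
    rounds: List[Round] = field(default_factory=list)  # g_i = {r_1, ..., r_m}


@dataclass
class Game:
    rallies: List[Rally] = field(default_factory=list)  # G = {g_1, ..., g_n}
```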
### _Objective_ Our objective is to accurately identify and classify the setting tactics \(T\) used in each round of a volleyball game, given the input video frames \(V\) in each round and the ball's trajectory \(B\). This can be formalized as an optimization problem where the accuracy of our tactic classification is maximized: \[\text{maximize: accuracy}(T,T_{\text{detected}}),\] where \(T_{\text{detected}}\) are the tactics detected by our model for each round, and accuracy is a function that calculates the fraction of correct tactic detections in each round. The exact definition of accuracy can vary but is usually defined as \[\text{accuracy}(T,T_{\text{detected}})=\frac{\text{number of correct detection}}{\text{total number of detection}}.\] This optimization problem consists of two parts: ball trajectory detection and tactic classification. In order to complete our objective, we must first achieve precise ball detection in the video frames \(V\) to extract the ball's trajectory \(B\). However, due to the typically poor quality of cameras used in filming volleyball games, even with advanced computer vision models, this task poses significant challenges. After detecting the ball trajectory, our next step is to determine what setting tactic \(T\) was used. Accurate set tactic classification would enable automated detection and labelling of one of the most important advanced volleyball variables analyzed in Xia et al. [3]. ## IV Method In order to automatically classify volleyball setting strategies and patterns from the video, we break the framework down into the following steps: Ball's Trajectory Extraction, Setting Trajectory Extraction, opposite Front-Back-Row Rotation Recognition, and Setting Path Classification. With all these steps combined, we call this framework _PathFinder_. ### _Ball Trajectory Extraction_ #### Iv-A1 Initial Volleyball Extraction Methodologies We initiate our analysis by applying established Computer Vision (CV) techniques, i.e., pre-trained deep neural network models such as YOLOv8 [22], for volleyball detection. However, the challenges presented by poor camera quality, noisy backgrounds, and variable ball designs have hindered the success of pre-trained deep neural network models in achieving accurate trajectory prediction, making it largely unfeasible. Recognizing the limitations of solely relying on these pre-trained deep neural network models, we explore a combination of neural network and traditional CV techniques, specifically tailored for volleyball detection. Toporov's blended methodology for volleyball extraction [19] consists of four primary phases: a preprocessing step for ball detection using traditional CV strategies, a Convolutional Neural Network (CNN) for ball tracking (Video processing & model training in figure 1), a step for path trajectory detection (Video ball detection in figure 1), and a filter for ball trajectory selection (ball trajectory selection in figure 1). The preprocessing phase involves applying Gaussian Blur, Background Subtraction, morphological operations, and Contour Detection to identify potential ball areas from a series of images. Specifically, Gaussian Blur is employed to minimize high-frequency noise in the images, resulting in smoother images. 
The Gaussian function in two dimensions is represented as: \[G(x,y)=\frac{1}{2\pi\sigma^{2}}e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}, \tag{1}\] where \(x\) and \(y\) represent the horizontal and vertical distances from the origin, respectively, and \(\sigma\) denotes the standard deviation of the Gaussian distribution, the selection of which is automated. Subsequently, a Background Subtractor is utilized to differentiate static and dynamic elements in the video footage. Morphological operations, namely dilation and erosion [23], are executed to minimize noise and refine the segmented foreground. Finally, contours are detected and outlined around potential ball regions. These potential ball regions are fed into a pre-trained CNN model for ball classification and tracking. The CNN model, constructed using the Keras library, consists of two Convolutional layers (32 filters for the first and 64 for the second, both with a filter size of 3x3 and ReLU activation), two Max Pooling layers, a Flatten layer, a Fully Connected layer with 64 neurons and ReLU activation, a Dropout layer with a rate of 0.1, and a Softmax output layer with two neurons. The model was compiled using the Stochastic Gradient Descent (SGD) optimizer with a learning rate of 0.01, categorical cross-entropy as the loss function, and accuracy as the performance metric. The model was trained on a dataset of 32x32 RGB images for 50 epochs with a batch size of 32. The trained model weights and architecture were saved for subsequent usage. In the post-detection phase, this method [19] utilized a custom-defined Blob class to track and manage the identified balls. This Blob class represents an object being tracked and updated at each frame according to the CNN output. It contains variables such as "id" (a unique identifier for each blob), "pts" (a list that stores the positions of the blob), and "status" (the status of the blob, such as "still" or "directed moving"). During tracking, each Blob object maintains a "pts" list, to which the new position of the ball is appended at each frame. This list effectively forms the trajectory of the ball, as it records all the ball positions in a time-sequential manner. This "pts" list is then utilized to predict the ball's next position. It is assumed in the method that the ball's movement is uniform in the short term, hence the next position of the ball can be predicted by fitting a linear model on the most recent positions. Additionally, the Blob object also holds a "status" property to denote the status of the ball, such as whether it is moving and the direction of its movement. If the ball's position changes between two consecutive frames, the method updates its status to "directed moving". Otherwise, the status remains "still" if the ball's position does not change or if the ball direction is not the same during the last three frames. This process can be described in terms of the newly added point \((x,y)\), the second last point \((x_{-1},y_{-1})\), and the third last point \((x_{-2},y_{-2})\) in each 1-d array in the "pts". The method defines \(dx1=x-x_{-1}\), \(dx2=x_{-1}-x_{-2}\). If \(dx1\cdot dx2>0\), it indicates that they are in the same direction. The \(y\) direction can be checked in a similar way. For the condition of being "still", we check the distance between the newly added point \((x,y)\) and the second last point in each 1-d array in the "pts". If the distance is less than a threshold (e.g., 5 pixels), we consider the ball as "still". This mechanism allows us to detect the ball's movement status while tracking it. By utilizing the Blob object, we gain access to crucial information about the ball, including its real-time position, movement trajectory, and status. To facilitate this, we created a list of Blob objects, which serves as a container for storing all detected balls within the current frame. We iterate over this list at each frame, enabling us to update the position and status of every ball. This procedure ensures the continuous tracking and monitoring of moving balls, providing real-time updates on their positions and status. Upon completing the tracking process, the final recognized trajectories consist of the "pts" arrays of all Blob objects with the status "directed moving". These trajectories represent the continuous paths followed by all detected balls exhibiting directed movement throughout the series of video frames, hence providing valuable data for further tactical analysis.
Figure 1: Overview of the PathFinder and PathFinderPlus frameworks is presented. The PathFinder framework incorporates Toporov's ball trajectory extraction method [19] (represented by the uncolored blocks), along with our proposed Setting Pattern Extraction and Setting Pattern Detection Classifier (represented by the yellow blocks) to yield the outcome of the setting pattern tactics classification (represented by the green block). The PathFinderPlus framework enhances the PathFinder by integrating the PathFinderPlus filter (represented by the blue block) to achieve improved setting trajectory extraction results.
However, this original method has its limitations as it only considers whether the most recent movements are in the same direction, without effectively filtering false positives. This can lead to inaccuracies as it neglects the overall trend of the ball's movement and instead focuses on localized changes between frames. For instance, certain paths might be mistakenly labeled as "still" because the method was influenced by a few false positives towards the end of the trajectory. These false positives could briefly divert the direction of the ball, causing the method to inaccurately update the ball's status to "still". This scenario highlights the potential issues when the analysis only considers recent movements without studying the overall trend of the ball's movement. #### Iv-A2 The PathFinderPlus Ball Detection Approach To address the limitations of the original ball extraction method described above, we propose an improved blended ball detection and trajectory tracking algorithm. The new method is called _PathFinderPlus_: the same PathFinder framework, but with superior ball detection and tracking ability. The _PathFinderPlus_ ball detection method introduces a filter including two new mathematical functions, namely _evaluateXDecrease_ and _evaluateXIncrease_. The _evaluateXDecrease_ function is defined as follows: given a series of points \((x_{1},y_{1}),(x_{2},y_{2}),\ldots,(x_{n},y_{n})\) in the "pts" list, we calculate the differences between consecutive x-coordinates, \(dx_{i}=x_{i+1}-x_{i}\) for \(1\leq i<n\). We then determine if a majority of these differences are negative. Similarly, the _evaluateXIncrease_ function checks if a majority of these differences are positive. 
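To make the two ingredients above concrete, we include two brief Python sketches. They follow the textual description only; the layer ordering inside the network, placeholder names such as `x_train`/`y_train`, and any other detail not fixed by the text are our own illustrative assumptions rather than the released implementation. The first sketch is the Keras ball classifier used to verify candidate ball regions:

```python
from tensorflow import keras
from tensorflow.keras import layers


def build_ball_classifier() -> keras.Model:
    """32x32 RGB candidate region -> ball / not-ball, as described in the text."""
    model = keras.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.1),
        layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Training as described: model.fit(x_train, y_train, epochs=50, batch_size=32)
```

The second sketch is the PathFinderPlus majority-direction filter; the function names follow the text, while the bodies are our reading of the definition:

```python
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) ball position in image coordinates


def evaluateXDecrease(pts: List[Point]) -> bool:
    """True if a majority of the consecutive x-differences dx_i = x_{i+1} - x_i are negative."""
    dxs = [pts[i + 1][0] - pts[i][0] for i in range(len(pts) - 1)]
    return bool(dxs) and sum(dx < 0 for dx in dxs) > len(dxs) / 2


def evaluateXIncrease(pts: List[Point]) -> bool:
    """True if a majority of the consecutive x-differences are positive."""
    dxs = [pts[i + 1][0] - pts[i][0] for i in range(len(pts) - 1)]
    return bool(dxs) and sum(dx > 0 for dx in dxs) > len(dxs) / 2


def pathfinderplus_filter(pts: List[Point]) -> bool:
    """Keep a path whose x-coordinates show a majority increasing or decreasing trend,
    so that a few false positives near the end cannot invalidate the whole trajectory."""
    return evaluateXDecrease(pts) or evaluateXIncrease(pts)
```

Because the test is a global majority vote over the whole path rather than a check on the last few frames, a handful of spurious detections shifts the vote only marginally.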
Mathematically, this can be interpreted as checking if the derivatives of most of the x-coordinates are negative or positive, respectively. Furthermore, the _PathFinderPlus_ Filter function has been added to the Blob class, which applies the majority decreasing or increasing check on the path of the ball. If a majority of the x-coordinates in the path follow a decreasing or increasing pattern, the ball's path is considered as a valid trajectory. This global approach is beneficial since it considers the overall trajectory of the ball rather than just the most recent movements. This is particularly useful when there are false positives in the "pts" array, which may temporarily deviate the ball's trajectory but do not affect the overall movement direction. Even if such points appear at the end of the trajectory, our method would still be able to correctly recognize the general movement trend, making it more robust than the original approach. Therefore, with the _PathFinderPlus_ ball detection method, we can more accurately recognize and track the path of moving objects, even when they have some slight changes in their direction due to less-than-ideal ball detection with the poor camera quality. The _PathFinderPlus_ ball tracking algorithm also allows us to track and update the status and position of each detected ball in real-time, achieving continuous tracking of moving balls, but with improved accuracy and robustness compared to the original approach. We are aware that there are other numerical methods for curve fitting and outlier detection [13, 17] that can be used for ball tracking. For instance, according to the laws of physics, a ball in motion influenced by gravity will follow a parabolic trajectory. By considering the general viewing conditions, where a 3D parabolic curve is projected onto a 2D plane, we can utilize quadratic fitting techniques to predict the future locations of the ball based on its trajectory observed in the game video. Regarding outlier detection, standard RANSAC [21] outlier filtering methods can be employed to identify and eliminate outliers in the ball detection process. However, we chose not to utilize such methods for two primary reasons. Firstly, PathFinderPlus's straightforward global approach already demonstrates satisfactory performance in handling outliers. Secondly, since our system is designed for real-time game strategy analysis, we prioritize computational efficiency. Simpler numerical methods tend to excel in this aspect. In our testing, the PathFinderPlus framework can process a round of data in 16.83 seconds on average on a Macbook Pro equipped with an Apple M1 Pro chip. The average round clip length consists of 152 frames and takes around 6.33 seconds on average. This performance already allows for near real-time analysis. Additionally, considering the breaks between volleyball rallies, which typically last between 15 to 20 seconds (between points scored and the next serve), our framework has even more leeway to achieve real-time analysis during a volleyball match. Furthermore, opting for simpler methods not only supports real-time analysis but also opens up possibilities for IoT (Internet of Things) deployment and reduces hardware requirements, making it more accessible for traditional deployments. ### _Setting Trajectory Extraction_ Both the originally proposed blended volleyball extraction methodology and our PathFinderPlus volleyball detection approaches are only the first step in our proposed PathFinder pipelines. 
Next, we extract the trajectory for the set specifically in order to make a set tactic classification. In the context of volleyball, a round is typically concluded by a setting action, as it marks the transition to an attack against the opponent's court. Each video segment captures the relevant time span, starting from the pass (or opponent's attack) that initiates a team's possession and ending at the moment when the ball is struck, concluding the possession. The extraction of the ball's path is synchronized with this specific period, ensuring that the trajectory of the set should be the final major trajectory within a round. In the previous sections, we introduced the "pts" array. Now, in the context of the 'Setting Trajectory Extraction,' we will apply additional processing to this "pts" array to extract the valid setting trajectory for analysis. Mathematically, a "pts" array can be represented as a 2D array \(PTS=[A_{0},A_{1},\ldots,A_{M-1}]\), where each \(A_{i}\) is a 1D array containing two elements that represent the ball's 2D screen position in the corresponding video frame. The outcome of our framework can be represented as \(R=[PTS_{0},PTS_{1},\ldots,PTS_{N-1}]\), where each \(PTS_{i}\) is a "pts" array. Given that the act of setting predominantly occurs towards the end of a rally, we focus our attention on the terminal ball path within the video segment. A filtering procedure is then executed to retain only those arrays \(PTS_{i}\) whose length satisfies \(|PTS_{i}|\geq 9\). The threshold value of 9 is empirically derived based on the observation that valid setting actions generally yield longer ball path arrays, while shorter arrays more likely indicate false positives or data noise. We define a function \(S(i):\{0,1,\ldots,N-1\}\to PTS^{\prime}\cup\{[0,0]\}\), where \(PTS^{\prime}\) is the filtered ball path array. \(S(i)\) returns the last array \(PTS_{j}\) with \(j\leq i\) and \(|PTS_{j}|\geq 9\); if no such array exists, \(S(i)\) defaults to \([0,0]\). Hence, the final ball path array utilized for trajectory analysis corresponds to \(S(N-1)\), signifying the last valid segment in the processed ball path array, or \([0,0]\) in the absence of valid paths. During this process, we analyze the ball paths in reverse order to extract the setting path more effectively. We remove arrays with a length less than 9. This approach is based on the rationale that, in a volleyball game, the setting action typically constitutes the last substantial trajectory within a round. Any shorter trajectories that follow the setting action are considered noise and are disregarded. In addition, any trajectories preceding the setting action are not relevant to set tactic detection (hence the analysis of the video clip in reverse). To maintain a focus on efficiency and the possibility of fast IoT implementation, we refrain from employing a more complex analysis for trajectory segmentation (i.e., segmenting the trajectory in a round into multiple, sequential quadratic curves, with each quadratic segment representing the ball's trajectory after a contact with a player, the floor, the net, or the net antenna [13, 17]). ### _Opposite Front-Back-Row Rotation Recognition_ In modern volleyball, the attack from the back row on the right side of the court, commonly referred to as the "D-ball", is highly significant. The defensive formation required to counter the back-row attack differs greatly from that needed for the front-row attack, specifically the right-side attack, also known as the "Opposite" attack. 
Therefore, it is crucial to distinguish between the back-row attack (D-ball) and the regular front-row attack (Opposite). To tackle this challenge, we introduce the rotation check procedure. To understand the rotation check procedure, it is important to discuss the rotation rules in volleyball. The rotation rules in volleyball dictate the positions of the six players on each team during a game. The court is divided into six numbered positions from 1 to 6, as illustrated in Figure 2. Positions 1, 5, and 6 are in the back row, while positions 2, 3, and 4 are in the front row. The 3-meter line, indicated by the white dashed line, separates the front row from the back row. This line is crucial in regulating attacks, as players in the back row have certain restrictions when attacking the ball, such as needing to jump from behind the 3-meter line. Position 1, known as the serving position, is located in the right back area of the court. When the receiving team (Team A) wins a rally while the serving team (Team B) is serving, Team A regains the serve and undergoes a rotation. During this rotation, the players on Team A shift their positions in a clockwise direction. Specifically, the player in position 2 moves to position 1 to assume the serving role, the player in position 3 moves to position 2, and so on. This rotation cycle ensures that players adopt different roles and positions throughout the match.
Figure 2: Traditional volleyball court with numbered positions and the 3-meter line.
The rotation check procedure recognizes if the "Opposite" player, the position referring to the player that hits on the right side of the court (e.g., the "opposite" and "d-ball" attacks), is currently in the front or back row. The players' locations will determine if they will be making an "opposite" attack or a "d-ball" attack. We use several notations in this procedure, as outlined in Table I. Algorithm 1 provides details about the implementation of the rotation check procedure. The algorithm begins by initializing the Opposite positions (Lines 5-6), goes through every file and updates the Opposite's position based on the current and previous rallies (Lines 7-18), and finally checks if the Opposite position is in the back row (Lines 19-23). This procedure is essential for distinguishing between front-row and back-row Opposite attacks. As front-row and back-row Opposite attacks have distinct characteristics, the ability to differentiate them can significantly aid in improving the coaches' in-court decision-making. 
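As a small illustration (the helper names are ours, and the file-name parsing that detects a side-out is omitted), the core position update and back-row test used in Algorithm 1 below amount to:

```python
BACK_ROW = {1, 5, 6}  # court positions behind the 3-meter line


def rotate_position(pos: int) -> int:
    """One clockwise rotation: the player in position 2 moves to 1, 3 to 2, ...,
    and the player in position 1 wraps around to 6."""
    new_pos = (pos - 1) % 6
    return 6 if new_pos == 0 else new_pos


def is_back_row(pos: int) -> bool:
    """True if the tracked opposite hitter currently stands in the back row (D-ball territory)."""
    return pos in BACK_ROW
```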
``` 0:\(\mathit{pos}_{A},\mathit{pos}_{B},\mathit{files}\) 0:\(\mathit{BackRow}_{A},\mathit{BackRow}_{B}\) 1:Procedure\(\mathit{RotationCheck}(\mathit{pos}_{A},\mathit{pos}_{B},\mathit{files})\) 2:\(\mathit{List}_{A},\mathit{List}_{B}\leftarrow\{\},\{\}\) 3:\(\mathit{opp}_{A},\mathit{opp}_{B}\gets Initial_{A},Initial_{B}\) 4:for\(i=0\rightarrow\mathit{len}(\mathit{files})-1\)do 5:if\(i=0\)then 6:\(\mathit{opp}_{A},\mathit{opp}_{B}\leftarrow\mathit{pos}_{A},\mathit{pos}_{B}\) 7:else 8:\(\mathit{score},\mathit{round},\mathit{team}\leftarrow\mathit{split}( \mathit{files}[i],\_\_\prime)\) 9:\(\mathit{prevScore},\mathit{prevRound},\mathit{prevTeam}\leftarrow \mathit{split}(\mathit{files}[i-\mathit{int}(\mathit{split}(\mathit{files}[i-1], \_\prime)^{\prime}][1]))\) 10:if\(team\neq\mathit{prevTeam}\)and\(\mathit{score}\neq\mathit{prevScore}\)then 11:if\(team=a\)then 12:\(\mathit{opp}_{B}\leftarrow((\mathit{opp}_{B}-1)\bmod 6)\) or (6 if\(\mathit{opp}_{B}\) is 0 after mod) 13:else 14:\(\mathit{opp}_{A}\leftarrow((\mathit{opp}_{A}-1)\bmod 6)\) or (6 if\(\mathit{opp}_{A}\) is 0 after mod) 15:endif 16:endif 17:endif 18:\(\mathit{List}_{A},\mathit{List}_{B}\leftarrow\mathit{List}_{A}\cup\{\mathit{ opp}_{A}\},\mathit{List}_{B}\cup\{\mathit{opp}_{B}\}\) 19:endfor 20:for\(i=0\rightarrow\mathit{len}(\mathit{List}_{A})-1\)do 21:\(\mathit{List}_{A}[i],\mathit{List}_{B}[i]\leftarrow\mathit{List}_{A}[i]\in\{1, 5,6\},\mathit{List}_{B}[i]\in\{1,5,6\}\) 22:endfor 23:\(\mathit{BackRow}_{A},\mathit{BackRow}_{B}\leftarrow\mathit{List}_{A},\mathit{ List}_{B}\) 24:End Procedure ``` **Algorithm 1** Opposite Back-Row Check ### _Setting Path Classification_ The setting path is a crucial element in a volleyball match as it reveals the strategic intentions of a team, providing valuable insights for the coach's decision-making process. In the preceding sections, we have discussed the methodology for detecting the ball's trajectory, extracting the set pattern, and determining the position of the "Opposite" player in the front or back row using a single-camera video. The notations and definitions used in the algorithm are summarized in Table II. To this end, we present an algorithm designed to automatically recognize and classify setting paths based on the positions of the setter and hitter, as well as the trajectory of the ball. The PathFinder algorithm also has the capability to output intermediate-step advanced variables such as setter and hitter contact heights. The detailed set classification process is outlined in Algorithm 2. Note that the Coefficients Q, M, S, and C are heuristic values determined through manual analysis by volleyball experts and scaled by the net width in the 2D camera space to accommodate different game scenarios and technical camera angles. In essence, this algorithm examines the maximum height of the set trajectory and the starting and ending locations to determine the set tactic using these specifically crafted heuristics. The two main factors that distinguish different set tactics are the height of the set, which also decides the speed of the set, and the relative location of the hitter with respect to the setter. These fundamental factors provide a clear framework for a heuristic-based approach to set tactic classification. With the completion of the set classification step, the PathFinder and PathFinderPlus frameworks are finalized. 
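For readers who prefer code to pseudocode, a minimal sketch of the trajectory features that the classifier operates on is given below; it assumes the usual image convention in which a smaller y-coordinate is higher in the frame, and the exact indexing in the released implementation may differ.

```python
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) in image coordinates


def setting_features(B: List[Point]) -> Tuple[float, float, float, float]:
    """SP/HP: mean x of the first/last three points (setter and hitter positions);
    HYA: mean y over the highest points of the set (its apex region);
    XD: signed horizontal distance from setter to hitter, HP - SP."""
    xs = [p[0] for p in B]
    sp = sum(xs[:3]) / 3.0
    hp = sum(xs[-3:]) / 3.0
    apex = sorted(B, key=lambda p: p[1])[:5]  # smallest y = highest image points
    hya = sum(p[1] for p in apex) / len(apex)
    return sp, hp, hya, hp - sp
```

Together with the net geometry (\(LNX\), \(RNX\), \(UNY\), \(LNY\)) and the heuristic coefficients \(Q\), \(M\), \(S\), and \(C\), these quantities are all that the set classification in Algorithm 2 requires.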
### _Method Summary_ In this section, we have presented a comprehensive methodology for analyzing various aspects of a volleyball game using a single-camera recording. Our PathFinder frameworks are based on innovative techniques and algorithms, and the methodology is divided into four primary subsections, each addressing a unique aspect of the volleyball game analysis. Our method goes beyond simple ball tracking and incorporates strategic patterns and player rotations, which are essential for understanding the game dynamics. This comprehensive approach provides the capability to extract meaningful insights from match recordings, which can significantly aid coaches in improving their tactics and strategies. The algorithms presented in these subsections work in synergy, with the outputs of one procedure serving as inputs to the next. This integrated system allows for a seamless flow of information and analysis, enhancing the overall effectiveness of the framework. In the next section, we aim to empirically validate our method by demonstrating its effectiveness and reliability through tests conducted on various volleyball match recordings. By providing a comprehensive view of volleyball game analysis, our framework aims to empower coaches, players, and analysts to gain deeper insights, interpret game data more effectively, and ultimately improve team performance. \begin{table} \begin{tabular}{l l} \hline **Symbol** & **Definition** \\ \hline \(B\) & A 2D array representing the ball's trajectory. \\ \(LNX\) & The x-coordinate of the left side of the net. \\ \(RNX\) & The x-coordinate of the right side of the net. \\ \(UNY\) & The y-coordinate of the top of the net. \\ \(LNY\) & The y-coordinate of the bottom of the net. \\ \(BRA\) & Boolean indicating whether there is a back-row player ready to spike on team A. \\ \(BRB\) & Boolean indicating whether there is a back-row player ready to spike on team B. \\ \(TR\) & String indicating which team is receiving the ball ('A' or 'B'). \\ \(P1,P2,P3,P4\) & Points used to divide the court into five equal sections along the x-axis. \\ \(SP,HP\) & The positions of the setter and the hitter along the x-axis, respectively. \\ \(HYA\) & The average of the y-coordinates at the highest points of the ball's trajectory. \\ \(XD\) & The x-coordinate difference between the hitter's and the setter's positions. \\ \(NW\) & The apparent net width in the 2D camera space, \(UNY-LNY\). \\ \(T\) & The setting tactic returned for the round. \\ \(Q,M,S,C\) & Heuristic coefficients for the different tactics, scaled by the net width. \\ \hline \end{tabular} \end{table} Table II: Setting Trajectory Analysis Notation Table 
``` 0:\(B,LNX,RNX,UNY,LNY,BRA,BRB,TR\) 0:\(T\) 1:ProcedureAnalyze_Trajectory\(B\), \(LNX\), \(RNX\), \(UNY\), \(LNY\), \(BRA\), \(BRB\), \(TR\) 2:\(P1,P2,P3,P4\gets CalculateAreas(LNX,RNX)\) 3:\(NW\gets UNY-LNY\) 4:\(SP,HP\gets mean(x[B[1:3]]),mean(x[B[-3:-1]])\) 5:\(HYA\gets mean(y[sort(B,by=y)[1:5]])\) 6:\(XD\gets HP-SP\) 7:if\(TR==^{\prime}b^{\prime}\)then 8:if\(XD>0\) and \(XD\leq\frac{1}{5}(RNX-LNX)\) and \(HYA>Q\cdot NW\)then 9:\(T\leftarrow\) "Quick" 10:elseif\(XD>\frac{1}{2}(RNX-LNX)\) and \(XD\leq\frac{3}{2}(RNX-LNX)\) and \(HP>1.5\cdot P1\) and \(HP<P4\) and \(HYA>M\cdot NW\)then 11:\(T\leftarrow\) "Thirty-One" 12:elseif\(XD<0\) and \(abs(XD)\leq\frac{1}{3}(RNX-LNX)\) and \(HYA>Q\cdot NW\)then 13:\(T\leftarrow\) "Back-One" 14:elseif\(SP<P3\) and \(SP>P1\) and \(HP>P3\) and \(HP<P4\) and \(HYA>S\cdot NW\)then 15:\(T\leftarrow\) "Short" 16:elseif\(HP>P3+\frac{1}{2}(P4-P3)\)then 17:\(T\leftarrow\) "Outside" 18:elseif\(HP>P1+\frac{1}{2}(P2-P1)\) and \(HP<P3+\frac{1}{2}(P4-P3)\) and \(HYA<C\cdot NW\)then 19:\(T\leftarrow\) "Bic" 20:elseif\(HP<P1+\frac{1}{2}(P2-P1)\)then 21:if\(BRB\)then 22:\(T\leftarrow\) "D-ball" 23:else 24:\(T\leftarrow\) "Oppo" 25:endif 26:else 27:\(T\leftarrow\) "unknown" 28:endif 29:else 30: Perform the same operations and logic as above for team 'b', but with x directions mirrored for team 'a' on the opposite side of the net 31:endif 32:return\(T\) ``` **Algorithm 2** Analyze Setting Trajectory ## V Experiments and Results In this section, we will present the experimental setup and results to validate the effectiveness of our setting strategy classification framework, referred to as "PathFinder". Furthermore, we will analyze the performance of our proposed improved ball detection methodology in "PathFinderPlus". ### _Experiments Setup_ For our experimentation, we gathered a dataset comprising 537 video clips of volleyball rounds, sourced from 1280 x 720p recordings of national team Men's volleyball match play from 2021-22 (including notable matches such as Cuba vs USA). The experimental setup was designed to simulate two common scenarios encountered in technical video analysis of volleyball matches. In the first scenario (Fig. 3 a), the camera is positioned parallel to the ground, capturing the game from a horizontal perspective. In the second scenario (Fig. 3 b), the camera is positioned at an angle to the ground, and the recorded footage has a relatively complex background. These two scenarios present different challenges for trajectory analysis, player position recognition, and setting path classification. #### V-A1 Results We compared the performance of our proposed PathFinderPlus with the existing methodology in PathFinder (without our proposed PathFinderPlus filter), whose source code was provided by the original authors, on our setting tactic classifier. The accuracy of the setting tactic classification was assessed across both scenarios. The comparison of the classification results is summarized in Table III. The results indicate that the PathFinderPlus ball extraction outperforms the existing blended methodology from PathFinder as the first step of our pipeline in both scenarios. Specifically, under the Horizontal Camera Angle condition, the overall accuracy of PathFinderPlus set tactic classification improved over baseline PathFinder from 67.32% to 71.24%. Similarly, under the Non-Horizontal Camera Angle and Noisy Background condition, PathFinderPlus demonstrated an increase in overall classification accuracy from 45.89% to 51.52%. 
These results demonstrate the robustness of our pipeline in handling diverse game situations and technical video camera angles. For both ball extraction methods, our framework exhibited a better classification performance under the Horizontal Camera Angle scenario with a simple background than that under the Non-Horizontal Camera Angle and Noisy Background scenario. Figure 3: Illustration of the two scenarios used in the experiments. (A) Scenario A: the camera angle is horizontal. (B) Scenario B: the camera angle is slightly downward and the background is noisy. ### _Case Study: Analyzing the Accuracy Difference_ There are two primary factors that contribute to the superior performance of our ball extraction methodology in place of the original blended methodology in our pipeline: * **Camera Angle and Background Complexity:** Scenario 2 involves a camera angle and more complex backgrounds, leading to instances where the ball might go out of the frame, making it more challenging to track the ball's trajectory. This results in a decrease in accuracy for both methods in Scenario 2 compared to Scenario 1. However, PathFinderPlus manages to mitigate the adverse effect to a certain extent due to its robust design, resulting in higher accuracy under these challenging conditions. * **Frame Analysis Depth:** The original methodology only inspects the last three frames of a setting trajectory, leading to potential misclassifications. For instance, if the last three frames happen to be false positives, the trajectory will be labeled incorrectly. In contrast, our method incorporates a more comprehensive frame analysis, thereby reducing the chances of such misclassification. Figure 4 (a) illustrates a ball trajectory drawn using the original methodology. In this figure,the blue dots represent detected ball locations categorized as "directed moving" and included in the classification pipeline, while the green dots represent detected ball locations labeled as "still" and deemed invalid for classification. It is evident that the trajectory of the ball is clearly a valid set trajectory, yet it is considered invalid by the original methodology due to false positives at the end of the trajectory. However, Figure 4 (b) demonstrates the same ball trajectory labeled by our PathFinderPlus ball detection algorithm. It successfully recognizes this set trajectory as valid, allowing it to be used for classification later in the pipeline. This showcases the improved accuracy and reliability of our PathFinderPlus methodology in differentiating between valid and invalid ball trajectories. We also note that "Thirty-one", "Back-one", and "Quick" perform relatively worse than other setting tactics. The majority of challenges lie in: * **Challenges of ball detection:** Since all three tactics are for middle blockers, who mostly have a lower setting height and shorter setting distance than other positions, it is difficult for the camera to capture the ball. * **Challenges of misclassification:** In high-level volleyball games, the setting height for "Bic", where players hit from the middle back-row, is similar, albeit with slight differences from these three setting tactics. This similarity poses a challenge in distinguishing them, leading to misclassification. 
In summary, our proposed PathFinderPlus ball detection methodology outperforms the existing ball detection method when used in our set classification framework for volleyball game analysis under varied conditions due to its comprehensive video frame analysis and robust design. Beyond classification, PathFinderPlus also enables the extraction of advanced statistical data from raw volleyball match footage, offering deeper insights for in-game analysis. Our method proves its effectiveness in handling complex game situations and diverse camera angles, making it a valuable tool for coaches, players, and sports analysts. While both PathFinder and PathFinderPlus show promising results, future work will focus on further enhancing the performance of PathFinderPlus, including the ball detection methodology and the overall framework, under more challenging conditions. Additionally, efforts will be made to improve the classification accuracy of middle blockers' tactics and explore the extension of this methodology to other sports. Figure 4: The green dashed line indicates the “still” trajectory status set, i.e., the path is not valid. The blue dashed line indicates a “directed moving” trajectory status set, i.e., the trajectory is valid. The white dot represents an object mislabeled as a ball (e.g., a players’ head), possibly due to visual similarities, but its movement does not even form a trajectory. \begin{table} \begin{tabular}{l l c c c c c c c c} \hline \hline **Experimental Condition** & \multicolumn{2}{c}{**Method**} & \multicolumn{5}{c}{**Accuracy (\%) of Each Set Tactic**} & \\ & & **Thirty-one** & **Out** & **Oppo** & **D-ball** & **Quick** & **Bic** & **Back-one** & **Short** & **Total** \\ \hline \multirow{3}{*}{**Horizontal Camera Angle**} & PathFinder & 36.36\% & 81.67\% & 83.33\% & 69.23\% & 18.75\% & 72.22\% & 25.00\% & 57.14\% & 67.32\% \\ & PathFinderPlus & 36.36\% & 88.33\% & 83.33\% & 76.92\% & 18.75\% & 77.78\% & 25.00\% & 57.14\% & 71.24\% \\ \hline \multirow{3}{*}{**Non-Horizontal Camera Angle and Noisy Background**} & PathFinder & 25.00\% & 60.61\% & 57.14\% & 60.00\% & 18.18\% & 42.86\% & 20.00\% & 33.33\% & 45.89\% \\ & PathFinderPlus & 37.50\% & 69.70\% & 57.14\% & 60.00\% & 21.21\% & 42.86\% & 20.00\% & 33.33\% & 51.52\% \\ \hline \hline \end{tabular} \end{table} Table III: Comparison of Set Detection Accuracy Under Different Experimental Conditions. “Thirty-one” type refers to a setting strategy where the middle blocker hits the ball, with a gap existing between the setter and the hitter. “Quick” involves the middle blocker hitting the ball close to the setter. “Back-one” is similar to “Quick” but here, the middle blocker hits the ball from behind the center. “Out” refers to the ball being set to an outside hitter position. “Short” refers to a setting strategy where the outside hitter will hit the ball inside, which is closer to the center of the court compared to a normal outside hit. “Bic” refers to a back-row attack where the middle back (player positioned at 6) hits the ball. “Oppo” is used when the opposite (right-side) hitter (the one who traditionally plays across from the setter in the rotation) hits the ball, and the opposite hitter is in the front-row at the time. “D-ball” is used when the opposite hitter hits the ball and they’re positioned in the back-row. ## VI Conclusion and Future Work In this paper, we introduced and evaluated a novel framework for advanced setting strategy classification in volleyball matches. 
Our primary contributions are four-fold: * **Automated Advanced Statistics:** With our PathFinder and PathFinderPlus frameworks, we have provided comprehensive advanced statistical data for in-game analysis fully automatically using a single camera. To our knowledge, this level of granularity and automation in advanced volleyball statistics has not been achieved before. Our innovative end-to-end frameworks enable us to take raw volleyball round videos as input and deliver advanced volleyball set tactic classifications as output, empowering coaches with timely and focused advanced setting tactics statistics to assist them in making informed decisions during matches. * **Novel PathFinderPlus Ball Trajectory Detection Methodology:** Our proposed ball trajectory detection method in PathFinderPlus has been shown to outperform the existing methodology in both a horizontal and non-horizontal camera angle scenario with a noisy background on setting tactic classification. This demonstrates the robustness of our system in varied game conditions. * **Opposite Row Identification:** In volleyball, the opposing opposite hitter's row (front or back) significantly affects the team's defensive strategy. We are the first to propose an algorithm to automatically identify the opposite's row during gameplay, providing crucial insights for subsequent tactical analysis. * **Efficient Algorithm Design:** As mentioned, our aim is to provide this analysis in real time during a game so the coaches and players can dynamically adjust strategies and game plans. Hence, our design is geared toward simplicity with potential for real time and IoT implementation. Furthermore, our study underscores the feasibility and advantages of a single-camera system. This configuration is not only cost-effective, but also widens the accessibility of high-level technical analysis, making it available to volleyball enthusiasts of varying skill levels and resource availability. In the future, we envision extending this methodology to track all player rotations throughout the game. We also plan to enhance our system's performance under more challenging conditions and explore its application to other sports. We aim to improve our framework accuracy and promote the enjoyment of volleyball by providing sophisticated analytical tools to a broader audience.
2309.06357
$sl(2,\mathds{C})\times D$ symmetry and conformal primary basis for massless fields
Alternative to the embedding formalism, we provide a group theoretic approach to the conformal primary basis for the massless field with arbitrary helicity. To this end, we first point out that $sl(2,\mathds{C})$ isometry gets enhanced to $sl(2,\mathds{C})\times D$ symmetry for the solution space of the massless field with arbitrary helicity. Then associated with $sl(2,\mathds{C})\times D$ symmetry, we introduce the novel quadratic Casimirs and relevant tensor/spinor fields to derive 2 explicit constraints on the bulk dilatation and $sl(2,\mathds{C})$ Casimirs. With this, we further argue that the candidate conformal primary basis can be constructed out of the infinite tower of the descendants of the left and right highest (lowest) conformal primary wavefunction of $sl(2,\mathds{C})$ Lie algebra, and the corresponding celestial conformal weights are determined by the bulk scaling dimension through solving out the exact on-shell conformal primary wavefunctions, where on top of the two kinds of familiar-looking on-shell conformal primary wavefunctions, we also obtain another set of independent on-shell conformal primary wavefunctions for the massless field with helicity $|s|\ge 1$. In passing, we also develop the relationship between the 4D Lorentz Lie algebra and 2D conformal Lie algebra from scratch, and present an explicit derivation for the two important properties associated with the conformal primary wavefunctions.
Yuan Chen, Mingfeng Li, Kai Shi, Hongbao Zhang, Jingchao Zhang
2023-09-12T16:20:08Z
http://arxiv.org/abs/2309.06357v2
# \(sl(2,\mathds{C})\times D\) symmetry and conformal primary basis for massless fields ###### Abstract Alternative to the embedding formalism, we provide a group theoretic approach to the conformal primary basis for the massless field with arbitrary helicity. To this end, we first point out that \(sl(2,\mathds{C})\) isometry gets enhanced to \(sl(2,\mathds{C})\times D\) symmetry for the solution space of the massless field with arbitrary helicity. Then associated with \(sl(2,\mathds{C})\times D\) symmetry, we introduce the novel quadratic Casimirs and relevant tensor/spinor fields to derive 2 explicit constraints on the bulk dilatation and \(sl(2,\mathds{C})\) Casimirs. With this, we further argue that the candidate conformal primary basis can be constructed out of the infinite tower of the descendants of the left and right highest (lowest) conformal primary wavefunction of \(sl(2,\mathds{C})\) Lie algebra, and the corresponding celestial conformal weights are determined by the bulk scaling dimension through solving out the exact on-shell conformal primary wavefunctions, where on top of the two kinds of familiar-looking on-shell conformal primary wavefunctions, we also obtain another set of independent on-shell conformal primary wavefunctions for the massless field with helicity \(|s|\geq 1\). In passing, we also develop the relationship between the 4D Lorentz Lie algebra and 2D conformal Lie algebra from scratch, and present an explicit derivation for the two important properties associated with the conformal primary wavefunctions. ## 1 Introduction Over the last few decades, holographic principle has been standing out as a guiding principle for us to formulate the quantum theory of gravity. AdS/CFT correspondence, as one explicit implementation of such a principle, states that the quantum gravity in an asymptotically Anti-de Sitter spacetime is encoded fully by the boundary conformal field theory (CFT). On the other hand, by holography, the only observable in an asymptotically flat spacetime is the scattering amplitude. However, the scattering amplitude is expressed conventionally in the momentum representation, which manifests the translation symmetry but obscures the holographic nature. Given this, a new representation in terms of the so-called conformal primary basis has been constructed via the embedding formalism in CFT, whereby the scattering amplitude in the \((d+2)\)-dimensional flat spacetime admits a natural holographic interpretation of the conformal correlator in the \(d\)-dimensional celestial sphere[1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. Such a holographic reformulation of the scattering process in terms of the so-called celestial amplitude further motivates a recently conjectured duality, called celestial holography, which proposes that the bulk scattering in the flat spacetime can be dual to a CFT on the celestial sphere[11; 12; 13; 14; 15]. No matter whether celestial holography turns out to be valid or not, such an alternative formulation of the scattering process has already shed new light on our understanding of scattering amplitude, where the conformal primary basis, as the building block of the whole reformulation, plays a vital role as it should be the case. In particular, for the case of massless particles, the conformal primary basis in the new representation turns out to be related to the familiar plane wave basis in the momentum representation by a Mellin transformation or further followed by a shadow transformation. 
The main purpose of this paper is to offer a group theoretic understanding of the conformal primary basis for the massless particles in the 4-dimensional Minkowski spacetime, which is supposed to serve as an alternative perspective to the aforementioned embedding formalism. To this end, we shall first develop the relationship between the 4D Lorentz Lie algebra and the 2D conformal Lie algebra from scratch and derive the two important properties associated with the conformal primary wavefunctions in the subsequent section. Then, in Section 3, with the observation of the bulk dilatation as an emergent symmetry for massless particles, we will argue that the \(sl(2,\mathds{C})\times D\) symmetry-dictated conformal primary basis can serve as a candidate basis for the massless particle representation of the Poincare Lie algebra, which is further substantiated by an explicit derivation. In Section 4, we build the specific correspondence between the 4D bulk scaling dimension and the 2D celestial conformal weights for all the on-shell conformal primary wavefunctions. We shall conclude our paper with some discussions in the final section. Notation and conventions follow Chapter 13 of [16] unless specified otherwise. In particular, the signature is \((+,-,-,-)\), and \(\epsilon_{0123}=1\). In addition, a spinor is raised and lowered as \(\phi^{A}=\epsilon^{AB}\phi_{B},\phi_{B}=\epsilon_{AB}\phi^{A}\) with \(\epsilon_{AC}\epsilon^{BC}=\epsilon_{A}{}^{B}=-\epsilon^{B}{}_{A}=\delta_{A}{}^{B}\). ## 2 4D Lorentz group, 2D conformal group, and conformal primary wavefunctions In what follows, we shall focus exclusively on the 4-dimensional Minkowski spacetime, where we can take advantage of the spinor machinery to develop the relationship between the 4D Lorentz group and the 2D conformal group. To proceed, we would like to take the canonical choice of spinor basis \(o^{A},\iota^{A}\) such that \(o_{A}\iota^{A}=1\), with the dual basis given by \(-\iota_{A},o_{A}\). Furthermore, we have \[\sigma^{1}\to-{\bf K}_{1}+i{\bf J}_{1},\quad\sigma^{2}\to{\bf K}_{2}-i{\bf J}_{2},\quad\sigma^{3}\to-{\bf K}_{3}+i{\bf J}_{3},\] \[i\sigma^{1}\to i(-{\bf K}_{1}+i{\bf J}_{1}),\quad i\sigma^{2}\to i({\bf K}_{2}-i{\bf J}_{2}),\quad i\sigma^{3}\to i(-{\bf K}_{3}+i{\bf J}_{3}) \tag{8}\] from \[\lambda^{\mu}{}_{\nu}x^{\nu}\sigma_{\mu}{}^{\Sigma\Gamma}=x^{\mu}\sigma_{\mu}{}^{\Sigma^{\prime}\Xi}l^{\Gamma}{}_{\Xi}, \tag{9}\] and \[\sigma^{1}\rightarrow-{\bf K}_{1}-i{\bf J}_{1},\quad\sigma^{2}\rightarrow{\bf K}_{2}+i{\bf J}_{2},\quad\sigma^{3}\rightarrow-{\bf K}_{3}-i{\bf J}_{3},\] \[i\sigma^{1}\rightarrow-i(-{\bf K}_{1}-i{\bf J}_{1}),\quad i\sigma^{2}\rightarrow-i({\bf K}_{2}+i{\bf J}_{2}),\quad i\sigma^{3}\rightarrow-i(-{\bf K}_{3}-i{\bf J}_{3}) \tag{10}\] from \[\lambda^{\mu}{}_{\nu}x^{\nu}\sigma_{\mu}{}^{\Sigma^{\prime}\Gamma}=x^{\mu}\bar{l}^{\Sigma^{\prime}}{}_{\Omega^{\prime}}\sigma_{\mu}{}^{\Omega^{\prime}\Gamma}. \tag{11}\] On the other hand, \(SL(2,\mathds{C})\) can also be understood as the global conformal group on the celestial sphere. To this end, let \(\lambda^{\Sigma}=(w,1)\), then its corresponding null vector \(\lambda^{\Sigma}\lambda^{\Sigma^{\prime}}\) is given by \(q^{\mu}=(w\bar{w}+1,w+\bar{w},i(\bar{w}-w),w\bar{w}-1)\), whose spatial component can be geometrized as a point on a unit celestial sphere as \({\bf q}=(w\bar{w}+1)\hat{\bf q}\) by performing the stereographic projection from the north pole of the sphere to the complex plane with \(w=\cot\frac{\theta}{2}e^{i\varphi}\). 
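As a quick consistency check (a small supplementary computation, not part of the original derivation), the vector \(q^{\mu}\) built this way is indeed null with respect to our \((+,-,-,-)\) signature, \[q_{\mu}q^{\mu}=(w\bar{w}+1)^{2}-(w+\bar{w})^{2}-\big(i(\bar{w}-w)\big)^{2}-(w\bar{w}-1)^{2}=4w\bar{w}-\big[(w+\bar{w})^{2}-(\bar{w}-w)^{2}\big]=4w\bar{w}-4w\bar{w}=0,\] and since \(q^{0}=w\bar{w}+1>0\), it is moreover future-pointing.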
Similarly, with the choice of \(\lambda^{\Sigma}=(1,\bar{w})\), the corresponding null vector is given by \(q^{\mu}=(1+w\bar{w},w+\bar{w},i(\bar{w}-w),1-w\bar{w})\), which can be visualized as a point on the unit celestial sphere as \({\bf q}=(1+w\bar{w})\hat{\bf q}\) instead by performing the stereographic projection from the south pole to the complex plane with \(w=\tan\frac{\theta}{2}e^{i\varphi}\). In what follows, we prefer to work exclusively with the first parametrization of our spinor as well as its corresponding null vector2. Accordingly, \(SL(2,\mathds{C})\) acting on our spinor \(\lambda^{\Sigma}\) will induce a global conformal transformation on the celestial sphere as follows Footnote 2: As to the second parametrization with \(\bar{w}\) replaced by \(w\), kindly please refer to Appendix B for the relevant results. \[w^{\prime}=\begin{pmatrix}a&b\\ c&d\end{pmatrix}w=\frac{aw+b}{cw+d},\quad\bar{w}^{\prime}=\begin{pmatrix}\bar{a }&\bar{b}\\ \bar{c}&\bar{d}\end{pmatrix}\bar{w}=\frac{\bar{a}\bar{w}+\bar{b}}{\bar{c}\bar{ w}+\bar{d}} \tag{12}\] with \(ad-bc=1\). Then it is not hard to show that \(sl(2,C)\) can also be realized as follows \[l_{-1}=\begin{pmatrix}0&1\\ 0&0\end{pmatrix}=\frac{1}{2}(\sigma^{1}+i\sigma^{2})\to T_{-1}, \bar{l}_{-1}=\begin{pmatrix}0&1\\ 0&0\end{pmatrix}=\frac{1}{2}(\overline{\sigma^{1}+i\sigma^{2}})\rightarrow\bar {T}_{-1},\] \[l_{1}=\begin{pmatrix}0&0\\ -1&0\end{pmatrix}=-\frac{1}{2}(\sigma^{1}-i\sigma^{2})\to T_{1}, \bar{l}_{1}=\begin{pmatrix}0&0\\ -1&0\end{pmatrix}=-\frac{1}{2}(\overline{\sigma^{1}-i\sigma^{2}})\rightarrow \bar{T}_{1}\] \[l_{0}=\frac{1}{2}\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}=\frac{1}{2}\sigma^{3}\to T_{0}, \bar{l}_{0}=\frac{1}{2}\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}=\frac{1}{2}\overline{\sigma^{3}}\rightarrow\bar{T}_{0}, \tag{13}\] where \(T_{n}=w^{n+1}\partial_{w}\) and \(\bar{T}_{n}=\bar{w}^{n+1}\partial_{\bar{w}}\) are the vector fields on the celestial sphere, satisfying the following commutation relations \[[T_{n},T_{m}]=(m-n)T_{n+m},\quad[\bar{T}_{n},\bar{T}_{m}]=(m-n)\bar{T}_{n+m}, \quad[T_{n},\bar{T}_{m}]=0. \tag{14}\] By inspection of Eq. (8), Eq. (10), and Eq. (13), one can obtain the identification between the 4D Lorentz generators and 2D conformal generators \[T_{-1}\simeq L_{-1}=\frac{1}{2}(-{\bf K}_{1}+{\bf J}_{2}+i({\bf K} _{2}+{\bf J}_{1})), \bar{T}_{-1}\simeq\bar{L}_{-1}=\frac{1}{2}(-{\bf K}_{1}+{\bf J}_{2}-i({\bf K }_{2}+{\bf J}_{1})),\] \[T_{1}\simeq L_{1}=\frac{1}{2}({\bf K}_{1}+{\bf J}_{2}+i({\bf K} _{2}-{\bf J}_{1})), \bar{T}_{1}\simeq\bar{L}_{1}=\frac{1}{2}({\bf K}_{1}+{\bf J}_{2}-i({ \bf K}_{2}-{\bf J}_{1})),\] \[l_{0}\simeq L_{0}=\frac{1}{2}(-{\bf K}_{3}+i{\bf J}_{3}), \bar{l}_{0}\simeq\bar{L}_{0}=\frac{1}{2}(-{\bf K}_{3}-i{\bf J}_{3}). \tag{15}\] Accordingly, we have the following commutation relations \[[L_{n},L_{m}]=(m-n)L_{m+n},\quad[\bar{L}_{n},\bar{L}_{m}]=(m-n)\bar{L}_{m+n}, \quad[L_{n},\bar{L}_{m}]=0 \tag{16}\] as it should be the case. For the later convenience, we would like to denote the Lie algebras out of \(L_{n}\) and \(\bar{L}_{n}\) with \(n=0,\pm 1\) as \(sl(2,\mathds{C})_{L}\) and \(sl(2,\mathds{C})_{R}\), respectively. 
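The commutation relations \([T_{n},T_{m}]=(m-n)T_{n+m}\) for the vector fields \(T_{n}=w^{n+1}\partial_{w}\) can be checked symbolically in a few lines; the sketch below uses sympy and a generic test function \(f(w)\):

```
import sympy as sp

w = sp.symbols('w')
f = sp.Function('f')(w)

def T(n, expr):
    # action of the vector field T_n = w^{n+1} d/dw on a test expression
    return w ** (n + 1) * sp.diff(expr, w)

ok = all(
    sp.simplify(T(n, T(m, f)) - T(m, T(n, f)) - (m - n) * T(n + m, f)) == 0
    for n in (-1, 0, 1) for m in (-1, 0, 1)
)
print(ok)   # True: [T_n, T_m] = (m - n) T_{n+m} holds for n, m in {-1, 0, 1}
```

The same check applies verbatim to \(\bar{T}_{n}\), and \([T_{n},\bar{T}_{m}]=0\) holds trivially because the two families act on independent variables.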
A wavefunction on our Minkowski spacetime and the celestial sphere is called the conformal primary wavefunction with the \(SL(2,\mathds{C})\) conformal dimension \(\Delta\) and spin \(J\) if \[{\cal O}(x^{\prime\mu}=\Lambda^{\mu}{}_{\nu}x^{\nu};w^{\prime}=\frac{aw+b}{cw+ d},\bar{w}^{\prime}=\frac{\bar{a}\bar{w}+\bar{b}}{\bar{c}\bar{w}+\bar{d}})=| \frac{\partial w^{\prime}}{\partial w}|^{-\frac{\Delta+J}{2}}|\frac{\partial \bar{w}^{\prime}}{\partial\bar{w}}|^{-\frac{\Delta-J}{2}}D(\Lambda){\cal O}(x ;w,\bar{w}), \tag{17}\] where the representation of Lorentz group \(D(\Lambda)\) is determined by the spinor and tensor indices of \({\cal O}\) as usual with \(|\frac{\partial w^{\prime}}{\partial w}|=\frac{1}{(cw+d)^{2}}\) and \(|\frac{\partial\bar{w}^{\prime}}{\partial\bar{w}}|=\frac{1}{(\bar{c}\bar{w}+d )^{2}}\). Whence we have \[D(\Lambda)^{-1}{\cal O}(\Lambda x;w^{\prime},\bar{w}^{\prime})-{\cal O}(x;w^{ \prime},\bar{w}^{\prime})=|\frac{\partial w^{\prime}}{\partial w}|^{-h}|\frac {\partial\bar{w}^{\prime}}{\partial\bar{w}}|^{-\bar{h}}{\cal O}(x;w,\bar{w})- {\cal O}(x;w^{\prime},\bar{w^{\prime}}), \tag{18}\] where the \(SL(2,\mathds{C})\) conformal weights are given by \((h,\bar{h})=\frac{1}{2}(\Delta+J,\Delta-J)\). This implies \[{\cal L}_{L_{n}}{\cal O}=-{\cal L}_{T_{n}}{\cal O}=-(w^{n+1} \partial_{w}+h(n+1)w^{n}){\cal O},\] \[{\cal L}_{\bar{L}_{n}}{\cal O}=-{\cal L}_{\bar{T}_{n}}{\cal O}=-( \bar{w}^{n+1}\partial_{\bar{w}}+\bar{h}(n+1)\bar{w}^{n}){\cal O} \tag{19}\] for \(n=0,\pm 1\), which amounts to saying that the Lie derivative of the Lorentz generators acting on the conformal primary wavefunction can be understood as the Lie derivative of the corresponding \(SL(2,\mathds{C})\) generators acting on it with an additional minus sign. This is essentially the underlying reason for the definition of the conformal primary wavefunction through Eq. (17). For our purpose, we would like to list the celestial conformal weights and the bulk scaling dimension for a few important conformal primary wavefunctions in Table 1. Furthermore, if a conformal primary wave function is on-shell, namely satisfies the equation of motion dictated by the unitary representation of the Poincare group, one can define an operator on the celestial sphere associated with it as follows \[\hat{\cal O}(w^{\prime},\bar{w}^{\prime})=(\hat{\Phi}(x^{\prime}),{\cal O}(x^ {\prime};w^{\prime},\bar{w}^{\prime}))_{\Sigma^{\prime}}, \tag{20}\] where \(\hat{\Phi}(x^{\prime})\) is the corresponding bulk quantum field and the Klein-Gordon inner product \((,)_{\Sigma^{\prime}}\) evaluated on a Cauchy surface \(\Sigma^{\prime}\) is nevertheless independent of the choice of \(\Sigma^{\prime}\). The Lorentz covariance of the Klein-Gordon inner product implies \[\hat{\cal O}(w^{\prime},\bar{w}^{\prime}) = (D(\Lambda)^{-1}\hat{\Phi}(x^{\prime}),D(\Lambda)^{-1}{\cal O}(x^ {\prime};w^{\prime},\bar{w}^{\prime}))_{\Sigma} \tag{21}\] \[= (U(\Lambda)\hat{\Phi}(x)U(\Lambda)^{-1},|\frac{\partial w^{\prime }}{\partial w}|^{-h}|\frac{\partial\bar{w}^{\prime}}{\partial\bar{w}}|^{-\bar{ h}}{\cal O}(x;w,\bar{w}))_{\Sigma}\] \[= |\frac{\partial w^{\prime}}{\partial w}|^{-h}|\frac{\partial\bar{ w}^{\prime}}{\partial\bar{w}}|^{-\bar{h}}U(\Lambda)\hat{\cal O}(w,\bar{w})U( \Lambda)^{-1},\] where \(U(\Lambda)\) corresponds to the unitary representation of the Lorentz group in the Fock space. 
It is noteworthy that due to its presence, the celestial operator \(\hat{\cal O}\) does not transform under the global conformal transformation as the ordinary conformal primary operators. But nevertheless, due to the Lorentz invariance of both the vacuum and S-matrix, i.e., \(U(\Lambda)|0\rangle=|0\rangle\) and \(U(\Lambda)^{-1}SU(\Lambda)=S\), we have \[\langle 0|\hat{\cal O}_{i}(w^{\prime}_{i},\bar{w}^{\prime}_{i})S \hat{\cal O}_{j}(w^{\prime}_{j},\bar{w}^{\prime}_{j})|0\rangle \tag{22}\] \[= |\frac{\partial w^{\prime}_{i}}{\partial w_{i}}|^{-h_{i}}|\frac{ \partial\bar{w}^{\prime}_{i}}{\partial\bar{w}_{i}}|^{-\bar{h}_{i}}|\frac{ \partial w^{\prime}_{j}}{\partial w_{j}}|^{-h_{j}}|\frac{\partial\bar{w}^{ \prime}_{j}}{\partial\bar{w}_{j}}|^{-\bar{h}_{j}}\langle 0|\hat{\cal O}_{i}(w_{i}, \bar{w}_{i})S\hat{\cal O}_{j}(w_{j},\bar{w}_{j})|0\rangle,\] which tells us that the celestial amplitude behaves like the conformal correlator on the celestial sphere. \(sl(2,\mathds{C})\times D\) symmetry and the candidate basis for the massless particle representation of the Poincare group The unitary representation of the Poicare group for massless particles is usually expressed in terms of the simultaneous eigen vectors of the commuting spatial 3-momentum or the commuting 4-momentum with one on-shell condition \(P_{\mu}P^{\mu}=0\). \begin{table} \begin{tabular}{c|c|c|c|c|c} & \(h\) & \(\bar{h}\) & \(\Delta\) & \(J\) & \({\cal D}\) \\ \hline \(\lambda^{\Sigma}\) & \(-\frac{1}{2}\) & \(0\) & \(-\frac{1}{2}\) & \(-\frac{1}{2}\) & \(-\frac{1}{2}\) \\ \(\bar{\lambda}^{\Sigma^{\prime}}\) & \(0\) & \(-\frac{1}{2}\) & \(-\frac{1}{2}\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) \\ \(q^{\mu}\) & \(-\frac{1}{2}\) & \(-\frac{1}{2}\) & \(-1\) & \(0\) & \(-1\) \\ \(D^{\mu}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(\epsilon_{\Sigma\Gamma}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(1\) \\ \(\eta_{\mu\nu}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(2\) \\ \(\frac{1}{q\cdot\bar{x}}\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(1\) & \(0\) & \(-1\) \\ \(\frac{\lambda^{\Sigma}_{\Sigma^{\prime}}}{\sqrt{q\cdot x}}\) & \(-\frac{1}{4}\) & \(\frac{1}{4}\) & \(0\) & \(-\frac{1}{2}\) & \(-1\) \\ \(\frac{D^{\Sigma\Sigma^{\prime}}\bar{\lambda}_{\Sigma^{\prime}}}{\sqrt{q\cdot x }}\) & \(\frac{1}{4}\) & \(-\frac{1}{4}\) & \(0\) & \(\frac{1}{2}\) & \(0\) \\ \end{tabular} \end{table} Table 1: The celestial conformal weights (conformal dimension and spin) and the bulk scaling dimension for some conformal primary wavefunctions. But note that \(L^{2},L_{0}\) and \(\bar{L}^{2},\bar{L}_{0}\) commuting with one another, where \[L^{2}=L_{0}^{2}-\frac{1}{2}(L_{-1}L_{1}+L_{1}L_{-1}),\quad\bar{L}^{2}=\bar{L}_{0 }^{2}-\frac{1}{2}(\bar{L}_{-1}\bar{L}_{1}+\bar{L}_{1}\bar{L}_{-1}), \tag{10}\] are the corresponding Casimir operators for \(sl(2,\mathds{C})_{L}\) and \(sl(2,\mathds{C})_{R}\) Lie algebras, respectively. So it is reasonable to expect the simultaneous eigen vectors of the above 4 operators could constitute the candidate basis as well for the massless particle representation of the Poincare group. But as alluded above, the number of degrees of freedom for the 4-momentum is not 4 but 3 due to the on-shell condition. So there may exist one similar constraint on \(L^{2},L_{0},\bar{L}^{2},\bar{L}_{0}\). As we shall show in this section, this is the case indeed, where it turns out that the bulk dilation operator come to play a crucial role. 
To be more specific, first we note that not only does the bulk dilatation vector field \(D\) together with the Killing vector fields form a closed Lie algebra, but also commutes with the two Poincare Casimir operators \(P^{2}=P_{\mu}P^{\mu}\) and \(W^{2}=W_{\mu}W^{\mu}\) with the Pauli-Lubanski spin operator defined as \(W_{\mu}=-\frac{1}{2}\epsilon_{\mu\nu\rho\sigma}P^{\nu}M^{\rho\sigma}\) for the massless particle representation of the Poincare Lie algebra, where \(W_{\mu}=isP_{\mu}\) with \(s\) the helicity of the massless particle3. This amounts to saying that the bulk dilatation emerges as an additional symmetry of the solution space of the equations of motion dictated by the massless particle representation of the Poincare group. In addition, \(D\) commutes with the Lorentz boosts and rotations, thus also commutes with \(L^{2},L_{0},\bar{L}^{2},\bar{L}_{0}\). As a result, associated with \(sl(2,\mathds{C})\times D\) symmetry, the candidate basis is formed virtually by the simultaneous eigen vectors of 5 operators \(D,L^{2},L_{0},\bar{L}^{2},\bar{L}_{0}\) with 2 constraints on them. In what follows, we shall present an explicit derivation for this. Footnote 3: More precisely, \(D\) commutes with \(s\). ### Novel quadratic Casimirs and relevant tensor/spinor fields According to our experience acquired from [17; 18], we would like to define a novel quadratic Casimir operator \[\bar{C}^{2}=\bar{L}^{2}-\frac{1}{4}D^{2}. \tag{11}\] Associated with this Casimir, we further introduce the following two auxiliary tensor fields \[\bar{H}^{ab} = \bar{L}_{0}^{a}\bar{L}_{0}^{b}-\frac{1}{2}(\bar{L}_{-1}^{a}\bar{ L}_{1}^{b}+\bar{L}_{1}^{a}\bar{L}_{-1}^{b})-\frac{1}{4}D^{a}D^{b},\] \[\bar{Z}_{abc} = \bar{L}_{0a}\nabla_{b}\bar{L}_{0c}-\frac{1}{2}(\bar{L}_{-1a} \nabla_{b}\bar{L}_{1c}+\bar{L}_{1a}\nabla_{b}\bar{L}_{-1c})-\frac{1}{4}D_{a} \nabla_{b}D_{c}. \tag{12}\] Similarly, one can introduce the corresponding Casimir operator and associated auxiliary fields for the \(sl(2,\mathds{C})_{L}\) Lie algebra, which are obviously related to the right ones by the complex conjugation. A straightforward calculation gives \[\bar{H}^{ab}=-\frac{1}{4}x^{2}\eta^{ab},\quad\bar{Z}_{abc}=\frac{1}{4}(\eta_{ ab}D_{c}-\eta_{bc}D_{a}-\eta_{ac}D_{b}-i\epsilon_{abcd}D^{d}). \tag{13}\] Whence we further have \[\bar{Z}^{a}{}_{ac}=\frac{1}{2}D_{c},\quad\nabla_{d}\bar{Z}_{abc}=\frac{1}{4}(\eta _{ab}\eta_{cd}-\eta_{bc}\eta_{ad}-\eta_{ac}\eta_{bd}-i\epsilon_{abcd}), \tag{10}\] where we have used the fact \(\nabla_{a}D_{b}=\eta_{ab}\). 
For the convenience of the later spinor analysis, we also like to introduce some spinor fields associated with our novel Casimir as follows \[\alpha_{AA^{\prime}B}{}^{C}(R) = \bar{L}_{0AA^{\prime}}\Gamma_{B}{}^{C}(\bar{L}_{0})-\frac{1}{2}( \bar{L}_{-1AA^{\prime}}\Gamma_{B}{}^{C}(\bar{L}_{1})+\bar{L}_{1AA^{\prime}} \Gamma_{B}{}^{C}(\bar{L}_{-1}))-\frac{1}{4}D_{AA^{\prime}}\Gamma_{B}{}^{C}(D)\] \[= \frac{1}{2}\bar{Z}_{AA^{\prime}BB^{\prime}}{}^{CB^{\prime}}+ \frac{1}{8}D_{AA^{\prime}}\delta_{B}{}^{C}=-\frac{1}{4}D_{AA^{\prime}}\delta_{ B}{}^{C}+\frac{1}{8}D_{AA^{\prime}}\delta_{B}{}^{C}=-\frac{1}{8}D_{AA^{\prime}} \delta_{B}{}^{C},\] \[\alpha_{AA^{\prime}B}{}^{C}(L) = L_{0AA^{\prime}}\Gamma_{B}{}^{C}(L_{0})-\frac{1}{2}(L_{-1AA^{ \prime}}\Gamma_{B}{}^{C}(L_{1})+L_{1AA^{\prime}}\Gamma_{B}{}^{C}(L_{-1}))- \frac{1}{4}D_{AA^{\prime}}\Gamma_{B}{}^{C}(D) \tag{11}\] \[= \frac{1}{2}Z_{AA^{\prime}BB^{\prime}}{}^{CB^{\prime}}+\frac{1}{8} D_{AA^{\prime}}\delta_{B}{}^{C}\] \[= -\frac{1}{4}(\epsilon_{AB}D^{C}{}_{A^{\prime}}+\epsilon_{A}{}^{C} D_{BA^{\prime}}+D_{AA^{\prime}}\delta_{B}{}^{C})+\frac{1}{8}D_{AA^{\prime}} \delta_{B}{}^{C}\] \[= -\frac{1}{4}(\epsilon_{AB}D^{C}{}_{A^{\prime}}+\epsilon_{A}{}^{C} D_{BA^{\prime}})-\frac{1}{8}D_{AA^{\prime}}\delta_{B}{}^{C},\] and \[\gamma_{AD}{}^{BC}(R)=\] \[\Gamma_{A}{}^{B}(\bar{L}_{0})\Gamma_{D}{}^{C}(\bar{L}_{0})-\frac {1}{2}(\Gamma_{A}{}^{B}(\bar{L}_{-1})\Gamma_{D}{}^{C}(\bar{L}_{1})+\Gamma_{A}{ }^{B}(\bar{L}_{1})\Gamma_{D}{}^{C}(\bar{L}_{-1}))-\frac{1}{4}\Gamma_{A}{}^{B} (D)\Gamma_{D}{}^{C}(D)\] \[=\frac{1}{4}(\nabla_{DD^{\prime}}\bar{Z}^{CD^{\prime}}{}_{AB^{ \prime}}{}^{BB^{\prime}}+\frac{3}{4}\delta_{A}{}^{B}\delta_{D}{}^{C})=\frac{1} {4}(-\delta_{A}{}^{B}\delta_{D}{}^{C}+\frac{3}{4}\delta_{A}{}^{B}\delta_{D}{} ^{C})=-\frac{1}{16}\delta_{A}{}^{B}\delta_{D}{}^{C},\] \[\gamma_{AD}{}^{BC}(L)=\] \[\Gamma_{A}{}^{B}(L_{0})\Gamma_{D}{}^{C}(L_{0})-\frac{1}{2}( \Gamma_{A}{}^{B}(L_{-1})\Gamma_{D}{}^{C}(L_{1})+\Gamma_{A}{}^{B}(L_{1})\Gamma _{D}{}^{C}(L_{-1}))-\frac{1}{4}\Gamma_{A}{}^{B}(D)\Gamma_{D}{}^{C}(D)\] \[=\frac{1}{4}(\nabla_{DD^{\prime}}Z^{CD^{\prime}}{}_{AB^{\prime}}{ }^{BB^{\prime}}+\frac{3}{4}\delta_{A}{}^{B}\delta_{D}{}^{C})=\frac{1}{4}( \delta_{A}{}^{C}\delta_{D}{}^{B}+\epsilon_{AD}\epsilon^{CB}-\delta_{A}{}^{B} \delta_{D}{}^{C}+\frac{3}{4}\delta_{A}{}^{B}\delta_{D}{}^{C})\] \[=\frac{1}{4}\delta_{A}{}^{C}\delta_{D}{}^{B}+\frac{1}{4}\epsilon_{ AD}\epsilon^{CB}-\frac{1}{16}\delta_{A}{}^{B}\delta_{D}{}^{C} \tag{12}\] where we have used the following identities \[\eta_{AA^{\prime}BB^{\prime}}=\epsilon_{AB}\epsilon_{A^{\prime}B^{\prime}},\quad \epsilon_{AA^{\prime}BB^{\prime}CC^{\prime}DD^{\prime}}=i(\epsilon_{AB}\epsilon _{CD}\epsilon_{A^{\prime}C^{\prime}}\epsilon_{B^{\prime}D^{\prime}}-\epsilon_{A^ {\prime}B^{\prime}}\epsilon_{C^{\prime}D^{\prime}}\epsilon_{AC}\epsilon_{BD}) \tag{13}\] with \(\Gamma\) defined later on in Eq. (11). Whence we further have \[\beta_{C}{}^{A}(R)\equiv\gamma_{CD}{}^{DA}(R)=-\frac{1}{16}\delta_{C}{}^{A}, \quad\beta_{C}{}^{A}(L)\equiv\gamma_{CD}{}^{DA}(L)=\frac{11}{16}\delta_{C}{}^{A}. \tag{14}\] ### Two constraints on \(sl(2,\mathbb{C})\) Casimirs and dilatation As a warm-up, let us start with the massless scalar field \(\phi\), whose equation of motion is given by \[\nabla_{a}\nabla^{a}\phi=0. 
\tag{15}\] The Lie derivative acting on the scalar field yields \[\mathcal{L}_{X}\mathcal{L}_{Y}\phi=X^{a}\nabla_{a}(Y^{b}\nabla_{b}\phi)=(X^{a} \nabla_{a}Y^{b})\nabla_{b}\phi+X^{a}Y^{b}\nabla_{a}\nabla_{b}\phi, \tag{3.11}\] whereby we obtain \[\bar{\mathcal{C}}^{2}\phi=\bar{Z}^{a}{}_{a}{}^{b}\nabla_{b}\phi+\bar{H}^{ab} \nabla_{a}\nabla_{b}\phi=\frac{1}{2}D^{a}\nabla_{a}\phi=\frac{1}{2}\mathcal{D}\phi \tag{3.12}\] with \(\mathcal{D}=\mathcal{L}_{D}\). Thus we have \[\bar{\mathcal{L}}^{2}\phi=(\frac{1}{4}\mathcal{D}^{2}+\frac{1}{2}\mathcal{D})\phi. \tag{3.13}\] It is easy to see that we also have \[\mathcal{L}^{2}\phi=(\frac{1}{4}\mathcal{D}^{2}+\frac{1}{2}\mathcal{D})\phi. \tag{3.14}\] Now let us move onto the massless spinor field with the helicity \(s=-\frac{1}{2}\), whose dynamics is governed by the Weyl equation4 Footnote 4: Kindly please refer to Appendix C for the relation between the helicity and the spinor index. \[\nabla_{A^{\prime}A}\phi^{A}=0. \tag{3.15}\] Not only is this equation equivalent to \(\nabla_{A^{\prime}}{}^{A}\phi^{B}=\nabla_{A^{\prime}}{}^{(A}\phi^{B)}\), but also implies \(\nabla_{a}\nabla^{a}\phi^{A}=0\). To proceed, let us first recall the definition of Lie derivative of the spinor field \(\phi^{A}{}_{B}\) with respect to a conformal Killing vector field \(\xi^{a}\)[19] \[\mathcal{L}_{\xi}\phi^{A}{}_{B}=\xi^{a}\nabla_{a}\phi^{A}{}_{B}-\phi^{C}{}_{B }\Gamma_{C}{}^{A}(\xi)+\phi^{A}{}_{C}\Gamma_{B}{}^{C}(\xi) \tag{3.16}\] with \[\Gamma_{B}{}^{A}(\xi)=\frac{1}{2}(\nabla_{BB^{\prime}}\xi^{AB^{\prime}}-\frac{ 1}{4}\nabla_{c}\xi^{c}\delta_{B}{}^{A}). \tag{3.17}\] The generalization of this definition to other types of spinor fields is obvious. It is noteworthy that \(\nabla_{a}\Gamma_{B}{}^{A}(\xi)=0\) when restricted to the dilatation and Killing vector fields in our Minkowski spacetime. Thus with \(X\) and \(Y\) such vector fields, we have \[\mathcal{L}_{X}\mathcal{L}_{Y}\phi^{A} = X^{a}\nabla_{a}(\mathcal{L}_{Y}\phi^{A})-\mathcal{L}_{Y}\phi^{B }\Gamma_{B}{}^{A}(X) \tag{3.18}\] \[= X^{a}\nabla_{a}(Y^{b}\nabla_{b}\phi^{A}-\phi^{B}\Gamma_{B}{}^{A} (Y))-(Y^{b}\nabla_{b}\phi^{B}-\phi^{C}\Gamma_{C}{}^{B}(Y))\Gamma_{B}{}^{A}(X)\] \[= (X^{a}\nabla_{a}Y^{b})\nabla_{b}\phi^{A}+X^{a}Y^{b}\nabla_{a} \nabla_{b}\phi^{A}\] \[-\nabla_{a}\phi^{B}(X^{a}\Gamma_{B}{}^{A}(Y)+Y^{a}\Gamma_{B}{}^{ A}(X))+\phi^{C}\Gamma_{C}{}^{B}(Y)\Gamma_{B}{}^{A}(X).\] Whence we have \[\bar{\mathcal{C}}^{2}\phi^{A} = \bar{Z}^{a}{}_{a}{}^{b}\nabla_{b}\phi^{A}+\bar{H}^{ab}\nabla_{a} \nabla_{b}\phi^{A}-2\alpha^{CC^{\prime}}{}_{B}{}^{A}(R)\nabla_{CC^{\prime}}\phi ^{B}+\phi^{C}\beta_{C}{}^{A}(R) \tag{3.19}\] \[= \frac{1}{2}D^{b}\nabla_{b}\phi^{A}+\frac{1}{4}D^{c}\nabla_{c} \phi^{A}-\frac{1}{16}\phi^{A}=\frac{3}{4}\mathcal{D}\phi^{A}+\frac{5}{16}\phi ^{A},\] where we have used \({\cal D}\phi^{A}=D^{a}\nabla_{a}\phi^{A}-\frac{1}{2}\phi^{A}\) in the last step. 
Similarly, we also have \[{\cal C}^{2}\phi^{A} = \frac{1}{2}D^{b}\nabla_{b}\phi^{A}+\frac{1}{4}D^{c}\nabla_{c}\phi^{ A}+\frac{1}{2}(\epsilon^{C}{}_{B}D^{AC^{\prime}}+\epsilon^{CA}D_{B}{}^{C^{\prime}}) \nabla_{CC^{\prime}}\phi^{B}+\frac{11}{16}\phi^{A} \tag{30}\] \[= \frac{3}{4}D^{b}\nabla_{b}\phi^{A}+\frac{1}{2}D_{B}{}^{C^{\prime} }\nabla_{C^{\prime}}{}^{A}\phi^{B}+\frac{11}{16}\phi^{A}=\frac{3}{4}D^{b}\nabla _{b}\phi^{A}+\frac{1}{2}D^{b}\nabla_{b}\phi^{A}+\frac{11}{16}\phi^{A}\] \[= \frac{5}{4}{\cal D}\phi^{A}+\frac{21}{16}\phi^{A}.\] Thus we end up with the following result \[\bar{\cal L}^{2}\phi^{A}=(\frac{1}{4}{\cal D}^{2}+\frac{3}{4}{\cal D}+\frac{5} {16})\phi^{A},\quad{\cal L}^{2}\phi^{A}=(\frac{1}{4}{\cal D}^{2}+\frac{5}{4}{ \cal D}+\frac{21}{16})\phi^{A}. \tag{31}\] Next let us consider the Maxwell equation \[\nabla_{A^{\prime}A}\phi^{AB}=0 \tag{32}\] for the electromagnetic field with \(s=-1\), where \(\phi^{AB}\) is symmetric with respect to \(A\) and \(B\). The Lie derivative acting on \(\phi^{AB}\) gives rise to \[{\cal L}_{X}{\cal L}_{Y}\phi^{AB} = X^{a}\nabla_{a}({\cal L}_{Y}\phi^{AB})-2{\cal L}_{Y}\phi^{C(B} \Gamma_{C}{}^{A)}(X) \tag{33}\] \[= X^{a}\nabla_{a}(Y^{b}\nabla_{b}\phi^{AB}-2\phi^{C(B}\Gamma_{C}{} ^{A)}(Y))-2Y^{b}\nabla_{b}\phi^{C(B}\Gamma_{C}{}^{A)}(X)\] \[+2\phi^{D(B}\Gamma_{D}{}^{|C|}(Y)\Gamma_{C}{}^{A)}(X)+2\phi^{CD} \Gamma_{D}{}^{(B}(Y)\Gamma_{C}{}^{A)}(X)\] \[=(X^{a}\nabla_{a}Y^{b})\nabla_{b}\phi^{AB}+X^{a}Y^{b}\nabla_{a} \nabla_{b}\phi^{AB}\] \[-2\nabla_{a}\phi^{C(B}(X^{a}\Gamma_{C}{}^{A)}(Y)+Y^{a}\Gamma_{C}{ }^{A)}(X))\] \[+2\phi^{D(B}\Gamma_{D}{}^{|C|}(Y)\Gamma_{C}{}^{A)}(X)+2\phi^{CD} \Gamma_{D}{}^{(B}(Y)\Gamma_{C}{}^{A)}(X),\] whereby we have \[\bar{\cal C}^{2}\phi^{AB} = \frac{1}{2}D^{a}\nabla_{a}\phi^{AB}+\frac{1}{2}D^{a}\nabla_{a} \phi^{AB}-\frac{1}{8}\phi^{AB}-\frac{1}{8}\phi^{AB}\] \[= D^{a}\nabla_{a}\phi^{AB}-\frac{1}{4}\phi^{AB}={\cal D}\phi^{AB} +\frac{3}{4}\phi^{AB}\] \[{\cal C}^{2}\phi^{AB} = \frac{1}{2}D^{a}\nabla_{a}\phi^{AB}+\frac{1}{2}D^{a}\nabla_{a} \phi^{AB}+D^{a}\nabla_{a}\phi^{AB}+\frac{11}{8}\phi^{AB}+\frac{3}{8}\phi^{AB} \tag{34}\] \[= 2D^{a}\nabla_{a}\phi^{AB}+\frac{7}{4}\phi^{AB}=2{\cal D}\phi^{AB }+\frac{15}{4}\phi^{AB}.\] Thus we wind up with \[\bar{\cal L}^{2}\phi^{AB}=(\frac{1}{4}{\cal D}^{2}+{\cal D}+\frac{3}{4})\phi^{ AB},\quad{\cal L}^{2}\phi^{AB}=(\frac{1}{4}{\cal D}^{2}+2{\cal D}+\frac{15}{4}) \phi^{AB}. \tag{35}\] The above spinor analysis has already involved all the necessary ingredients for one to obtain the corresponding result for the massless spinor field with other helicities. To be more specific, one can find \[\bar{\mathcal{L}}^{2}\phi^{A\cdots} = (\frac{1}{4}\mathcal{D}^{2}+\frac{n+2}{4}\mathcal{D}+\frac{n^{2}+4n} {16})\phi^{A\cdots}=(\frac{\mathcal{D}}{2}-\frac{s}{2}+1)(\frac{\mathcal{D}}{2} -\frac{s}{2})\phi^{A\cdots},\] \[\mathcal{L}^{2}\phi^{A\cdots} = (\frac{1}{4}\mathcal{D}^{2}+\frac{3n+2}{4}\mathcal{D}+\frac{9n^{2 }+12n}{16})\phi^{A\cdots}=(\frac{\mathcal{D}}{2}-\frac{3s}{2}+1)(\frac{ \mathcal{D}}{2}-\frac{3s}{2})\phi^{A\cdots}\] for the massless spinor field with helicity \(s=-\frac{n}{2}\), which is totally symmetric with respect to \(n\) indices and satisfies the equation of motion \[\nabla_{A^{\prime}A}\phi^{A\cdots}=0. 
\tag{40}\] Note that the massless spinor field \(\phi^{A^{\prime}B^{\prime}C^{\prime}\cdots}\) with helicity \(s=\frac{n}{2}\) is simply the complex conjugation of the massless spinor field with helicity \(s=-\frac{n}{2}\), so we have \[\bar{\mathcal{L}}^{2}\phi^{A^{\prime}\cdots} = (\frac{1}{4}\mathcal{D}^{2}+\frac{3n+2}{4}\mathcal{D}+\frac{9n^{2 }+12n}{16})\phi^{A^{\prime}\cdots}=(\frac{\mathcal{D}}{2}+\frac{3s}{2}+1)( \frac{\mathcal{D}}{2}+\frac{3s}{2})\phi^{A^{\prime}\cdots},\] \[\mathcal{L}^{2}\phi^{A^{\prime}\cdots} = (\frac{1}{4}\mathcal{D}^{2}+\frac{n+2}{4}\mathcal{D}+\frac{n^{2}+ 4n}{16})\phi^{A^{\prime}\cdots}=(\frac{\mathcal{D}}{2}+\frac{s}{2}+1)(\frac{ \mathcal{D}}{2}+\frac{s}{2})\phi^{A^{\prime}\cdots}\] for the massless spinor field with helicity \(s=\frac{n}{2}\). On the other hand, by Eq. (19) for a conformal primary wavefunction, we have \[\mathcal{L}^{2}=h(h-1),\quad\bar{\mathcal{L}}^{2}=\bar{h}(\bar{h}-1). \tag{41}\] So it is reasonable to expect that the candidate basis out of the simultaneous eigen vectors of \(D,L^{2},L_{0},\bar{L}^{2},\bar{L}_{0}\) can be constructed in terms of the infinite tower of descendants of the left and right highest (lowest) weight conformal primary wavefunction of \(sl(2,\mathds{C})\) Lie algebra, where the celestial conformal weights are determined by its bulk scaling dimension5. Actually, it has been shown in [20] that this is the case for the massless scalar field. Eq. (40) and Eq. (41) obtained here provide us with an important foundation to generalize [20] to the massless field with arbitrary helicity. In the subsequent section, we shall specify the explicit correspondence between the 2D celestial conformal weights and the 4D bulk scaling dimension for all the on-shell conformal primary wavefunctions. Footnote 5: The basis constructed in this way is discrete, compared to the frequently considered one from the unitary principal series. ## 4 Correspondence between the 4D bulk scaling dimension and 2D celestial conformal weights By Eq. (41) and Eq. (40), we have the following relationship between the celestial conformal weight and bulk scaling dimension \[R_{+}:h=\frac{\mathcal{D}}{2}-\frac{3s}{2}+1,\quad\texttt{or}\quad R_{-}:h=- \frac{\mathcal{D}}{2}+\frac{3s}{2}, \tag{42}\] \[\bar{R}_{+}:\bar{h}=\frac{\mathcal{D}}{2}-\frac{s}{2}+1,\quad\texttt{or}\quad\bar{ R}_{-}:\bar{h}=-\frac{\mathcal{D}}{2}+\frac{s}{2} \tag{4.2}\] for the on-shell conformal primary wavefunctions with negative helicity \(s\). However, this relationship demonstrates a certain ambiguity. To fix it, we like to find the explicit expression for the on-shell conformal primary wavefunctions. As such, we follow [21; 22] to choose \(o^{A}=\frac{1}{\sqrt{q^{\,\,\,\,\,\,\,\,\,\,}x}}\lambda^{A},\iota^{A}=D^{AA^{ \prime}}o_{A^{\prime}}\). Accordingly, we have \[\nabla_{A^{\prime}A}o^{B}=-\frac{1}{2}o_{A^{\prime}}o_{A}o^{B},\quad\nabla_{A ^{\prime}A}\iota^{B}=\delta_{A}{}^{B}o_{A^{\prime}}-\frac{1}{2}o_{A^{\prime}} o_{A}\iota^{B}, \tag{4.3}\] which implies \[\nabla_{A^{\prime}A}o^{A}=0,\quad\nabla_{A^{\prime}A}\iota^{A}=\frac{3}{2}o_{A ^{\prime}}. 
\tag{4.4}\] Whence we further have \[\nabla_{A^{\prime}A}o^{(AB\cdots}\iota^{CD\cdots)} = \frac{m}{m+n}\nabla_{A^{\prime}A}(o^{A}o^{(B\cdots}\iota^{CD\cdots )}+\frac{n}{m+n}\nabla_{A^{\prime}A}o^{(CD\cdots}\iota^{|A|}\iota^{B\cdots)} \tag{4.5}\] \[= \frac{mn}{m+n}o^{(BC\cdots}\iota^{D\cdots)}o_{A^{\prime}}-\frac{ mn-3n-n(n-1)}{2(m+n)}o^{(CD\cdots}\iota^{B\cdots)}o_{A^{\prime}}\] \[= \frac{n(n+m+2)}{2(m+n)}o^{(BC\cdots}\iota^{D\cdots)}o_{A^{\prime }},\] where \(o^{AB\cdots}\) denotes the spinor field produced by the product of \(m\) os and \(\iota^{CD\cdots}\) denotes the spinor field produced by the product of \(n\)\(s\). With this, we can construct the following linearly independent on-shell conformal primary wavefunctions for the massless spinor fields with negative helicity \(s\) \[\phi^{AB\cdots}=\phi^{\Delta}(x)o^{AB\cdots},\quad\hat{\phi}^{AB\cdots}=\phi ^{-s+1}(x)o^{(A\cdots}\iota^{B\cdots)},\quad\tilde{\phi}^{AB\cdots}=(x^{2})^ {\Delta+s-1}\phi^{\Delta}(x)\iota^{AB\cdots}, \tag{4.6}\] where we have used \(D_{AA^{\prime}}D^{AB^{\prime}}=\frac{1}{2}x^{2}\delta_{A^{\prime}}{}^{B^{ \prime}}\) with the scalar function defined as \(\phi^{\Delta}(x)=\frac{1}{(q\cdot x)^{\Delta}}\). It is noteworthy that besides the first and third kinds of on-shell conformal primary wavefunctions, which are familiar to the community and related to each other by the so-called shadow transformation, we also find the second kind of on-shell conformal primary wavefunctions for \(s\leq-1\). According to Table.1, we further list the celestial conformal weights and bulk scaling dimension for the above explicit on-shell conformal primary wavefunctions in Table 26. Whence we obtain a definite relationship between the celestial conformal weights and bulk scaling dimension for each on-shell conformal primary wavefunction, i.e., Footnote 6: It is noteworthy that the second kind of on-shell conformal primary wavefunctions displays a different correspondence between the celestial spin and bulk helicity from the first and third ones, whose celestial spin is related to the bulk helicity simply by \(J=\pm s\). \[R_{-}\quad\texttt{and}\quad\bar{R}_{-}\quad\quad\texttt{for}\quad\phi^{AB \cdots},\] \[R_{+}\quad\texttt{and}\quad\bar{R}_{-}\quad\quad\texttt{for}\quad \hat{\phi}^{AB\cdots},\] \[R_{+}\quad\texttt{and}\quad\bar{R}_{+}\quad\quad\texttt{for}\quad \tilde{\phi}^{AB\cdots}. \tag{4.7}\] As pointed out before, the on-shell massless spinor field with positive helicity is simply the complex conjugation of that with negative helicity. So it is not hard to obtain the parallel results for the massless spinor field with positive helicity, which we shall present below for completeness. Namely, Eq. (3.29) together with Eq. (3.28) gives rise to \[R_{+}:\bar{h}=\frac{\mathcal{D}}{2}+\frac{s}{2}+1,\quad\texttt{or}\quad R_{-}: \bar{h}=-\frac{\mathcal{D}}{2}-\frac{s}{2}, \tag{4.8}\] and \[\bar{R}_{+}:\bar{h}=\frac{\mathcal{D}}{2}+\frac{3s}{2}+1,\quad\texttt{or}\quad R _{-}:\bar{h}=-\frac{\mathcal{D}}{2}-\frac{3s}{2}. \tag{4.9}\] Such an ambiguity in the correspondence between the celestial conformal weights and bulk scaling dimension can be resolved by examining the explicit quantities for each kind of on-shell conformal primary wavefunctions in Table 3. 
As a result, we have \[R_{-}\quad\texttt{and}\quad\bar{R}_{-}\quad\texttt{for}\quad\phi^ {A^{\prime}B^{\prime}\cdots},\] \[R_{-}\quad\texttt{and}\quad\bar{R}_{+}\quad\texttt{for}\quad\hat{ \phi}^{A^{\prime}B^{\prime}\cdots},\] \[R_{+}\quad\texttt{and}\quad\bar{R}_{+}\quad\texttt{for}\quad\tilde {\phi}^{A^{\prime}B^{\prime}\cdots}. \tag{4.10}\] ## 5 Discussion Although the bulk dilatation does not belong to the isometry Poincare group of our Minkowski spacetime, it can be regarded as an emergent symmetry of the solution \begin{table} \begin{tabular}{c|c|c|c|c|c} & \(h\) & \(\bar{h}\) & \(\Delta\) & \(J\) & \(\mathcal{D}\) \\ \hline \(\phi^{A^{\prime}B^{\prime}\cdots}\) & \(\frac{\Delta+s}{2}\) & \(\frac{\Delta-s}{2}\) & \(\Delta\) & \(s\) & \(-\Delta+2s\) \\ \(\hat{\phi}^{A^{\prime}B^{\prime}\cdots}\) & \(-s-\frac{o}{2}+\frac{1}{2}\) & \(\frac{o}{2}+\frac{1}{2}\) & \(-s+1\) & \(-s-o\) & \(s-o-1\) \\ \(\tilde{\phi}^{AB\cdots}\) & \(\frac{\Delta-s}{2}\) & \(\frac{\Delta+s}{2}\) & \(\Delta\) & \(-s\) & \(\Delta+2s-2\) \\ \end{tabular} \end{table} Table 2: The 2D celestial conformal weights (conformal dimension and spin) and 4D bulk scaling dimension for the on-shell conformal primary wavefunctions with negative helicity \(s\), where \(o\) denotes the number of \(o^{A}\) in \(\hat{\phi}^{AB\cdots}\). \begin{table} \begin{tabular}{c|c|c|c|c|c} & \(h\) & \(\bar{h}\) & \(\Delta\) & \(J\) & \(\mathcal{D}\) \\ \hline \(\phi^{A^{\prime}B^{\prime}\cdots}\) & \(\frac{\Delta+s}{2}\) & \(\frac{\Delta-s}{2}\) & \(\Delta\) & \(s\) & \(-\Delta-2s\) \\ \(\hat{\phi}^{A^{\prime}B^{\prime}\cdots}\) & \(\frac{o}{2}+\frac{1}{2}\) & \(s-\frac{o}{2}+\frac{1}{2}\) & \(s+1\) & \(-s+o\) & \(-s-o-1\) \\ \(\tilde{\phi}^{A^{\prime}B^{\prime}\cdots}\) & \(\frac{\Delta-s}{2}\) & \(\frac{\Delta+s}{2}\) & \(\Delta\) & \(-s\) & \(\Delta-2s-2\) \\ \end{tabular} \end{table} Table 3: The 2D celestial conformal weights (conformal dimension and spin) and 4D bulk scaling dimension for the on-shell conformal primary wavefunctions with positive helicity \(s\), obtained by taking the complex conjugation of those with negative helicity \(-s\), where \(o\) denotes the number of \(o^{A^{\prime}}\) in \(\hat{\phi}^{A^{\prime}B^{\prime}\cdots}\). space of equations of motion for the massless field dictated by the unitary representation of the Poincare group, reminiscent of the hidden conformal symmetry of the Kerr black hole discovered in [23]. With this in mind, we have shown that the \(sl(2,\mathds{C})\times D\) symmetry dictated candidate basis for the massless particle representation of the Poincare group can be constructed out of the infinite tower of the descendants of the left and right highest (lowest) weight conformal primary wavefunction of \(sl(2,\mathds{C})\) Lie algebra, where the celestial conformal weights are further determined in an explicit manner by the bulk scaling dimension through solving out the exact on-shell conformal primary wavefunctions for the massless field with arbitrary helicity. In particular, on top of the two kinds of familiar-looking on-shell conformal primary wavefunctions, which are related to each other by the shadow transformation, we also find another set of independent on-shell conformal primary wavefunctions for the massless field with helicity \(|s|\geq 1\). In addition, for the massless field with helicity \(|s|\geq 1\), one is also required to introduce the gauge potential to define the Klein-Gordon inner product[24]. 
So we present the exact on-shell conformal primary wavefunctions as well as the corresponding celestial conformal weights and bulk scaling dimension in Appendix D for the electromagnetic potential, which is supposed to be generalized readily to the massless field with larger helicity. However, to show that our candidate basis is really the basis for the massless particle representation of the Poincare group, one is required to generalize the detailed completeness analysis for the massless scalar field in [20; 25] to the massless field with arbitrary helicity, where the aforementioned new set of independent on-shell conformal primary wavefunctions may be an indispensable part for the basis completeness. We hope to address this important issue elsewhere in the future. ## Acknowledgements We are grateful to Yichen Feng, Shengyi Liu, Sirui Shuai, Yu Tian, and Xiaoning Wu for their stimulating discussions. This work is partly supported by the National Key Research and Development Program of China with Grant No. 2021YFC2203001 and National Natural Science Foundation of China with Grant No. 12075026. ## Appendix A Conformal algebra in the \(d\)-dimensional Minkowski spacetime For \(d\)-dimensional Minkowski spacetime with \(x^{\mu}\) the Lorentz coordinates, the global conformal Killing vector fields can be written as \[P_{\mu}=\partial_{\mu},\quad D=x^{\mu}\frac{\partial}{\partial x^{\mu}},\quad M _{\mu\nu}=x_{\mu}\partial_{\nu}-x_{\nu}\partial_{\mu},\quad K_{\mu}=2x_{\mu}x ^{\nu}\partial_{\nu}-x^{2}\partial_{\mu}\] (A.1) with the non-vanishing commutation relations given by \[[D,P_{\mu}]=-P_{\mu},\quad[P_{\rho},M_{\mu\nu}]=\eta_{\rho\mu}P_{ \nu}-\eta_{\rho\nu}P_{\mu},\] \[[M_{\mu\nu},M_{\rho\sigma}]=-(\eta_{\mu\rho}M_{\nu\sigma}-\eta_{ \mu\sigma}M_{\nu\rho}-\eta_{\nu\rho}M_{\mu\sigma}+\eta_{\nu\sigma}M_{\mu\rho}),\] \[[D,K_{\mu}]=K_{\mu},\quad[K_{\mu},P_{\nu}]=-2(\eta_{\mu\nu}D+M_{ \mu\nu}). \tag{100}\] ## Appendix B The formula for the complex coordinate with \(w=0\) as the north pole Note that the north pole itself corresponds to \(w=\infty\) in the complex coordinate \(w\) given by the north pole based stereographic projection, so to circumvent the potential subtleties associate with the north pole, we prefer to choose \(\lambda^{\Sigma}=(1,w)\). Accordingly, \(q^{\mu}=(1+w\bar{w},w+\bar{w},i(w-\bar{w}),1-w\bar{w})\) with the north pole located at \(w=0\). As a result, Eq. (12) and Eq. (13) will be modified as follows \[w^{\prime}=\frac{c+dw}{a+bw},\quad\bar{w}^{\prime}=\frac{\bar{c}+\bar{d}\bar{ w}}{\bar{a}+\bar{b}\bar{w}}, \tag{101}\] and \[l_{-1}\to-T_{1},\quad\quad\bar{l}_{-1}\to-\bar{T}_{1},\] \[l_{1}\to-T_{-1},\quad\quad\bar{l}_{1}\to-\bar{T}_{-1},\] \[l_{0}\to-T_{0},\quad\quad\bar{l}_{0}\to-\bar{T}_{0}. \tag{102}\] Similarly, Eq. (17) and Eq. (19) will also get modified in the following way \[\mathcal{O}(x^{\prime\mu}=\Lambda^{\mu}{}_{\nu}x^{\nu};w^{\prime}=\frac{c+dw} {a+bw},\bar{w}^{\prime}=\frac{\bar{c}+\bar{d}\bar{w}}{\bar{a}+\bar{b}\bar{w}} )=|\frac{\partial w^{\prime}}{\partial w}|^{-\frac{\Delta+J}{2}}|\frac{ \partial\bar{w}^{\prime}}{\partial\bar{w}}|^{-\frac{\Delta-J}{2}}D(\Lambda) \mathcal{O}(x;w,\bar{w}) \tag{103}\] with \(|\frac{\partial w^{\prime}}{\partial w}|=\frac{1}{(a+bw)^{2}}\) and \[\mathcal{L}_{L_{n}}\mathcal{O}=\mathcal{L}_{T_{-n}}\mathcal{O}=(w ^{1-n}\partial_{w}+h(1-n)w^{-n})\mathcal{O},\] \[\mathcal{L}_{\bar{L}_{n}}\mathcal{O}=\mathcal{L}_{\bar{T}_{-n}} \mathcal{O}=(\bar{w}^{1-n}\partial_{\bar{w}}+\bar{h}(1-n)\bar{w}^{-n}) \mathcal{O}. 
\tag{104}\] Appendix C \(s=-\frac{n}{2}\) for unprimed spinor fields and \(s=\frac{n}{2}\) for primed spinor fields Obviously, the massless scalar field has zero helicity. On the other hand, as stated in the main body of our paper, unprimed and primed massless spinor fields have negative and positive helicities, respectively. Here we take the massless spinor field with one index as an example to show that with our convention this is the case, i.e., \[W^{\mu}\phi^{E}=-\frac{1}{2}\epsilon^{\mu\nu\rho\sigma}\mathcal{L} _{P_{\nu}}\mathcal{L}_{M_{\rho\sigma}}\phi^{E}=\frac{1}{2}\epsilon^{\mu\nu\rho \sigma}\partial_{\nu}\phi^{F}\nabla_{FF^{\prime}}x_{\rho}(\frac{\partial}{ \partial x^{\sigma}})^{EF^{\prime}}\] \[=\frac{i}{2}\sigma^{\mu}{}_{AA^{\prime}}(\epsilon^{AB}\epsilon^{ CD}\epsilon^{A^{\prime}C^{\prime}}\epsilon^{B^{\prime}D^{\prime}}-\epsilon^{A^{ \prime}B^{\prime}}\epsilon^{C^{\prime}D^{\prime}}\epsilon^{AC}\epsilon^{BD}) \nabla_{BB^{\prime}}\phi^{F}\epsilon_{CF}\epsilon_{C^{\prime}F^{\prime}} \delta_{D}{}^{E}\delta_{D^{\prime}}{}^{F^{\prime}}\] \[=\sigma^{\mu}{}_{AA^{\prime}}(\frac{i}{2}\epsilon^{AB}\epsilon^{ A^{\prime}B^{\prime}}\nabla_{BB^{\prime}}\phi^{E}-i\epsilon^{A^{\prime}B^{ \prime}}\epsilon^{EB}\nabla_{B^{\prime}B}\phi^{A})\] \[=-\frac{i}{2}\eta^{\mu\nu}\partial_{\nu}\phi^{E}=-\frac{i}{2} \mathcal{L}_{P^{\mu}}\phi^{E}, \tag{100}\] which tells us that the unprimed \(\phi^{E}\) and the primed \(\phi^{E^{\prime}}\) have helicity \(-\frac{1}{2}\) and \(\frac{1}{2}\), respectively. ## Appendix D On-shell conformal primary wavefunctions of electromagnetic potential For the Killing vector fields or the dilatation field \(X\) and \(Y\) in Minkowski spacetime, we have \[\mathcal{L}_{X}\mathcal{L}_{Y}A_{c}=X^{a}\nabla_{a}(Y^{b}\nabla_{ b}A_{c}+A_{b}\nabla_{c}Y^{b})+(Y^{b}\nabla_{b}A_{a}+A_{b}\nabla_{a}Y^{b}) \nabla_{c}X^{a}=\] \[X^{a}\nabla_{a}Y^{b}\nabla_{b}A_{c}+X^{a}Y^{b}\nabla_{a}\nabla_{ b}A_{c}+X^{a}\nabla_{c}Y^{b}\nabla_{a}A_{b}+Y^{b}\nabla_{c}X^{a}\nabla_{b}A_{a}+ \nabla_{a}(Y^{b}\nabla_{c}X^{a})A_{b} \tag{101}\] for our electromagnetic potential \(A_{c}\), where we have used the fact the second derivative of \(X\) and \(Y\) vanishes. With the gauge condition \(\nabla_{c}A^{c}=D^{c}A_{c}=0\), we further have \[\bar{\mathcal{C}}^{2}A_{c}=\bar{Z}^{a}{}_{a}^{b}\nabla_{b}A_{c}+ \bar{H}^{ab}\nabla_{a}\nabla_{b}A_{c}+2\bar{Z}^{a}{}_{c}^{b}\nabla_{a}A_{b}+ \nabla_{a}(\bar{Z}^{b}{}_{c}^{a})A_{b}\] \[=\frac{1}{2}D^{b}\nabla_{b}A_{c}-\frac{1}{4}x^{2}\square A_{c}- \frac{1}{2}\mathcal{D}A_{c}+\frac{i}{2}\nabla_{a}A_{b}\epsilon^{ab}{}_{cd}D^{ d}+\frac{1}{2}A_{c}\] \[=\mp\frac{1}{2}F_{cd}D^{d}=\pm\frac{1}{2}\mathcal{D}A_{c} \tag{102}\] for the helicities \(s=\pm 1\), which correspond to \[\frac{1}{2}\epsilon_{abcd}F^{cd}=\pm iF_{ab}, \tag{103}\] through the relations \(F^{AA^{\prime}BB^{\prime}}=\phi^{AB}\epsilon^{A^{\prime}B^{\prime}}\) for \(s=-1\) and \(F^{AA^{\prime}BB^{\prime}}=\phi^{A^{\prime}B^{\prime}}\epsilon^{AB}\) for \(s=1\). Likewise, we have \[\mathcal{C}^{2}A_{c}=\mp\frac{1}{2}\mathcal{D}A_{c} \tag{104}\] for \(s=\pm 1\). 
Thus we end up with \[\mathcal{L}^{2}=\frac{\mathcal{D}}{2}(\frac{\mathcal{D}}{2}-s),\quad\bar{ \mathcal{L}}^{2}=\frac{\mathcal{D}}{2}(\frac{\mathcal{D}}{2}+s), \tag{105}\] which implies the relationship between the bulk scaling dimension and celestial conformal weight with a certain ambiguity as follows \[R_{+}:h =\frac{\mathcal{D}}{2}-\frac{s}{2}+\frac{1}{2},\quad\texttt{or} \quad R_{-}:h=-\frac{\mathcal{D}}{2}+\frac{s}{2}+\frac{1}{2},\] \[\bar{R}_{+}:\bar{h} =\frac{\mathcal{D}}{2}+\frac{s}{2}+\frac{1}{2},\quad\texttt{or} \quad\bar{R}_{-}:\bar{h}=-\frac{\mathcal{D}}{2}-\frac{s}{2}+\frac{1}{2} \tag{100}\] for the on-shell conformal primary wavefunctions of the electromagnetic potential. Furthermore, with the aforementioned gauge condition as well as the choice of the null tetrad \[l_{AA^{\prime}}=\iota_{A}\iota_{A^{\prime}},\quad n_{AA^{\prime}}=o_{A}o_{A^{ \prime}},\quad m_{AA^{\prime}}=\iota_{A}o_{A^{\prime}},\quad\bar{m}_{AA^{ \prime}}=o_{A}\iota_{A^{\prime}}, \tag{101}\] the on-shell conformal primary wavefunctions of the electromagnetic potential can be constructed as follows \[A_{a}=\bar{m}_{a}\phi^{\Delta}(x),\quad\tilde{A}_{a}=m_{a}(x^{2})^{\Delta-1} \phi^{\Delta}(x), \tag{102}\] which correspond to \[\phi^{AB}=(1-\Delta)\phi^{\Delta}(x)o^{A}o^{B},\quad\tilde{\phi}^{AB}=2(1- \Delta)(x^{2})^{\Delta-2}\phi^{\Delta}(x)\iota^{A}\iota^{B} \tag{103}\] through the relation \(\phi_{AB}=\nabla_{C^{\prime}(A}{A^{C^{\prime}}}_{B)}=\nabla_{C^{\prime}A}{A^{ C^{\prime}}}_{B}\). Here we have used \[x_{a}=\frac{x^{2}}{2}n_{a}+l_{a}, \tag{104}\] and \[\nabla_{a}\bar{m}_{b}=-\bar{m}_{a}n_{b},\quad\nabla_{a}n_{b}=-n_{a}n_{b} \tag{105}\] which implies \(\nabla_{a}\bar{m}^{a}=0\) and \(\Box m_{a}=0\). Obviously, the above on-shell conformal wavefunctions with \(\Delta=1\) correspond to the pure gauge modes. For \(\Delta=1\), one can instead construct the non-gauge modes for the electromagnetic potential as follows \[A_{a}=\bar{m}_{a}\phi^{1}(x)\ln\frac{q\cdot x}{q\cdot x_{0}},\quad\tilde{A}_{ a}=m_{a}\phi^{1}(x)\ln(\frac{q\cdot x_{0}}{q\cdot x}x^{2}) \tag{106}\] with \(x_{0}=(1,0,0,1)\) the reference point in our Minkowski spacetime, which give rise to \[\phi^{AB}=\phi^{1}(x)o^{A}o^{B},\quad\tilde{\phi}^{AB}=-2(x^{2})^{-1}\phi^{1} (x)\iota^{A}\iota^{B}, \tag{107}\] respectively. Last, by \[\nabla_{a}l_{b}=n_{a}l_{b}-m_{a}\bar{m}_{b}-\bar{m}_{a}m_{b}, \tag{108}\] one can show that \[\hat{A}_{a}=(-\frac{x^{2}}{2}n_{a}+l_{a})\phi^{2}(x),\quad\hat{A}_{a}=(-\frac {x^{2}}{2}n_{a}+l_{a})\phi^{2}(x)\ln x^{2}+\phi^{1}(x)(m_{a}\frac{\bar{m}\cdot x _{0}}{q\cdot x_{0}}-\bar{m}_{a}\frac{m\cdot x_{0}}{q\cdot x_{0}}) \tag{109}\] are also the on-shell conformal primary wavefunctions of the electromagnetic potential, corresponding to a pure gauge and \[\hat{\phi}^{AB}=4\phi^{2}(x)o^{(A}{}_{t}{}^{B)}, \tag{16}\] respectively. So we have already succeeded in obtaining the on-shell conformal primary wavefunctions of the electromagnetic potential and its field strength for the negative helicity. The corresponding celestial conformal weights and bulk scaling dimension are listed in Table 4. The result for the positive helicity can be readily obtained by noting that the on-shell conformal primary wavefunctions for the positive helicity is related to those for the negative helicity by the complex conjugation. 
Whence the correspondence between the celestial conformal weights and bulk scaling dimension can be specified as follows \[R_{-}\quad\texttt{and}\quad\bar{R}_{-}\quad\texttt{for}\quad A_{a}(\bar{A}_{a}),\] \[R_{+}\quad\texttt{and}\quad\bar{R}_{-}\quad\texttt{for}\quad\hat{A}_{a},\] \[R_{-}\quad\texttt{and}\quad\bar{R}_{+}\quad\texttt{for}\quad\bar{\bar{A}}_{a},\] \[R_{+}\quad\texttt{and}\quad\bar{R}_{+}\quad\texttt{for}\quad\bar{A}_{a}(\bar{\bar{A}}_{a}). \tag{17}\]
2309.09064
Fast Triangle Counting
Listing and counting triangles in graphs is a key algorithmic kernel for network analyses including community detection, clustering coefficients, k-trusses, and triangle centrality. We design and implement a new serial algorithm for triangle counting that performs competitively with the fastest previous approaches on both real and synthetic graphs, such as those from the Graph500 Benchmark and the MIT/Amazon/IEEE Graph Challenge. The experimental results use the recently-launched Intel Xeon Platinum 8480+ and CPU Max 9480 processors.
David A. Bader
2023-09-16T18:18:50Z
http://arxiv.org/abs/2309.09064v2
# Fast Triangle Counting ###### Abstract Listing and counting triangles in graphs is a key algorithmic kernel for network analyses including community detection, clustering coefficients, k-trusses, and triangle centrality. We design and implement a new serial algorithm for triangle counting that performs competitively with the fastest previous approaches on both real and synthetic graphs, such as those from the Graph500 Benchmark and the MIT/Amazon/IEEE Graph Challenge. The experimental results use the recently-launched Intel Xeon Platinum 8480+ and CPU Max 9480 processors. Graph Algorithms, Triangle Counting, High Performance Data Analytics ## I Introduction Triangle listing and counting is a highly-studied problem in computer science and is a key building block in various graph analysis techniques such as clustering coefficients [1], k-truss [2], and triangle centrality [3]. The MIT/Amazon/IEEE Graph Challenge [4, 5] includes triangle counting as a fundamental method in graph analytics. There are at most \(\binom{n}{3}=\Theta\big{(}n^{3}\big{)}\) triangles in a graph \(G=(V,E)\) with \(n=|V|\) vertices and \(m=|E|\) edges. The focus of this paper is on sequential triangle counting algorithms for sparse graphs that are stored in compressed, sparse row (CSR) format, rather than adjacency matrix format. The naive approach using triply-nest loops to check if each triple \((u,v,w)\) forms a triangle takes \(\mathcal{O}\left(n^{3}\right)\) time and is inefficient for sparse graphs. It is well-known that listing all triangles in G is \(\Omega\Big{(}m^{\frac{n}{2}}\Big{)}\) time [6, 7]. The main contributions of this paper are: * A new triangle algorithm that combines the techniques of cover-edges, forward, and hashing and runs in \(\mathcal{O}\left(m\cdot a(G)\right)\), where \(a(G)\) is the arboricity of the graph; * An experimental study of an implementation of this novel triangle counting algorithm on real and synthetic graphs; and * Freely-available, open-source software for more than 20 triangle counting algorithms and variants in the C programming language. ### _Related work_ There are faster algorithms for triangle counting, such as the work of Alon, Yuster, and Zwick [8] that require an adjacency matrix for the input graph representation and use fast matrix multiplication. As this is infeasible for large, sparse graph, their and other fast multiply methods are outside the scope of this paper. Latapy [7] provides a survey on triangle counting algorithms for very large, sparse graphs. One of the earliest algorithms, _tree-listing_, published in 1978 by Itai and Rodeh [6] first finds a rooted spanning tree of the graph. After iterating through the non-tree edges and using criteria to identify triangles, the tree edges are removed and the algorithm repeats until there are no edges remaining. This approach takes \(\mathcal{O}\left(m^{\frac{3}{2}}\right)\) time (or \(\mathcal{O}\left(n\right)\) for planar graphs). The most common triangle counting algorithms in the literature include _vertex-iterator_[6, 7] and _edge-iterator_[6, 7] approaches that run in \(\mathcal{O}\left(m\cdot d_{\mbox{max}}\right)\) time [6, 9, 10], where \(d_{\mbox{max}}\) is the maximum degree of a vertex in the graph. In vertex-iterator, the adjacency list \(N(v)\) of each vertex \(v\in V\) is doubly-enumerated to find all \(2\)-paths \((u,v,w)\) where \(u,w\in N(v)\). Then, the graph is searched for the existence of the closing edge \((u,w)\) by checking if \(w\in N(u)\) (or if \(u\in N(w)\)). 
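To make the vertex-iterator scheme concrete, the following is a minimal Python sketch (illustrative only, with hypothetical function names; the implementations benchmarked in this paper are written in C). The adjacency structure is assumed to be a dictionary of neighbor sets, and since every triangle is discovered once per choice of center vertex, the raw count is divided by three.

```
from itertools import combinations

def vertex_iterator_count(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    count = 0
    for v, nbrs in adj.items():
        for u, w in combinations(sorted(nbrs), 2):   # all 2-paths u - v - w
            if w in adj[u]:                          # does the closing edge (u, w) exist?
                count += 1
    return count // 3    # each triangle is counted once per centre vertex

# toy graph: a 4-cycle 0-1-2-3 with chord 0-2 contains exactly two triangles
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
print(vertex_iterator_count(adj))   # 2
```

The nested enumeration of neighbor pairs around each center vertex is the source of the \(\mathcal{O}\left(m\cdot d_{\mbox{max}}\right)\) worst case quoted above.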
Arifuzzaman _et al._[11] study modifications of the vertex-iterator algorithm based on various methods for vertex ordering. In edge-iterator, each edge \((u,v)\) in the graph is examined, and the intersection of \(N(u)\) and \(N(v)\) is computed to find triangles. A common optimization is to use a _direction-oriented_ approach that only considers edges \((u,v)\) where \(u<v\). The variants of edge-iterator are often based on the algorithm used to perform the intersection. When the two adjacency lists are sorted, then _MergePath_ and _BinarySearch_ can be used. MergePath performs a linear scan through both lists counting the common elements. Makkar, Bader and Green [12] give an efficient MergePath algorithm for GPU. Mailthody _et al._[13] use an optimized two-pointer intersection (MergePath) for set intersection. BinarySearch, as the name implies, uses a binary search to determine if each element of the smaller list is found in the larger list. _Hash_ is another method for performing the intersection of two sets and it does not require the adjacency lists to be sorted. A typical implementation of Hash initializes a Boolean array of size \(m\) to all false. Then, positions in Hash corresponding to the vertex values in \(N(u)\) are set to true. Then \(N(v)\) is scanned, looking up in \(\Theta(1)\) time whether or not there is a match for each vertex. Chiba and Nishizeki published one of the earliest edge iterator with hashing algorithms for triangle finding in 1985 [14]. The running time is \(\mathcal{O}\left(a(G)m\right)\) where \(a(G)\) is defined as the arboricity of \(G\), which is upper-bounded \(a(G)\leq\lceil(2m+n)^{\frac{1}{2}}/2\rceil\)[14]. In 2018, Davis rediscovered this method, which he calls tri_simple in his comparison with SuiteSparse GraphBLAS [15]. According to Davis [15]: this algorithm "is already a non-trivial method. It requires expert knowledge of how Gustavson's method can be implemented efficiently, including a reduction of the result to a single scalar." Mowlaei [16] gave a variant of the edge-iterator algorithm that uses vectorized sorted set intersection and reorders the vertices using the reverse Cuthill-McKee heuristic. In 2005, Schank and Wagner [9, 10] designed a fast triangle counting algorithm called _forward_ (see Algorithm 1) that is a refinement of the edge-iterator approach. Instead of intersections of the full adjacency lists, the _forward_ algorithm uses a dynamic data structure \(A(v)\) to store a subset of the neighborhood \(N(v)\) for \(v\in V\). Initially each set \(A()\) is empty, and after computing the intersection of the sets \(A(u)\) and \(A(v)\) for each edge \((u,v)\) (with \(u<v\)), \(u\) is added to \(A(v)\). This significantly reduces the size of the intersections needed to find triangles. The running time is \(\mathcal{O}\left(m\cdot d_{\mbox{max}}\right)\). However, if one reorders the vertices in decreasing order of their degrees as a \(\Theta(n\log n)\) time pre-processing step, the forward algorithm's running time reduces to \(\mathcal{O}\left(m^{\frac{3}{2}}\right)\). Donato _et al._[17] implement the forward algorithm for shared-memory. Ortmann and Brandes [18] survey triangle counting algorithms, create a unifying framework for parsimonious implementations, and conclude that nearly every triangle listing variant is in \(\mathcal{O}\left(m\cdot a(G)\right)\). 
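Below is a compact Python sketch of Schank and Wagner's forward scheme described above (Algorithm 1, listed next); it is illustrative only, since the codes benchmarked here are written in C. Vertices are assumed to be numbered \(0,\ldots,n-1\) with each undirected edge stored in both adjacency lists; because \(u\) increases monotonically, every \(A(v)\) stays sorted and a merge-style intersection suffices.

```
def forward_count(n, adj):
    """adj[v]: iterable of neighbours of v; vertices are 0 .. n-1."""
    A = [[] for _ in range(n)]          # A[v]: already-processed neighbours u < v
    T = 0
    for u in range(n):
        for v in adj[u]:
            if u < v:
                i = j = 0               # merge-style intersection of sorted A[u], A[v]
                while i < len(A[u]) and j < len(A[v]):
                    if A[u][i] == A[v][j]:
                        T += 1; i += 1; j += 1
                    elif A[u][i] < A[v][j]:
                        i += 1
                    else:
                        j += 1
                A[v].append(u)          # appended in increasing u, so A[v] stays sorted
    return T

adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
print(forward_count(4, adj))            # 2 triangles, as expected
```

Replacing the merge intersection with a Boolean hash array over \(A(u)\) gives the forward-hashed variant of Algorithm 2.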
``` 0: Graph \(G=(V,E)\) 0: Triangle Count \(T\) 1:\(T\gets 0\) 2:\(\forall v\in V\) 3:\(A(v)\leftarrow\emptyset\) 4:\(\forall(u,v)\in E\) 5:if\((u<v)\) then 6:\(\forall v\in A(u)\)\(\cap A(v)\) 7:\(T\gets T+1\) 8:\(A(v)\gets A(v)\cup\{u\}\) ``` **Algorithm 1** Forward Triangle Counting [9, 10] The _forward-hashed_ algorithm [9, 10] (also called _compact-forward_[7]) is a variant of the forward algorithm that uses the hashing described above for the intersections of the \(A()\) sets, see Algorithm 2. Shun and Tangwongsan [19] parallelize the forward and forward-hashed algorithms for multicore systems. Low _et al._[20] derive a linear-algebra method for triangle counting that does not use matrix multiplication. Their algorithm results in the forward-hashed algorithm. ## II Algorithm Recently, we presented Algorithm 3[21] as a new method for finding triangles. This approach finds a subset of _cover edges_ from \(E\) such that every triangle contains at least one cover edge. ``` 0: Graph \(G=(V,E)\) 0: Triangle Count \(T\) 1:\(T\gets 0\) 2:\(\forall v\in V\) 3:\(A(v)\leftarrow\emptyset\) 4:\(\forall(u,v)\in E\) 5:if\((u<v)\) then 6:\(\forall w\in A(u)\) 7:\(\mbox{Hash}[v]\leftarrow\mbox{true}\) 8:\(\forall w\in A(v)\) 9:if\(\mbox{Hash}[w]\) then 10:\(T\gets T+1\) 11:\(\forall w\in A(u)\) 12:\(\mbox{Hash}[v]\leftarrow\mbox{false}\) ``` **Algorithm 2** Forward-Hashed Triangle Counting [9, 10] This algorithm uses breadth-first search (BFS) to find a reduced cover-edge set consisting of edges \((u,v)\) where the levels of vertices \(u\) and \(v\) are the same, i.e., \(L(u)\equiv L(v)\). From the result in [21], each triangle must contain at least one of these horizontal edges. Then each edge in the cover set is examined, and Hash is used to find the vertices \(w\) in the intersection of \(N(u)\) and \(N(v)\). A triangle \((u,v,w)\) is found based on logic about \(w\)'s level. The breadth-first search, including determining the level of each vertex and marking horizontal-edges, requires \(\mathcal{O}\left(n+m\right)\) time. The number of horizontal edges is \(\mathcal{O}\left(m\right)\). The intersection of each pair of vertices costs \(\mathcal{O}\left(d_{\mbox{max}}\right)\). Hence, Alg. 3 has complexity \(\mathcal{O}\left(m\cdot d_{\mbox{max}}\right)\). ``` 0: Graph \(G=(V,E)\) 0: Triangle Count \(T\) 1:\(\forall v\in V\) 2: if\(v\) unvisited, then BFS(\(G\), \(v\)) 3:\(\forall(u,v)\in E\) 4:if\((L(u)\equiv L(v))\) then \(\triangleright\)\((u,v)\) is horizontal 5: Add \((u,v)\) to \(G0\) 6:else 7: Add \((u,v)\) to \(G1\) 8:\(T\leftarrow\mbox{TC\_forward-hashed}(G0)\)\(\triangleright\) Alg. 2 9:\(\forall u\in V_{G1}\) 10:\(\forall v\in S_{G1}(u)\) 11:\(\mbox{Hash}[v]\leftarrow\mbox{true}\) 12:\(\forall v\in N_{G0}(u)\) 13:if\((u<v)\) then 14:\(\forall w\in N_{G1}(v)\) 15:if\(\mbox{Hash}[w]\) then 16:\(T\gets T+1\) 17:\(\forall v\in N_{G1}(u)\) 18:\(\mbox{Hash}[v]\leftarrow\mbox{false}\) ``` **Algorithm 3** Cover-Edge Triangle Counting [21] In this paper, we present our new triangle counting algorithm (Alg. 4), called _fast triangle counting_. This new triangle counting algorithm is similar with cover-edge triangle counting in Alg. 3 and uses BFS to assign a level to each vertex in lines 1 and 2. Next in lines 3 to 7, the edges \(E\) of the graph are partitioned into two sets \(E0\) - the horizontal edges where both endpoints are on the same level - and \(E1\) - the remaining tree and non-tree edges that span a level. 
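The BFS levelling and the edge partition just described can be sketched as follows (an illustrative Python sketch with hypothetical names, not the paper's CSR-based C implementation):

```
from collections import deque

def partition_by_bfs_level(n, adj):
    """Split the edges into horizontal edges E0 (endpoints on the same BFS
    level) and level-spanning edges E1."""
    level = [-1] * n
    for root in range(n):                        # handle disconnected graphs
        if level[root] == -1:
            level[root] = 0
            queue = deque([root])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if level[v] == -1:
                        level[v] = level[u] + 1
                        queue.append(v)
    E0, E1 = [], []
    for u in range(n):
        for v in adj[u]:
            if u < v:                            # each undirected edge once
                (E0 if level[u] == level[v] else E1).append((u, v))
    return level, E0, E1

adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
print(partition_by_bfs_level(4, adj))
# ([0, 1, 1, 1], [(1, 2), (2, 3)], [(0, 1), (0, 2), (0, 3)])
```

In this toy graph each of the two triangles contains exactly one horizontal edge, consistent with the cover-edge property underlying Alg. 3 and the new algorithm.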
Thus, we now have two graphs, \(G0=(V,E0)\) and \(G1=(V,E1)\), where \(E=E0\cup E1\) and \(E0\cap E1=\emptyset\). Triangles that are fully in \(G0\) are counted with one method and triangles not fully in \(G0\) are counted with another method. For \(G0\), the graph with horizontal edges, we count the triangles efficiently using the forward-hashed method (line 8). For triangles not fully in \(G0\), the algorithm uses the following approach to count these triangles. Using \(G1\), the graph that contains the edges that span levels, we use a hashed intersection approach in lines 9 to 18. As per the cover-edge triangle counting, we need to find the intersections of the adjacency lists from the endpoints of horizontal edges. Thus, we use \(G0\) to select the edges, and perform the hash-based intersections from the adjacency lists in graph \(G1\). The proof of correctness for cover-edge triangle counting is given in [21]. Alg. 4 is a hybrid version of this algorithm, that partitions the edge set, and uses two different methods to count these two types of triangles. The proof of correctness is still valid with these new refinements to the algorithm. The running time of Alg. 4 is the maximum of the running time of forward-hashing and Alg. 3. Alg. 4 uses hashing for the set intersections. For vertices \(u\) and \(v\) the cost is \(\min(d(u),d(v))\) since the algorithm can check if the neighbors of the lower-degree endpoint are in the hash set of the higher-degree endpoint. Over all \((u,v)\) edges in \(E\), these intersections take \(\mathcal{O}\left(m\cdot a(G)\right)\) expected time. Hence, Alg. 4 takes \(\mathcal{O}\left(m\cdot a(G)\right)\) expected time. Similar with the forward-hashed method, by pre-processing the graph by re-ordering the vertices in decreasing order of degree in \(\Theta(n\log n)\) time often leads to a faster triangle counting algorithm in practice. ## III Experimental Results We implemented more than 20 triangle counting algorithms and variants in C and use the Intel Development Cloud for benchmarking our results on a GNU/Linux node. The compiler is Intel(R) oneAPI DPC++/C++ Compiler 2023.1.0 (2023.1.0.20230320) and '-O2' is used as a compiler optimization flag. For benchmarking we compare the performance using two recently-launched Intel Xeon processors (Sapphire Rapids launched Q1'23) with two types of memory (DDR5 and HBM). The first node is a dedicated 2.00 GHz 56-core (112 thread) Intel(R) Xeon(R) Platinum 8480+ processor (formerly known as Sapphire Rapids) with 105M cache and 1024GB of DDR5 RAM. The second node is a dedicated 1.90 GHz 56-core (112 thread) Intel(R) Xeon(R) CPU Max 9480 processor (formerly known as Sapphire Rapids HBM) with 112.5M cache and 256GB of high-memory bandwidth (HBM) memory. Following the best practices of experimental algorithmics [22], we conduct the benchmarking as follows. Each algorithm is written in C and has a single argument - a pointer to the graph in a compressed sparse row (CSR) format. The input is treated as read-only. If the implementation needs auxiliary arrays, pre-processing steps, or additional data structures, it is charged the full cost. Each implementation must manage memory and not contain any memory leaks - hence, any dynamically allocated memory must be freed prior to returning the result. The output from each implementation is an integer with the number of triangles found. Each algorithm is run ten times, and the mean running time is reported. 
To reduce variance for random graphs, the same graph instance is used for all of the experiments. The source code is sequential C code without any explicit parallelization. The same coding style and effort was used for each implementation. Experimental results are presented in Table I for the Intel Xeon Platinum 8480+ processor with DDR5 memory and in Table II for the Intel Xeon Max 9480 processor with HBM memory. For each graph, we give the number of vertices (\(n\)), the number of edges (\(m\)), the number of triangles, and \(k\) - the percentage of graph edges that are horizontal after running BFS from arbitrary roots. The algorithms tested are \begin{tabular}{l l} IR & : Treclist from Itai-Rodeh [6] \\ V & : Vertex-iterator \\ VD & : Vertex Iterator (direction-oriented) \\ EM & : Edge Iterator with MergePath for set intersection \\ EMD & : Edge Iterator with MergePath for set intersection (direction-oriented) \\ EB & : Edge Iterator with BinarySearch for set intersection (direction-oriented) \\ EP & : Edge Iterator with Partitioning for set intersection \\ EPD & : Edge Iterator with Partitioning for set intersection (direction-oriented) \\ EH & : Edge Iterator with Hashing for set intersection (direction-oriented) \\ F & : Forward \\ FH & : Forward with Hashing \\ FHD & : Forward with Hashing and degree-ordering \\ TS & : Tri\_simple (Davis [15]) \\ LA & : Linear Algebra (CMU [20]) \\ CE & : Cover Edge (Bader, [21]) \\ CED & : Cover Edge with degree-ordering (Bader, [21]) \\ Bader & : this paper \\ BaderD this paper with degree-ordering \\ \end{tabular} While all of the algorithms tested have the same asymptotic worst-case complexity, the running times range by orders of magnitude between the approaches. In nearly every case where edge direction-orientation is used, the performance is typically improved by a constant factor up to two. The vertex-iterator and Itai-Rodeh algorithms are the slowest across the real and synthetic datasets. The timings between the Intel Xeon Platinum 8480+ and Intel Xeon Max 9480 are consistent, with the 8480+ a few percent faster than the 9480 processor. This is likely due to the fact that we are using single-threaded code on one core, and that the 8480+ is clocked at a slightly higher rate (2.00GHz vs 1.90GHz). In general, the forward algorithms and its variants tend to perform the fastest, followed by the edge-iterator, and then the vertex-iterator methods. The new fast triangle counting algorithm is competitive with the forward approaches, and may be useful when the results of a BFS are already available from the analyst's workflow, which is often the case. The performance of the road network graphs (roadNet-CA, roadNet-PA, roadNet-TX) are outliers from the other graphs. Road networks, unlike social networks, often have only low degree vertices (for instance, many degree four vertices), and large diameters. The percentage of horizontal edges (\(k\)) of these road networks is under 15% and we see less benefit of the new approach due to this low value of \(k\). In addition, the sorting of vertices by degree for the road network significantly harms the performance compared with the default ordering of the input. This may be due to the fact that there are few unique degree values, and sorting decimates the locality in the graph data structure. The linear algebra approach [20] does not typically perform as well on the real and synthetic social networks. 
For example, on a large RMAT graph of scale 18, the linear-algebra method takes seconds, whereas the new algorithm runs in under a second. However, the linear algebra approach performs well on the road networks.

## IV Conclusions

In this paper we design and implement a novel, fast triangle counting algorithm that uses new techniques to improve performance. It is the first algorithm in decades to shine new light on triangle counting: rather than being yet another variant of the well-known vertex-iterator and edge-iterator methods, it uses a wholly new method of cover edges to reduce the work of set intersections. We provide extensive performance results in a parsimonious framework for benchmarking serial triangle counting algorithms for sparse graphs in a uniform manner. The results use one of Intel's latest processor families, the Intel Sapphire Rapids (Platinum 8480+) and Sapphire Rapids HBM (CPU Max 9480), launched in the first quarter of 2023. The new triangle counting algorithm is particularly beneficial when the results of a BFS are already available, which is often the case in network science. We expect this work to inspire interest within the Graph Challenge community in implementing versions of the presented algorithms for large shared-memory, distributed-memory, GPU, or multi-GPU frameworks.

## V Future Work

The fast triangle counting algorithm (Alg. 4) can be readily parallelized by using a parallel BFS, partitioning the edge set in parallel, applying a parallel triangle counting algorithm to graph \(G0\), and parallelizing the set intersections for graph \(G1\). In future work, we will implement this parallel algorithm and compare its performance with other parallel approaches.

## VI Reproducibility

The sequential triangle counting source code is open source and available on GitHub at [https://github.com/Bader-Research/triangle-counting](https://github.com/Bader-Research/triangle-counting). The input graphs are from the Stanford Network Analysis Project (SNAP), available from [http://snap.stanford.edu/](http://snap.stanford.edu/).
2310.04423
The BELSAR dataset: Mono- and bistatic full-pol L-band SAR for agriculture and hydrology
The BELSAR dataset is a unique collection of high-resolution airborne mono- and bistatic fully-polarimetric synthetic aperture radar (SAR) data in L-band, alongside concurrent measurements of vegetation and soil bio-geophysical variables measured in maize and winter wheat fields during the summer of 2018 in Belgium. This innovative dataset, the collection of which was funded by the European Space Agency (ESA), helps addressing the lack of publicly-accessible experimental datasets combining multistatic SAR and in situ measurements. As such, it offers an opportunity to advance the development of SAR remote sensing science and applications for agricultural monitoring and hydrology. This paper aims to facilitate its adoption and exploration by offering comprehensive documentation and integrating its multiple data sources into a unified, analysis-ready dataset.
Jean Bouchat, Emma Tronquo, Anne Orban, Karlus A. C. de Macedo, Niko E. C. Verhoest, Pierre Defourny
2023-09-12T08:28:11Z
http://arxiv.org/abs/2310.04423v1
# The BELSAR dataset: Mono- and bistatic full-pol L-band SAR for agriculture and hydrology ###### Abstract The BELSAR dataset is a unique collection of high-resolution airborne mono- and bistatic fully-polarimetric synthetic aperture radar (SAR) data in L-band, alongside concurrent measurements of vegetation and soil bio-geophysical variables measured in maize and winter wheat fields during the summer of 2018 in Belgium. This innovative dataset, the collection of which was funded by the European Space Agency (ESA), helps addressing the lack of publicly-accessible experimental datasets combining multistatic SAR and in situ measurements. As such, it offers an opportunity to advance the development of SAR remote sensing science and applications for agricultural monitoring and hydrology. This paper aims to facilitate its adoption and exploration by offering comprehensive documentation and integrating its multiple data sources into a unified, analysis-ready dataset. ## Background & Summary Agriculture and soil moisture monitoring are essential for sustainable food production, water resource management and mitigating the impact of climate change on crop yield and ecosystem health. Today, the best-established large-scale operational agricultural monitoring systems are mainly based on optical remote sensing [1]. However, they are often hampered by the presence of clouds [2], which obscure the view of the sensors. In contrast, Synthetic Aperture Radars (SAR) are active sensors, largely independent of weather conditions, capable of interacting with and capturing valuable information about both vegetation canopies and underlying soils [3, 4]. The recent European Space Agency (ESA) Sentinel-1 (S1) mission, providing systematic and global coverage of dense SAR time series, paved the way for the development of operational applications like the Copernicus Emergency Services and the open-source Sen4CAP system to implement the European Union's Common Agricultural Policy. The success of this operational mission triggered the intent to launch S1 companion satellite to enhance this all-weather earth-observation capacity in a bistatic mode. Bistatic SARs are radar systems in which the transmitter and receiver are physically separated, unlike monostatic systems in which they share the same location. Multistatic systems, meanwhile, have several receivers for a common transmitter. The most simple multistatic system comprises of both an active monostatic sensor and a passive bistatic one. These systems allow the acquisition of images using different geometries and configurations, thereby capturing multi-dimensional scattering effects that could not be recorded by monostatic systems alone. These flexible systems offer the advantage of helping to differentiate between the often intertwined relative contributions of vegetation and soil, thus improving the retrieval performances of soil moisture and crop biophysical variables retrieval [5]. Despite its considerable potential, bi- or multistatic SAR applications for vegetation and soil monitoring have remained limited. It seems that this scarcity can mostly be attributed to a lack of comprehensive, well-documented experimental datasets combining mono- and bistatic SAR acquisitions and in situ measurements of vegetation and soil biogeophysical variables [6]. As a result, the potential of bistatic SAR for these applications has mainly been investigated via radiative transfer models [5, 6, 7, 8, 9]. 
In this context, the ESA funded the BELSAR-Campaign project, an airborne and in situ measurement campaign that took place during the 2018 growing season in Belgium. The result is the BELSAR dataset [10], a collection of data containing high-resolution fully-polarimetric mono- and bistatic synthetic aperture radar (SAR) times series in L-band and concurrent field measurements of vegetation and soil bio-geophysical variables. The SAR data were acquired with an airborne multistatic SAR system operated by MetaSensing BV. The field measurements were collected both during and after the crop growing season in ten maize and ten winter wheat fields simultaneously with the SAR acquisitions. Several studies exploiting the BELSAR dataset have already been published. Bouchat _et al._[11] have used the dataset to assess the potential of simultaneous mono- and bistatic SAR acquisitions for agriculture and soil moisture monitoring applications, as well as the impact of maize row structure on the SAR backscatter. Tronquo _et al._[12] have presented a semi-empirical method based on effective roughness modeling to retrieve soil moisture in bare agricultural fields and have shown an increase in the accuracy of soil moisture estimation by using several polarizations at the same time. Finally, Bouchat _et al._[13] have obtained promising results for green area index (GAI) retrieval in maize fields using the Water Cloud Model [14] and dual-polarized SAR data in L-band. Yet, the potential of the BELSAR dataset is still far from having been fully investigated and, given the vast possibilities offered by such an innovative and unique collection of data, other users will want to exploit it in their research in agriculture, hydrology, change detection, or other SAR remote sensing techniques and applications. Therefore, this paper aims to facilitate its uptake through its thorough documentation, as well as through the integration of the different sources of data from BELSAR-Campaign into a single so-called integrated dataset provided with it. ## Methods ### Airborne SAR acquisitions The design and implementation of the airborne SAR campaign was carried out by the Centre Spatial de Liege and MetaSensing BV. Multi-temporal mono- and bistatic fully-polarimetric (HH, HV, VH, and VV) airborne L-band SAR data were acquired with the MetaSAR-L systems over the BELSAR area of interest, in fig. 1, and processed with the MetaSAR-Pro software, MetaSensing BV's proprietary airborne SAR processor. Mono- and bistatic airborne radar data were acquired simultaneously by two left-looking L-band SAR operated by MetaSensing BV on-board two CESNA Gran Caravan airplanes specifically adapted for the mission. The planes and radar system are shown in fig. 2. The radar operated in frequency modulated continuous wave mode (FMCW) at a central frequency of 1.3 GHz, with a pulse repetition frequency (PRF) at 1004 Hz, and a sampling frequency of 50 MHz. The authorized signal bandwidth, allocated by the Belgian Institute for Post and Telecommunications (BIPT), was limited to 50 MHz. The sensors, capable of providing imaging with spatial resolution up to 1 m, were equipped with two flat antennas: a squared one and a rectangular one, with a nominal antenna look-angle of 45\({}^{\circ}\), and a beamwidth of 40\({}^{\circ}\) in elevation and respectively 40\({}^{\circ}\) and 20\({}^{\circ}\) in azimuth. They were synchronized through special techniques based on a dedicated high-accuracy GPS-disciplined oscillator (GPSDO) system. 
Additionally, high-end Global Navigation Satellite System (GNSS)/Inertial Navigation System (INS) devices were installed on the sensors to be able to track their navigation and attitude precisely. Each sensor transmitted and received in a fully-polarimetric ping-pong mode on a chirp-to-chirp basis, allowing both sensors to simultaneously collect mono- and bistatic SAR images depending on the preferred configuration. Radar data were collected during a series of five flight missions, labeled F1 to F5, between 31 May (F1) and 10 September 2018 (F5), with a temporal baseline of approximately one month. Table 1 lists the dates of these flights. On each flight mission, mono- and bistatic data were acquired by flying the airplanes in close formation in two different bistatic geometries, i.e., in across-track (XTI) and in along-track (ATI) configuration. One aircraft flew steadily at 2500 m above mean sea level, while the other positioned itself in two distinct locations behind the first. Figure 3 depicts the nominal XTI and ATI flight configurations. The actual XTI and ATI baselines were about 35 m and 450 m, respectively, for all flight missions except for the first one (F1) when the along-track baseline was about 900 m and the horizontal separation was in a 60 to 80 m range in the ATI configuration. The actual baselines are shown on fig. 4 for the first two flight missions, F1 and F2. These acquisition configurations were established on the basis of ESA's requirement to reproduce the configuration planned for the SAOCOM-CS satellite mission [15]. Three partially overlapping passes were necessary to cover the area of interest. These passes divided the 4.5 km wide area depicted in fig. 5 into three main tracks labeled Alpha (A), Zulu (Z), and Bravo (B). Four trihedral corner reflectors with a side-length of 75 cm, one of which is shown in fig. 2, were deployed in a fourth, shorter track labeled Zulu short (Zs). They were placed in a row in the across-track direction, and used to produce a geometrical reference as well as the point spread function (PSF) for monostatic acquisitions. Their elevation angle was adjusted for each flight mission, depending on its altitude. Their positions were measured with a precision of 1 m using a GNSS receiver. The complete set of tables describing the tracks and corner reflector installation can be found in the final report of the BELSAR-Campaign project [10]. Radar data were processed using the Global Back-Projection (GBP) algorithm, which also handles motion compensation. They were delivered as \(\sigma\)-calibrated single look complex (SLC) focused SAR data in ground range geometry, with a ground resolution of 1 m in azimuth and 4 m in slant range, after Hann windowing with a 50 MHz transmitted pulse. The SAR images were geo-referenced and co-registered on a sub-pixel level based on the absolute position accuracy of the navigation data, i.e., 0.75 m. The implemented SAR processing chain is shown in fig. 6. Geometric calibration was performed using the corner reflectors. For bistatic data, two range delays had to be defined, one for each system. One of the range delays was adjusted relatively to the absolute calibrated one according to a global offset obtained from a coherence-based fine sub-pixel co-registration. The same radiometric calibration was applied to both mono- and bistatic data. The absolute calibration constant, \(K\), was evaluated for each flight mission using the four corner reflector responses. 
Antenna pointing direction and incidence angles were computed using post-processed navigation data and an external digital elevation model (DEM). In the XTI configuration, the corner reflectors were assumed to behave in the same way for both mono- and bistatic sensors. In the ATI configuration, corner reflectors were not visible on bistatic images. The radiometric offset for bistatic ATI data was therefore based on monostatic ATI and bistatic XTI data. Finally, a polarimetric calibration was also applied following the procedure described in Fore _et al.[16]_. Data were calibrated for co-pol and cross-pol channel imbalances and phase bias, i.e., amplitude and relative phase differences between co-polarization channels, at both transmission and reception using the corner reflectors. Cross-pol imbalance and phase bias were estimated using the ratio of the averaged cross-pol responses from a large number of pixels. ### In situ measurements Vegetation and soil bio-geophysical variables were measured in the ten maize and ten winter wheat fields by two teams from the Earth and Life Institute at UCLouvain and the Hydro-Climate Extremes Lab (H-CEL) at Ghent University quasi-simultaneously with the airborne SAR acquisitions. The in situ measurements were performed with a delay of maximum six days with respect to the airborne SAR missions. Table 1 contains the dates of the different measurements. The sampled fields--labeled with a letter, M for maize and W for wheat, and a number ranging from 1 to 10--are located at the BELAIR Hebania test site, in the Hesbaye region of Belgium. This site belongs to the global Joint Experiment for Crop Assessment and Monitoring (JECAM) network. The location of the site and the fields is shown in fig. 1. The area corresponds to a typical landscape of intensive agriculture in Belgium. The fields are relatively large, flat, and homogeneous, with a uniform topsoil texture of silt loam. The major crops in the area are wheat, potatoes, beets, and maize. At the time of the first acquisition, winter wheat and maize crops were already in place. Harvesting took place between the second (F2) and third (F3) acquisitions for winter wheat, and between the third (F3) and last (F5) for maize. Some SAR acquisitions were therefore conducted over bare fields. It should be noted that maize stalks were still present on the fields after harvest (see Table 1). Finally, due to the overlap in the flight tracks, certain fields were imaged several times on the same date in both flight configurations, i.e., ATI and XTI. Other fields, however, were imaged only during certain flight missions due to the length of the tracks varying between flight missions, and two maize fields namely M9 and M10, were never imaged by the airborne SAR system. ### Vegetation Sowing density, canopy cover fraction and GAI, plant height, plant development stage, wet biomass and vegetation dry matter content were measured by the team from UCLouvain to characterize the vegetation canopy. The measurement protocols for maize and winter wheat are similar, but adaptations were applied in consideration of the specificities of both crops. Measurements were recorded starting in May (F1) until the harvest, which occurred at different dates in the sampled fields, i.e., between June (F2) and end of July (F3) in winter wheat and between end of July (F3) and September (F5) in maize. 
In each field, three homogeneous and representative plots were determined before the campaign based on high-resolution Pleiades images and normalized difference vegetation index (NDVI) profiles derived from them. The plots were marked with flags and geolocalized at the beginning of the growing season. They were named after the field they belong to with the letter a, b, or c attached, e.g., M2a for the first plot of the maize field M2. Field measurements of vegetation were made in the center of these plots around each airborne SAR acquisition. They were chosen to be at least 30 m from the edges of the field and from each other. The plots were squares with a side length of 15 m in maize fields and for winter wheat the side length was 25 m, i.e., the distance between two tractor lines. Correct use of the values provided in the vegetation dataset involves averaging the values of the variables in the three plots to obtain a value representative of the field. Plant density was measured once at the beginning of the growing season by measuring both interline and interplant distances. To obtain the interline distance in winter wheat fields, the distance between six rows was measured three times per plot for different rows. It was then divided by five, the number of intervals between six rows and averaged. In maize fields, row spacing was measured directly between two rows three times per plot across different rows. The interplant distance was derived in the same way in both crops by counting the number of plants in a one-meter length of sowing row three times per plot. The canopy cover fraction (FCover), i.e., the fraction of ground surface covered by green plant material, and the GAI, i.e., half of the total area of green plant in the canopy per unit of horizontal ground surface area, were measured by means of digital hemispherical photography (DHP) processed with the CAN-EYE software[17]. Note that the GAI is the biophysical variable actually estimated in the remote sensing of the leaf area index (LAI)[18]. In each plot, ten photos were taken with a nadir view approximately 1 m above the canopy using a system consisting of a camera equipped with a hemispherical lens mounted on a 3 m telescopic L-shaped pole. Between each photo, the operator walked five steps, or approximately 3.75 m. The photos were taken each time before the operators entered the plot to avoid altering the measurement. The phenological development of the crops was reported on the BBCH scale [19]. Only one value was recorded per plot-in case plants in the same plot were at different stages of development, the BBCH stage of the majority of the plants was selected. The height of the plants was measured with a ruler from the ground level to its highest part, i.e., leaf, flower, ear or panicle depending on the crop and the stage of development, without extending it manually. For each run, nine plants were measured in each plot to derive a field average. The wet biomass and the dry matter content of the crop vegetation were measured by destructive sampling. In each plot, all the plants along three 1 m rows were cut and then weighed in the field to determine their total fresh weight. The three harvested rows were randomly selected diagonally from each other. A subsample of the cut plants was then randomly selected, weighed and stored in a micro-perforated plastic bag which was then transported to UCLouvain for oven drying at 60 \({}^{\circ}\)C for 72 h. 
The dry weight of the subsample was then measured to obtain the dry matter content of the plants. There were no weeds in the plots. Pictures of the subsamples in the oven and of the crop cutting are shown in fig. 8.

### Soil

The team from Ghent University monitored three soil variables: bulk density, surface soil moisture and surface roughness. Samples were recorded over the entire surface of the studied fields. Bulk density was measured in all fields alongside the first airborne SAR acquisition (F1) using Kopecky rings. Five samples were taken per field, in order to compute field average values and within-field standard deviations. Additional samples were taken if the bulk density changed due to tillage operations in later flight missions. Volumetric soil moisture samples were taken in all ten winter wheat and ten maize fields concurrently with each airborne SAR acquisition. During the first acquisition, in May (F1), no samples could be taken in fields W8, W9 and W10, because tillage operations took place during the acquisition time. Soil moisture was monitored using Time Domain Reflectometry (TDR) sensors with 11 cm rods. At least ten locations per field were monitored, with three repetitions per location. Field average volumetric soil moisture was then calculated by averaging all measurements within each reference plot. A pin profilometer of 1 m length with a pin spacing of 1 cm, shown in fig. 8, was used to measure soil surface roughness. Roughness samples were taken in all ten winter wheat fields concurrently with every flight campaign, except for field W7 on F4, in August, when tillage operations were taking place at the time of sampling. Surface roughness samples in maize fields could only be taken on F1 and F2. On F3, the maize vegetation was too dense to take samples, resulting in missing values. Between the July (F3) and August (F4) campaigns, M1, M6 and M8 were harvested, which made it possible to acquire roughness measurements on F4 and F5. Between F4 and F5, three additional maize plots were harvested (M3, M4 and M9). This way, six maize plots could be sampled on F5. The roughness measures were taken in two directions, i.e., along and across the direction of tillage. Pictures of the pin profilometer in both directions were taken and then digitized to correct for tilted pictures and to allow determination of correlation lengths (\(l\)) and root-mean-square (RMS) heights (\(s\)). Both were calculated according to the procedure described by Davidson _et al._[20]. \(s\) was calculated as \[s=\sqrt{\frac{\sum_{i=1}^{N}z_{i}^{2}}{N-1}}, \tag{1}\] with \(N\) the number of profile points and \(z_{i}\) the surface height of the profile points. To estimate \(l\), the normalized autocorrelation function \(A(\tau)\), for a lag \(\tau\) corresponding to a horizontal displacement of \(k\) profile points, was first defined as \[A(\tau)=\frac{\sum_{i=1}^{N-k}z_{i}z_{i+k}}{\sum_{i=1}^{N}z_{i}^{2}}, \tag{2}\] then \(l\) was obtained by linearly interpolating between two lags, \(\tau_{1}\) and \(\tau_{2}\), that bracket \(A(\tau)=e^{-1}\) as \[l=\tau_{1}+(e^{-1}-A(\tau_{1}))\frac{\tau_{2}-\tau_{1}}{A(\tau_{2})-A(\tau_{1})}. \tag{3}\] During each acquisition, at least five profiles per field and along each direction were taken, and for each field that was sampled, the average and standard deviation of the correlation length and root-mean-square height were determined.

## Data Records

The BELSAR dataset is available upon request at [https://doi.org/10.5270/ESA-bccf2d9](https://doi.org/10.5270/ESA-bccf2d9).
In it, the SAR, vegetation and soil data are supplied in several files that have been combined into an integrated dataset available at [https://figshare.com/s/e6e480924a1021a42028](https://figshare.com/s/e6e480924a1021a42028). ### SAR data The airborne SAR images can be found in the _RadarData_ folder. The SAR data are stored in 320 NetCDF files--2 sensors (SAR, i.e., monostatic, and BISAR, i.e., bistatic) \(\times\) 5 flights missions (F1, F2, F3, F4, and F5) \(\times\) 4 flight tracks (A, Z, B, and Zs) \(\times\) 2 bistatic configurations (ATI and XTI) \(\times\) 4 polarizations (HH, HV, VH, and VV)--and delivered with all ancillary metadata that are necessary for further processing and analysis, including antenna pattern, navigation data, digital elevation model, and position of the focused pixels in geographical coordinates (WGS84) [21]. The flight track corresponding to each SAR acquisition is provided in _BELSAR_airborne_acquisitions_map_track_datetime.csv_. ### In situ data The vegetation and soil datasets can be found in the _Insitu_ folder, in which the main files are: * _BelSAR_maize_ tab - Vegetation biophysical variables recorded in each plot of each maize field * _BelSAR_wheat_ tab - Vegetation biophysical variables recorded in each plot of each winter wheat field * _Field_average_bulkdensity_ tab - Mean bulk density for each field [g cm\({}^{-3}\)] * _Field_std_dev_bulkdensity_ tab - Standard deviation of the bulk density for each field [g cm\({}^{-3}\)] * _Field_average_ tab - Mean volumetric soil moisture for each field [%] * _Field_std_dev_ tab - Standard deviation of the volumetric soil moisture for each field [%] * _Raw_data_ tab - Overview of the raw data: latitude, longitude (WGS84), volumetric soil moisture [%], period [\(\upmu\)s], attenuation and permittivity of all TDR samples. For F3, F4 and F5, the latter three can be missing due to sensor failure. * _Field_average_corr_length_ tab - Mean correlation length for each field [cm] * _Field_average_RMSheight_ tab - Mean root-mean-square height for each field [cm] * _Field_std_dev_corr_length_ tab - Standard deviation of the correlation length for each field [cm] * _Field_std_dev_RMSheight_ tab - Standard deviation of the root-mean-square height for each field [cm] ### Integrated dataset To facilitate their exploitation, the SAR and in situ datasets have been integrated into a single dataset, _BELSAR_fields_integrated_db.csv_. The integrated dataset includes zonal statistics of the radar measurements as well as the bio- and geophysical variables of soil and vegetation for each maize and winter wheat field. Each entry in the table corresponds to a given field and image in the SAR dataset. The codes used to generate the integrated dataset are provided in the same folder. ## Technical Validation ### SAR data calibration The SAR data were geometrically, radiometrically, and polarimetrically calibrated based on the response of the four corner reflectors deployed in track Zs. Figure 7 shows the polarimetric RGB composite image in the vicinity of the corner reflectors. Their responses show that the resolution of the images are within 1 m resolution in azimuth and 4 m in slant range, as intended, and that the corners are correctly geolocated, with an accuracy of 0.75 m. Their polarimetric signatures and impulse response function also indicated a good isolation between the polarization channels at antenna level. 
Cross-talk was therefore considered negligible, as also attested by their appearance as yellow spots on the RGB composite, i.e., only HH and VV response with no significant imbalances. As for the radiometric calibration of the bistatic data in the ATI flight configuration, however, given that the corner reflectors were not visible on the images, their radiometric offset was determined using monostatic ATI and bistatic XTI data. This may have led to an imbalance between bistatic and monostatic data, or from one flight to the next, in the bistatic ATI data. A complete quality assessment can be found in the BELSAR-Campaign project final report[10]. ### Interferometric SAR The potential of the airborne data for interferometric SAR (InSAR) was affected by very low interferometric coherence. Strong temporal coherence losses were observed in double-pass monostatic pairs, i.e., between the different flight missions, together with a significant geometric decorrelation due to large orthogonal baselines as compared to the critical baseline. Coherence was also low for single-pass bistatic interferometric pairs, despite the short perpendicular baselines. The short signal bandwidth, restricted to 50 MHz by the BIPT, has certainly played a significant role on geometric decorrelation. This effect was strengthened by the relatively low flight altitude. Furthermore, in addition to temporal and geometric decorrelation, coherence might have been affected, in bistatic configuration, by a lack of synchronization resulting from using two different clocks for the transmitter and receiver systems, compared with a monostatic configuration where the unique transmitter-receiver system operates with absolute time and phase references. The consequences of mis-synchronization errors are positioning and phase errors in the output SLC images, giving rise to additional azimuthal fringes in the interferogram with a detrimental effect on coherence. A solution to this last issue would be to implement a synchronization algorithm that minimizes the impact of residual plane motion and mitigates the fringe mis-synchronization by a phase calibration. A multisquint based correction algorithm is proposed by de Macedo _et al.[22]_ to solve this. ### Vegetation and soil measurements #### Vegetation The vegetation measurements were performed in accordance with the guidelines laid down by JECAM[23]. Violin plots of in situ measured vegetation biophysical variables, i.e., phenological development stage (BBCH), plant height, GAI, FCover, wet biomass, and dry matter content, are shown in Figure 9. Maize was observed between BBCH stages 15 and 89, i.e. from the leaf development stage (five leaves) to the fully ripe stage, and winter wheat between stages 59 and 76, i.e. from the end of the heading to medium milk stage. Few data, in a narrow range of values, are available on winter wheat because the campaign started late in an unusually warm and dry growing season, leading to an early maturation and harvest. The mean plant height observed in the different fields ranges from 0.42 m (with a standard deviation of 0.08 m) to 2.89 m (0.09 m standard deviation) in maize. These values are in accordance with maize plant height measurements over a loamy test site in Belgium[24]. For winter wheat plant height ranges from 0.77 m (0.09 m standard deviation) to 0.78 m (0.09 m standard deviation). 
The mean FCover and GAI over all maize fields range respectively from 0.14 m\({}^{2}\)m\({}^{-2}\) (0.71 m\({}^{2}\)m\({}^{-2}\) standard deviation) on F1 to 0.55 m\({}^{2}\)m\({}^{-2}\) (0.30 m\({}^{2}\)m\({}^{-2}\) standard deviation) on F4 with a maximum at 0.65 m\({}^{2}\)m\({}^{-2}\) (0.10 m\({}^{2}\)m\({}^{-2}\) standard deviation) on F3 and from 0.36 m\({}^{2}\)m\({}^{-2}\) (0.14 m\({}^{2}\)m\({}^{-2}\) standard deviation) on F1 to 3.0 m\({}^{2}\)m\({}^{-2}\) (0.24 m\({}^{2}\)m\({}^{-2}\) standard deviation) on F4, the latter also being the maximum mean GAI value. In winter wheat, both ranges are very limited, from 0.78 m\({}^{2}\)m\({}^{-2}\) (0.06 m\({}^{2}\)m\({}^{-2}\) standard deviation) to 0.79 m\({}^{2}\)m\({}^{-2}\) (0.04 m\({}^{2}\)m\({}^{-2}\) standard deviation) for the FCover and from 4.41 m\({}^{2}\)m\({}^{-2}\) (0.75 m\({}^{2}\)m\({}^{-2}\) standard deviation) to 4.49 m\({}^{2}\)m\({}^{-2}\) (0.42 m\({}^{2}\)m\({}^{-2}\) standard deviation) for the GAI. The small range of values for winter wheat were expected given the late start of the campaign, which implies that winter wheat was already close to maturation on F1. These GAI values for winter wheat at maturation stage, are in line with field-observed GAI over a test site in Wallonia (southern part of Belgium)[25]. The mean of dry matter content over all fields ranges from 11.18 % (1.68 % standard deviation) to 40.57 % (2.18 % standard deviation) in maize and from 27.01 % (1.96 % standard deviation) to 32.42 % (6.51 % standard deviation) in winter wheat. The maize was not let to dry further because it was intended for silage. The mean wet biomass over all maize fields ranges from 0.24 kg\(/\)m\({}^{2}\) (0.14 kg\(/\)m\({}^{-2}\) standard deviation) on F1 to a maximum mean of 9.01 kg\(/\)m\({}^{-2}\) (0.56 kg\(/\)m\({}^{-2}\) standard deviation) on F4, in line with values found in the study of Blaes _et al.[24]_ over a maize field in a loamy test site. For winter wheat fields, the mean wheat biomass ranges from 4.53 kgm\({}^{-2}\) (0.77 kgm\({}^{-2}\) standard deviation) on F1 to a maximum mean of 4.61 kgm\({}^{-2}\) (0.72 kgm\({}^{-2}\) standard deviation) on F2. ### Soil Figure 10 shows violin plots for the soil geophysical variables, i.e., soil moisture, bulk density, correlation length across, correlation length along, root-mean-square height across, and root-mean-square height along, for maize and winter wheat fields. Bulk density was measured in all winter wheat and maize fields during the first flight campaign (F1), with a mean bulk density of 1.27 g cm\({}^{-3}\) (with an average within field standard deviation of 0.079 g cm\({}^{-3}\)) for the winter wheat fields and 1.24 g cm\({}^{-3}\) (with an average within field standard deviation of 0.065 g cm\({}^{-3}\)) for the maize fields. Similar bulk density values have been reported in the study of van der Bolt _et al._[26] where agricultural fields in Flanders (Belgium) were sampled, including sites with a texture of silt loan. Due to tillage operations that took place between F3 and F4 on the winter wheat fields (except W7), additional soil samples were taken during the field campaign coincident with F4 to determine bulk density values after tillage operations. On F5, additional samples were taken in winter wheat fields W2 and W7 and in maize fields M1, M3 and M4. The plots show that bulk density generally decreases after tillage operations. 
The field average volumetric soil moisture values range from 3.03 to 18.94 vol% for winter wheat fields and from 3.65 to 18.76 vol% for maize fields. These values are within the range of soil moisture values reported in the study of Choker _et al._[27], where in situ soil moisture measurements over numerous agricultural plots in Europe (mainly France, Belgium, and Italy) were acquired. Note that 2018 was marked by an exceptional dry summer in Belgium, which is depicted in the low soil moisture values, especially on F3. The range of within field standard deviations for winter wheat fields is 0.68 to 4.60 vol% and for maize 1.12 to 5.36 vol%. The field average root-mean-square heights (correlation length) was measured along and across the direction of tillage. Roughness along the direction of tillage is comparable between winter wheat and maize fields, with a range of respectively 0.41 to 1.46 cm (1.28 to 5.34 cm) for winter wheat and 0.32 to 1.23 cm (1.42 to 4.30 cm) for maize. Across the direction of tillage, the difference is slightly larger, with a range of 0.55 to 2.63 cm (1.46 to 9.69 cm) for winter wheat and 0.80 to 1.55 cm (3.16 to 9.07 cm) for maize. Especially during crop growth stages, the root-mean-square height across the direction of tillage is substantially higher for maize compared to winter wheat, with an average value of 1.23 cm (6.25 cm) for maize compared to 0.79 cm (4.38 cm) for winter wheat. Verhoest _et al._[28] summarized possible sources of errors that affect roughness measurements, of which the limited length of the pin profilometer and resolution in both vertical and horizontal directions are the main disadvantages, especially for the estimation of \(l\). Roughness measures found in this study are comparable to the ones estimated by Davidson _et al._[20] for agricultural sites over Europe. A mean \(s\) of 0.6 and 1.6 cm (with a standard deviation of 0.3 and 0.7 cm) was estimated for seedbed and narrowed field conditions respectively, which is in line with the BELSAR study. In terms of correlation length, mean values of 3.7 and 3.8 cm (with a standard deviation of 2.6 and 2.9 cm) for respectively seedbed and narrowed field conditions were found [20], which is in the range of values found here for both maize and winter wheat fields after harvest. The correlation length for maize fields before harvest was substantially larger, which can be explained by the rough seedbed pattern for maize. ### Usage Notes The integrated dataset is directly accessible on figshare, while the BELSAR-Campaign data are available online via FTP upon submission of a data access request to ESA's Earth Online service. With regard to the exploitation of these data, it is advisable to use the integrated dataset instead of the original data as it contains in one table both the zonal statistics of the SAR data and the corresponding vegetation and soil variables for the maize and winter wheat fields. As such, it provides an analysis-ready dataset, thereby greatly facilitating the handling of the BELSAR data for, among others, the development of agricultural or hydrological applications as well as for easy comparison with other comparable datasets. Note that for certain purposes, it is recommended to apply a negative buffer to the polygons to avoid edge-of-field effects on the radar signal. In this case, the polygons delineating the fields would have to be redrawn and the integrated dataset rebuilt. 
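As a starting point for such a workflow, the following is a minimal sketch of how the integrated dataset might be loaded and how a negative buffer could be applied to field polygons before re-extracting zonal statistics. It assumes the pandas and geopandas libraries; the file name BELSAR_fields_integrated_db.csv comes from the Data Records section, whereas the GeoJSON file with the field polygons and its layout are hypothetical placeholders for illustration only.

```python
import pandas as pd
import geopandas as gpd

# Load the analysis-ready integrated dataset (SAR zonal statistics + in situ variables).
integrated = pd.read_csv("BELSAR_fields_integrated_db.csv")
print(integrated.head())

# Hypothetical vector file containing the field polygons (not part of the BELSAR archive).
fields = gpd.read_file("belsar_field_polygons.geojson")

# Re-project to a metric CRS so the buffer distance is in metres, then shrink each
# polygon by 10 m to limit edge-of-field effects on the radar signal.
fields_utm = fields.to_crs(epsg=32631)           # UTM zone 31N covers central Belgium
fields_utm["geometry"] = fields_utm.buffer(-10)  # negative buffer of 10 m

# Drop any parcels that vanish after buffering (very small or narrow fields).
fields_utm = fields_utm[~fields_utm.geometry.is_empty]

# The buffered polygons could then be fed back into the zonal-statistics step
# (e.g. extract_mini_rasters.py) to rebuild the integrated dataset.
fields_utm.to_file("belsar_field_polygons_buffered.geojson", driver="GeoJSON")
```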
### Code availability The codes used to produce the integrated dataset from SAR, vegetation and soil data have been uploaded to figshare along with it. These contain a number of tools that can be easily adapted and reused to use the BELSAR data for other purposes. To rebuild the integrated dataset from the BELSAR-Campaign data, the contents of the ESA repository must first be accessed. Then, once downloaded, running _extract_mini_rasters.py_ will extract zonal statistics from the SAR data and _integrated_dataset.py_ will match these extracted data to the corresponding in situ vegetation and soil measurements and generate the integrated dataset. For test purposes, these two scripts can be run for one or a series of images by adding their indices, from 0 to 319, as arguments to the python command, e.g., python extract_mini_rasters.py 1 for the second image and python extract_mini_rasters.py 0 2 for the first three images.
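Complementing these scripts, the sketch below illustrates how the surface-roughness statistics of Eqs. (1)-(3), the RMS height \(s\) and the correlation length \(l\), might be computed from a single digitized profilometer profile. It is an illustration written for this description rather than part of the BELSAR codebase; the 1 cm pin spacing follows the Methods section, and the profile is assumed to be detrended heights in centimetres.

```python
import numpy as np

def roughness_stats(z, spacing_cm=1.0):
    """Return (s, l) in cm from a detrended height profile z (cm),
    following Eqs. (1)-(3) of the Methods section."""
    z = np.asarray(z, dtype=float)
    n = len(z)

    # Eq. (1): RMS height.
    s = np.sqrt(np.sum(z ** 2) / (n - 1))

    # Eq. (2): normalized autocorrelation for lags of k = 1, 2, ... points.
    denom = np.sum(z ** 2)
    lags = np.arange(1, n)
    acf = np.array([np.sum(z[: n - k] * z[k:]) / denom for k in lags])

    # Eq. (3): interpolate linearly between the two lags that bracket A = 1/e.
    target = np.exp(-1.0)
    below = np.where(acf <= target)[0]
    if len(below) == 0:
        return s, None                      # profile too short to estimate l
    j = below[0]
    if j == 0:
        tau1, a1 = 0.0, 1.0                 # A(0) = 1 by definition
    else:
        tau1, a1 = lags[j - 1] * spacing_cm, acf[j - 1]
    tau2, a2 = lags[j] * spacing_cm, acf[j]
    l = tau1 + (target - a1) * (tau2 - tau1) / (a2 - a1)
    return s, l
```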
2305.20045
ActiveAED: A Human in the Loop Improves Annotation Error Detection
Manually annotated datasets are crucial for training and evaluating Natural Language Processing models. However, recent work has discovered that even widely-used benchmark datasets contain a substantial number of erroneous annotations. This problem has been addressed with Annotation Error Detection (AED) models, which can flag such errors for human re-annotation. However, even though many of these AED methods assume a final curation step in which a human annotator decides whether the annotation is erroneous, they have been developed as static models without any human-in-the-loop component. In this work, we propose ActiveAED, an AED method that can detect errors more accurately by repeatedly querying a human for error corrections in its prediction loop. We evaluate ActiveAED on eight datasets spanning five different tasks and find that it leads to improvements over the state of the art on seven of them, with gains of up to six percentage points in average precision.
Leon Weber, Barbara Plank
2023-05-31T17:18:47Z
http://arxiv.org/abs/2305.20045v1
# ActiveAED: A Human in the Loop Improves Annotation Error Detection

###### Abstract

Manually annotated datasets are crucial for training and evaluating Natural Language Processing models. However, recent work has discovered that even widely-used benchmark datasets contain a substantial number of erroneous annotations. This problem has been addressed with Annotation Error Detection (AED) models, which can flag such errors for human re-annotation. However, even though many of these AED methods assume a final curation step in which a human annotator decides whether the annotation is erroneous, they have been developed as static models without any human-in-the-loop component. In this work, we propose ActiveAED, an AED method that can detect errors more accurately by repeatedly querying a human for error corrections in its prediction loop. We evaluate ActiveAED on eight datasets spanning five different tasks and find that it leads to improvements over the state of the art on seven of them, with gains of up to six percentage points in average precision.

## 1 Introduction

Correct labels are crucial for model training and evaluation. Wrongly labelled instances in the training data hamper model performance Larson et al. (2020); Vlachos (2006), whereas errors in the test data can lead to wrong estimates of model performance Alt et al. (2020); Larson et al. (2020); Reiss et al. (2020). This is a problem in practice, as even widely used benchmark datasets can contain a non-negligible number of erroneous annotations Alt et al. (2020); Northcutt et al. (2021); Reiss et al. (2020). Researchers have developed a multitude of annotation error detection (AED) methods to detect such labelling errors, as recently surveyed by Klie et al. (2022). After detection, there are multiple ways to deal with the found annotation errors. When it comes to training data, a reasonable strategy is to simply remove the instances flagged by an AED model Huang et al. (2019). For evaluation data, however, this is not viable, because in many cases this would remove a significant fraction of hard but correctly labelled instances in addition to the errors Swayamdipta et al. (2020), which would lead to an overestimation of model performance. Instead, researchers resorted to manual correction of the labels flagged by the AED method Alt et al. (2020); Reiss et al. (2020); Northcutt et al. (2021); Larson et al. (2020). Strikingly, even though this manual correction requires human input, the typical workflow is to first apply the AED method once and afterwards correct the flagged errors, without using the human feedback in the AED step. We hypothesize that connecting the human input and the AED prediction in a human-in-the-loop setup could increase the accuracy of the AED method without increasing the total amount of human intervention. To support this hypothesis, we propose ActiveAED, an AED method which includes human feedback in the annotation loop; see Figure 1 for an illustration. We base ActiveAED on the Area-under-the-Margin metric (AUM) Pleiss et al. (2020), which was recently proposed to detect annotation errors in computer vision datasets. As an additional contribution, we propose a novel ensembling scheme to improve AUM's performance. In experiments on eight datasets spanning five different tasks, we show that ActiveAED improves over three baselines that performed well in a recent evaluation Klie et al. (2022).

Figure 1: Prediction loop of ActiveAED
On seven datasets, we observe improvements, with gains of up to six percentage points (pp) in average precision. Our ablation study shows that both the human-in-the-loop component and the ensembling scheme contribute to the improvements. We make code and data available under [https://github.com/mainlp/ActiveAED](https://github.com/mainlp/ActiveAED). ## 2 Related Work AED for Natural Language Processing (NLP) datasets has a long tradition which has recently been comprehensively evaluated and surveyed by the seminal work of Klie et al. (2022). We base our evaluation setup on theirs. Existing AED methods can be divided into six different categories Klie et al. (2022): variation-based Dickinson and Meurers (2003); Larson et al. (2020), model-based Amiri et al. (2018); Yaghoub-Zadeh-Fard et al. (2019); Chong et al. (2022), training-dynamics-based Swayamdipta et al. (2020); Pleiss et al. (2020); Siddiqui et al. (2022), vector-space-proximity-based Larson et al. (2019); Grivas et al. (2020), ensembling-based Alt et al. (2020); Varshney et al. (2022) and rule-based Kveton and Oliva (2002). To the best of our knowledge, none of these AED methods has been developed or evaluated with a human in the loop, except for Vlachos (2006) who uses AED as part a larger framework for constructing a silver-standard dataset. Accordingly, they do not compare the performance of the AED component to competing approaches and they consider only a single dataset and task. Additionally, one can distinguish between flaggers and scorers for AED Klie et al. (2022). Flaggers output hard decisions of whether an instance contains an error, whereas scorers assign to each instance a score reflecting the likelihood of being an error. In this work, we focus on scoring methods, because ActiveAED requires error scores to rank the instances. ## 3 Active Annotation Error Detection We propose ActiveAED, an AED method which uses the error corrections issued by an annotator in its prediction loop. The basic procedure of ActiveAED is this: In the first step, it uses a ranking-based AED method to find the \(k\) most likely annotation errors across the dataset. In the second step, the presumed annotation errors are forwarded to an annotator who checks them and corrects the labels if necessary. After this, the dataset is updated with the corrections issued by the annotator and the procedure continues with the first step. This loop continues until a stopping condition is met, e.g. that the fraction of errors in the batch drops to a user-defined threshold. See Figure 1 for an illustration of the process. We consider a scenario where an annotator wants to correct annotation errors in a dataset with a given annotation budget of \(n\) instances. There are two options of how to apply an annotation error detection (AED) method to support this. The first is the state-of-the-art and the second one is our proposed approach: (1) Run the AED method once on the dataset to retrieve a list of instances ranked by their probability of containing an annotation error. Then, spend the annotation budget by correcting the top-\(n\) instances. (2) Run the AED method and spend some of the annotation budget by correcting the top-\(k\) instances with \(k\propto n\). Then, run the AED method again on the now partially corrected dataset and repeat until the annotation budget is exhausted. 
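The following is a minimal Python sketch of option (2), the human-in-the-loop procedure that ActiveAED implements. The scoring function is left abstract (ActiveAED uses the ensembled AUM score defined below), the function and variable names are ours, and the annotator is simulated here with gold labels for illustration, as is also done later in the evaluation protocol.

```python
def active_aed_loop(labels, gold_labels, score_fn, budget, k=50):
    """Human-in-the-loop AED: alternately score, correct the top-k, and re-score.

    labels      -- list of (possibly noisy) labels, modified in place
    gold_labels -- stands in for the human annotator in this sketch
    score_fn    -- maps the current labels to one error score per instance
    budget      -- total number of instances the annotator will inspect
    """
    corrected = set()
    flagged = []
    while len(flagged) < budget:
        scores = score_fn(labels)   # e.g. ensembled negative AUM (Eqs. 1-3 below)
        # Rank the not-yet-inspected instances by their error score.
        ranking = sorted(
            (i for i in range(len(labels)) if i not in corrected),
            key=lambda i: scores[i],
            reverse=True,
        )
        batch = ranking[: min(k, budget - len(flagged))]
        if not batch:
            break                   # everything has been inspected
        for i in batch:
            # The "annotator" inspects instance i and fixes the label if needed.
            labels[i] = gold_labels[i]
            corrected.add(i)
        flagged.extend(batch)
    return flagged, labels
```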
Note that both approaches involve ranking instances based on their probability of containing annotation errors, and selection of a subset of instances for annotation based on this ranking. As a result, the outputs of both approaches can be fairly compared, because they use the same annotation budget and the same ranking-based score. More formally, we assume a dataset with inputs \(X\), (potentially erroneous) labels \(y\), and true labels \(y^{*}\) which are initially unknown to us. After training the model for \(E\) epochs, we use (negative) AUM to assign error scores: \[s_{i}=\frac{1}{E}\sum_{e=1}^{E}\max_{y^{\prime}\neq y_{i}}p_{\theta_{e}}(y^{ \prime}|x_{i})-p_{\theta_{e}}(y_{i}|x_{i}), \tag{1}\] where \(p_{\theta_{e}}(y_{i}|x_{i})\) is the probability of the label assigned to \(x_{i}\) as estimated by \(\theta_{e}\) and \(\max_{y^{\prime}\neq y_{i}}p_{\theta_{e}}(y^{\prime}|x_{i})\) the probability of the highest scoring label that is not the assigned one. Intuitively, correctly labelled instances on average obtain smaller (negative) AUM scores (Eq. 1) than incorrect ones, because the model will confidently predict their correct label earlier in the training. We chose AUM because it performed well in preliminary experiments on SI-Flights Larson et al. (2020) and ATIS Hemphill et al. (1990). Note that this formulation differs from the original one in Pleiss et al. (2020), which uses raw logits instead of probabilities. We chose to use probabilities because this performed better in our experiments (see Table 1). We extend AUM with a novel ensembling scheme based on training dynamics. For this, we train a model for \(E\) epochs in a \(C\)-fold cross-validation setup. For each fold \(c\in\{1,...,C\}\) and epoch \(e\in\{1,...,E\}\), we obtain a model \(\theta_{c,e}\). We use the models of one fold \(c\) to assign an error score \(s_{c,i}\) to each instance with AUM (Eq. 1). For each fold, we calculate the AUM score both on the train and on the test portion of the fold, which yields \(C-1\) training-based scores and one test-based score for each instance. For each instance, we first average the training-based scores and then compute the mean of this average and the test-based score, which results in the final score \(s_{i}\): \[s_{i}^{\textit{train}} =\frac{1}{C-1}\sum_{c\in\textit{train}_{i}}s_{c,i} \tag{2}\] \[s_{i} =\frac{1}{2}(s_{i}^{\textit{train}}+s_{i}^{\textit{test}}), \tag{3}\] where \(\textit{train}_{i}\) is the set of \(C-1\) folds in which instance \(i\) appears in the training portion. Then, we rank all uncorrected instances by \(s_{i}\) and route the \(k\) highest scoring ones to the annotator, who manually corrects their label by setting \(y_{i}:=y_{i}^{*}\). Finally, the procedure continues with the partially corrected dataset until a stopping condition is met. There are two kinds of motivation for the proposed ensembling scheme: \(s^{\textit{train}}\) should improve the calibration of the model Ovadia et al. (2019), which Klie et al. (2022) show to be helpful for AED. \(s^{\textit{test}}\) derives from the observation that model-based AED methods benefit from computing statistics over unseen data Klie et al. (2022).

## 4 Evaluation Protocol

### Datasets & Evaluation Setting

We evaluate ActiveAED on eight datasets following the choice of datasets used by Klie et al. (2022):1 Footnote 1: From this list, we exclude Plank et al. (2014) because it contains only annotation ambiguities and not corrected errors which are required for our evaluation setting.
* The intent classification part of **ATIS**Hemphhill et al. (1990), for which we randomly perturb labels. * The sentiment analysis dataset **IMdb**Maas et al. (2011), for which Northcutt et al. (2021) provide semi-automatically detected annotation errors. * The sentiment analysis dataset **SST**Socher et al. (2013) with randomly perturbed labels. * The UPOS annotations2 from the Georgetown University Multilayer Corpus (**GUM**; Zeldes (2017)) with randomly perturbed labels. Footnote 2: [https://github.com/UniversalDependencies/UD_English-GUM](https://github.com/UniversalDependencies/UD_English-GUM) * The **CoNLL-2003** Named Entity Recognition data Tjong Kim Sang and De Meulder (2003), for which Reiss et al. (2020) provide a version with corrected annotations. * The slot three filling datasets **SI Companies**, **SI Flights**, and **SI Forex**Larson et al. (2020) that contain manually corrected slot labels. We provide Hugging Face datasets implementations and detailed statistics for all datasets; see Appendix A. Our evaluation setup for the sequence labelling datasets (GUM, CoNLL-2003, SI Companies, SI Flights, and SI Forex) differs from that proposed by Klie et al. (2022). We opt for a sequence-level setting because it is closer to our envisioned application scenario, as it makes more sense for an annotator to correct the entire sequence of annotations instead of a single one at a time. Specifically, we define errors on the sequence level, i.e. if at least one token annotation differs from the gold annotation, the sequence is treated as an error both during ActiveAED prediction and for evaluation. During prediction, ActiveAED aggregates token-level error scores by calculating the maximum over all tokens in the sequence. For the other parts of the evaluation setup we follow Klie et al. (2022).3 Footnote 3: Note, that our results are not comparable with the numbers for the state-of-the-art reported by Klie et al. (2022), because of the different treatment of sequence-labelling datasets. Additionally, for ATIS and SST the choice of randomly perturbed labels differs (but the fraction is the same) and for IMDb the dataset statistics reported by Klie et al. (2022) are different from those of the original dataset Northcutt et al. (2021), which we use. In all datasets in which we perturbed labels, we resample the label uniformly for 5% of all annotations. We use average precision (AP) as our evaluation metric, which we compute with scikit-learn v1.1.3 Pedregosa et al. (2011). To be consistent with ActiveAED's application scenario, we cannot use the standard train/dev/test split practice from supervised learning, because we will not have access to any known errors which we could use for development when we apply ActiveAED to a new dataset. Thus, we select the two datasets ATIS and SI-Flights as development datasets on which we devise our method, and reserve the remaining datasets for the final evaluation. We report the average and standard deviation across three random seeds. We follow the standard practice in active learning research and simulate the annotator by using gold-standard corrections (Settles, 2012; Zhang et al., 2022). Note, that here, we simulate a single annotator without accounting for inter- and intra-annotator variation (Jiang and de Marneffe, 2022; Plank, 2022). 
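As a small illustration of the evaluation metric described above, error scores can be compared against the binary gold error indicators with scikit-learn's average precision. This is a generic usage sketch with made-up scores, not the paper's evaluation code.

```python
from sklearn.metrics import average_precision_score

# 1 = the instance's original label was erroneous, 0 = it was correct.
is_error = [0, 1, 0, 0, 1, 0, 1, 0]
# Higher score = the AED method believes the instance is more likely an error.
error_scores = [0.1, 0.9, 0.3, 0.2, 0.4, 0.05, 0.8, 0.15]

ap = average_precision_score(is_error, error_scores)
print(f"AP = {ap:.3f}")
```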
We set \(k=50\) (an ablation for \(k\) can be found in Section 5), because this is small enough so that an annotator can handle it in a single annotation session but large enough that gains can be observed after a single iteration on SI Flights. We stop the prediction loop after 40 iterations or when the whole dataset has been annotated. We perform 10-fold cross-validation in all experiments. We describe the remaining hyperparameters in Appendix B.

### Baselines

As baselines, we choose the top-performing scorer methods recommended by Klie et al. (2022):

* (Negative) Area-under-the-margin (**AUM**) (Pleiss et al., 2020): \(s_{i}^{\text{AUM}}=\frac{1}{E}\sum_{e=1}^{E}\max_{y^{\prime}\neq y_{i}}p_{\theta_{e}}(y^{\prime}|x_{i})-p_{\theta_{e}}(y_{i}|x_{i})\)
* (Negative) Data Map Confidence (**DM**) (Swayamdipta et al., 2020): \(s_{i}^{\text{DM}}=-\frac{1}{E}\sum_{e=1}^{E}p_{\theta^{(e)}}(y_{i}|x_{i})\)
* Classification Uncertainty (**CU**) (Klie et al., 2022): \(s_{i}^{\text{CU}}=-p_{\theta^{*}}(y_{i}|x_{i})\),

where AUM and DM are both computed over a single training run and CU is computed with cross-validation over the test portions, using the model \(\theta^{*}\) achieving the lowest test loss for the given fold.

## 5 Results

The results of our evaluation can be found in Table 1. ActiveAED outperforms the three baselines on seven of the eight datasets, with gains ranging from 0.6 to 6 pp AP. We observe a large variance of the AP scores across different datasets, which is consistent with the findings of Klie et al. (2022). We suspect that the relatively low scores on IMDb and CoNLL-2003 are because the errors were manually annotated after automatic filtering and thus are limited by the recall of the filtering method. We disentangle the contribution of our proposed ensembling strategy from that of the human-in-the-loop component by ablating the human-in-the-loop (last row in Table 1). We find that on four of the eight datasets, the ensembling alone improves results, whereas on SI Companies, SI Flights, and SI Forex, the main driver for improvements is the human-in-the-loop component. Generally, the human-in-the-loop component improves over the non-active variant on seven out of eight datasets. A natural question that arises is whether the human-in-the-loop procedure of ActiveAED can also improve AED methods other than our modified version of AUM. To investigate this, we evaluate unmodified versions of (negative) AUM and DM on SI Flights and ATIS with our human-in-the-loop setup. We find that, for SI Flights, AUM/DM improves by 7.4/6.9 pp AP, whereas for ATIS, DM improves by 0.8 pp and AUM's result diminishes by 0.2 pp.
This suggests that a human in the loop might not be helpful for all combinations of datasets and methods, but that it has the potential to significantly improve results for methods other than ActiveAED.

| | ATIS | SI-Flights | IMDb | SST | GUM | CoNLL-2003 | SI-Companies | SI-Forex |
|---|---|---|---|---|---|---|---|---|
| CU | 91.7±1.4 | 80.9±0.5 | 31.6±1.3 | 42.7±1.0 | 98.8±0.1 | 25.2±0.6 | 96.1±0.2 | 84.2±2.0 |
| DM | 97.2±0.2 | 79.2±2.4 | 30.1±3.0 | 47.1±1.0 | 99.3±0.1 | 30.2±0.7 | 97.5±0.2 | 80.6±0.9 |
| AUM (p) | 98.0±0.1 | 78.9±2.3 | 30.1±3.0 | 47.1±1.0 | 99.0±0.1 | 30.2±0.7 | 97.3±0.3 | 81.1±0.9 |
| AUM (l) | 97.3±0.4 | 72.6±0.3 | 27.5±2.5 | 39.6±1.3 | **99.5±0.1** | 29.3±0.2 | 97.2±0.2 | 66.6±1.5 |
| ActiveAED | **98.6±0.1** | **86.6±0.5** | **36.6±0.1** | **53.0±0.2** | 98.5±0.0 | **33.3±0.2** | **99.3±0.0** | **89.7±0.6** |
| w/o active | 98.7±0.1 | 80.3±0.6 | 36.0±0.4 | 52.9±0.4 | 98.4±0.0 | 31.7±0.4 | 97.9±0.1 | 85.5±0.6 |

Table 1: Evaluation results. All scores are mean and standard deviation of AP for AED in percent over three random seeds. The best score per dataset (without ablation) is in bold. We used ATIS and SI-Flights as development data. The last row is ActiveAED without the human-in-the-loop component. AUM (l) is the original version of AUM proposed by Pleiss et al. (2020), whereas AUM (p) is our variant in which we aggregate probabilities instead of raw logits.

It is instructive to compare the precision-recall curves of ActiveAED to those of its non-active variant. The graphs for datasets SI Flights and CoNLL-2003 can be found in Figure 2. On both datasets, the precision gains are present in the mid-to-high recall regime (> 0.4), which intuitively makes sense, because ActiveAED requires a few rounds of human annotation to produce different outputs than its non-active variant. This suggests that one could increase the efficiency of ActiveAED by starting with a more lightweight AED method, e.g. one that does not require cross-validation or ensembling, and only later switching to the more compute-intensive ensembling of ActiveAED. We leave the investigation of this option for future work. We describe the ablation study of our proposed ensembling scheme and of different choices of \(k\) in Appendix C. There, we find that test ensembling is crucial, that train ensembling sometimes improves results, and that increasing \(k\) for the small SI-Flights dataset harms results. We provide example outputs of ActiveAED in Appendix E.

## 6 Conclusion

We have proposed ActiveAED, an AED method that includes human feedback in its prediction loop. While the proposed approach could be used with every ranking-based AED method, we base ActiveAED on the recently proposed AUM score, which we augment with a novel ensembling scheme based on training dynamics. We evaluate ActiveAED on eight datasets spanning five different tasks and find that it improves results on seven of them, with gains of up to six pp AP. In future work, we plan on extending ActiveAED to generative models and structured prediction tasks. Additionally, we want to use ActiveAED to clean benchmark datasets.
We also plan to investigate the reasons for the observed performance gains of ActiveAED, for instance by exploring the role of model capacity and dataset characteristics (Ethayarajh et al., 2022). Finally, we would like to study the interplay between ActiveAED and human label variation (Jiang and de Marneffe, 2022; Plank, 2022). ### Limitations A major limitation of ActiveAED is that it is significantly more compute-intensive than other scoring-based AED methods such as AUM or DM. This is inherent to the proposed method because the ensemble requires training of multiple models and, after receiving human feedback, the full ensemble has to be re-trained. Also, the ensembling of ActiveAED requires more training runs than training-dynamics-based AED methods. However, most model-based methods require a cross-validation scheme (Klie et al., 2022). The ensembling component of ActiveAED is more data-efficient than these approaches, because it makes use of the training dynamics captured during cross-validation instead of discarding them. A second limitation of this work is that while we chose baselines that performed strongly in Klie et al. (2022), they represent only a fraction of the scoring-based AED methods described in the literature. Finally, our evaluation is limited to a single language model and it would be interesting to investigate how ActiveAED interacts with larger language models than DistilRoBERTa. Figure 2: Comparison of the precision-recall curves of ActiveAED and its non-active ablation. The gains of ActiveAED are made in the mid-to-high recall regime for both datasets. Curves are mean and error bars are standard deviation across three random seeds. ## Ethics Statement Datasets with fewer annotation errors can improve model training and evaluation. While this generally seems desirable, it is subject to the same dual-use concerns as the NLP models that are improved with AED methods. Additionally, using ActiveAED instead of AUM or DM can make the AED results more accurate, but that comes at the expense of a higher runtime. This, in turn, leads to increased energy consumption and, depending on the source of the energy, more CO\({}_{2}\) released Strubell et al. (2019), which is highly problematic in the face of the climate crisis. ## Acknowledgements We thank the reviewers for their constructive feedback which helped to improve the paper. Many thanks to the members of MaiNLP and NLPNorth for their comments on the paper. This research is in parts supported by European Research Council (ERC) grant agreement No. 101043235.
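As an illustrative addendum (not part of the original paper), the three baseline scorers listed above (AUM, DM, CU) can be written compactly in a few lines; the array shapes — per-epoch class probabilities of shape `(E, N, C)`, gold labels `y` of shape `(N,)`, and `best_probs` holding the held-out predictions of the best cross-validation model — as well as the function names are assumptions of this sketch, not the authors' code.

```python
import numpy as np

def aum_scores(probs, y):
    """Negative area-under-the-margin (probability variant).
    probs: (E, N, C) per-epoch class probabilities; y: (N,) gold labels.
    Higher scores indicate instances more likely to be annotation errors."""
    E, N, _ = probs.shape
    true_p = probs[:, np.arange(N), y]                 # (E, N) gold-label probabilities
    masked = probs.copy()
    masked[:, np.arange(N), y] = -np.inf               # exclude the gold class
    other_p = masked.max(axis=2)                       # (E, N) largest competing probability
    return (other_p - true_p).mean(axis=0)             # negative margin, averaged over epochs

def dm_scores(probs, y):
    """Negative Data Map confidence: minus the mean gold-label probability."""
    _, N, _ = probs.shape
    return -probs[:, np.arange(N), y].mean(axis=0)

def cu_scores(best_probs, y):
    """Classification uncertainty from the best cross-validation model theta*.
    best_probs: (N, C) held-out probabilities predicted by theta*."""
    return -best_probs[np.arange(len(y)), y]
```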
2309.06431
Limit theorems for critical faces above the vanishing threshold
We investigate convergence of point processes associated with critical faces for a Čech filtration built over a homogeneous Poisson point process in the $d$-dimensional flat torus. The convergence of our point process is established in terms of the $\mathcal M_0$-topology, when the connecting radius of a Čech complex decays to $0$, so slowly that critical faces are even less likely to occur than those in the regime of threshold for homological connectivity. We also obtain a series of limit theorems for positive and negative critical faces, all of which are considerably analogous to those for critical faces.
Zifu Wei, Takashi Owada, D. Yogeshwaran
2023-09-12T17:53:24Z
http://arxiv.org/abs/2309.06431v2
# Limit theorems for critical faces above the vanishing threshold ###### Abstract. We investigate convergence of point processes associated with critical faces for a Cech filtration built over a homogeneous Poisson point process in the \(d\)-dimensional flat torus. The convergence of our point process is established in terms of the \(\mathcal{M}_{0}\)-topology, when the connecting radius of a Cech complex decays to \(0\), so slowly that critical faces are even less likely to occur than those in the regime of threshold for homological connectivity. We also obtain a series of limit theorems for positive and negative critical faces, all of which are considerably analogous to those for critical faces. Key words and phrases:\(\mathcal{M}_{0}\)-convergence, point process, Morse critical points, stochastic geometry 2020 Mathematics Subject Classification: Primary 60D05. Secondary 60G55, 55U10 TO's research was partially supported by the AFOSR grant FA9550-22-1-0238. DY was funded through CPDA of the Indian Statistical Institute and SERB-MATRICS: MTR-2020-000470. We now introduce some notation to outline our results. Since we are working with the torus, we do not consider sub-level sets for large levels and hence we focus only on \((C(\mathcal{P}_{n},r))_{r\in[0,R_{n}]}\), where \(R_{n}\) is a sequence of positive numbers tending to \(0\) slowly enough. Given such \(R_{n}\), we are interested in the stochastic features of the following point process induced by critical points and values: \[\eta_{k,n}:=\sum_{\mathcal{Y}\subset\mathcal{P}_{n},\,|\mathcal{Y}|=k+1}s_{k,n }(\mathcal{Y},\mathcal{P}_{n})\,\delta_{(c(\mathcal{Y}),\,n\omega_{d}\rho( \mathcal{Y})^{d}-a_{n})}, \tag{1.2}\] where \(\delta_{(x_{1},x_{2})}\) is the Dirac measure at \((x_{1},x_{2})\in\mathbb{T}^{d}\times(-\infty,\infty]\), and \(s_{k,n}(\mathcal{Y},\mathcal{P}_{n})\) is an indicator function, requiring that \(\mathcal{Y}\) forms a critical \(k\)-face, such that \(\rho(\mathcal{Y})\leq R_{n}\). Furthermore, \(\omega_{d}\) denotes volume of the \(d\)-dimensional unit ball, and \(a_{n}\) is a properly defined centering sequence. Closely related to the above point process is the statistics of total number of critical \(k\)-faces with large critical values. Given \(u\in\mathbb{R}\), let \((r_{n}(u))_{n\geq 1}\) be a sequence of positive numbers defined by \[r_{n}(u)=\Big{(}\frac{a_{n}+u}{n\omega_{d}}\Big{)}^{1/d},\quad n\geq 1. \tag{1.3}\] Fixing \(u=u_{0}\) and assuming \(r_{n}(u_{0})\to 0\), \(r_{n}(u_{0})/R_{n}\to 0\) as \(n\to\infty\), we also study the behavior of the statistics \[G_{k,n}:=\sum_{\mathcal{Y}\subset\mathcal{P}_{n},\,|\mathcal{Y}|=k+1}s_{k,n}( \mathcal{Y},\mathcal{P}_{n})\,\mathbb{1}\big{\{}\rho(\mathcal{Y})\geq r_{n}(u_ {0})\big{\}}, \tag{1.4}\] which counts the number of critical \(k\)-faces whose critical values are between \(r_{n}(u_{0})\) and \(R_{n}\). Note that \(G_{k,n}=\eta_{k,n}(\mathbb{T}^{d}\times[u_{0},\infty))\). In this paper, we put our focus on the dense regime i.e., \(nr_{n}(u)^{d}\to\infty\). Poisson process approximation for the process \(\eta_{k,n}\) in the regime \[a_{n}=\log n+(k-1)\log\log n+\mathsf{const}, \tag{1.5}\] as well as Poisson convergence for \(G_{k,n}\) in (1.4), were established in Theorem 6.1 of [7] and Theorem 8.1 of [3], respectively. These results were crucial to describing the behaviour of homology at the vanishing threshold. We place ourselves above the Poisson regime (1.5), or equivalently, above the vanishing threshold. 
More accurately, we are interested in the asymptotics of \(G_{k,n}\), in the case that \[r_{n}(u)\to 0,\quad r_{n}(u)\gg\Big{(}\frac{\log n+(k-1)\log\log n}{n\omega_{d}} \Big{)}^{1/d}.\] Equivalently, we consider the following centering term \((a_{n})\) in (1.2): \[a_{n}-\log n-(k-1)\log\log n\to\infty,\quad a_{n}=o(n),\quad n\to\infty. \tag{1.6}\] Under this assumption, the process \(\eta_{k,n}\) converges to the null measure (i.e., the measure that assigns zeros to all Borel measurable sets in \(\mathbb{T}^{d}\times(-\infty,\infty]\)), and \(G_{k,n}\) converges to a degenerate zero random variable ([8]). Our main results - Theorems 2.3 and 2.5 below - quantify the rate of convergence to the null measure and a zero random variable using appropriate notions. More specifically, we identify a sequence \(b_{k,n}:=na_{n}^{k-1}e^{-a_{n}}\to 0\) to show that \[\big{(}b_{k,n}^{-1}\mathbb{P}(\eta_{k,n}\in\cdot),\,n\geq 1\big{)} \tag{1.7}\] and \[\big{(}b_{k,n}^{-1}\mathbb{P}(G_{k,n}\in\cdot),\,n\geq 1\big{)} \tag{1.8}\] tend to non-degenerate limits as \(n\to\infty\), while identifying the limits themselves explicitly. Up to a constant (dependent on \(k\)), the limit in (1.7) is supported on singletons with a product density independent of \(k\). The topology underlying the convergence of (1.7) is called \(\mathcal{M}_{0}\)_-topology_, a useful notion for dealing with convergence of probability measures defined on a complete and separable metric space. In recent times, the notion of \(\mathcal{M}_{0}\)-topology has been used for the study of geometrically and/or topologically rare events ([17, 12, 18]), as well as regular variation of point processes and stochastic processes ([14, 15, 9, 21]). For further analyses, we recall that critical faces can be divided into positive and negative critical faces (see [3]). Loosely speaking, positive critical \(k\)-faces will create a (nontrivial) \(k\)-dimensional cycle in the \(k\)th homology group of \(C(\mathcal{X},r)\), while negative critical \(k\)-faces terminate a \((k-1)\)-dimensional cycle. According to Propositions 4.2 and 5.1 in [3], for each \(1\leq k\leq d-2\), the vanishing thresholds for critical \(k\)-faces, positive critical \(k\)-faces, and negative critical \((k+1)\)-faces all coincide with one another (with high probability). Extending this further, we can prove that even when the radius \((r_{n}(u))_{n\geq 1}\) satisfies condition (1.6), the functionals of positive critical \(k\)-faces and negative critical \((k+1)\)-faces exhibit asymptotic results similar to those for critical \(k\)-faces; see Corollary 2.6 for more details. The Poisson convergence results for positive and negative critical faces in the regime (1.5) were used to understand the vanishing threshold for homology in [3]. Similarly, with the equivalence between positive (resp. negative) critical faces and birth (resp. death) times in persistence diagrams, our results can be rephrased in terms of the birth and death times within the interval \([r_{n}(u),R_{n}]\). In other words, we can quantify the distribution of 'noisy barcodes' in the persistence diagram above the homological connectivity regime. 
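To build intuition for the regime (1.6) and the normalization \(b_{k,n}\), the following short numerical sketch (an illustration added here, not part of the original argument) evaluates \(r_{n}(u)\) from (1.3), the dense-regime quantity \(nr_{n}(u)^{d}=(a_{n}+u)/\omega_{d}\), and \(b_{k,n}=na_{n}^{k-1}e^{-a_{n}}\) for one admissible centering satisfying (1.6) (the same choice as in Example 2.4 below); the values \(d=2\), \(k=1\), \(u=0\) are illustrative assumptions only.

```python
import numpy as np

# Minimal numerical sketch: scalings behind (1.3), (1.6) and b_{k,n} = n a_n^{k-1} e^{-a_n}.
d, k = 2, 1                       # illustrative dimension and face index (assumptions)
u = 0.0                           # fixed level u
omega_d = np.pi                   # volume of the unit ball in R^2

for n in [1e4, 1e6, 1e8, 1e10]:
    log_n = np.log(n)
    a_n = log_n + (k - 1) * np.log(log_n) + np.log(np.log(log_n))   # centering above the vanishing threshold
    r_n = ((a_n + u) / (n * omega_d)) ** (1.0 / d)                  # radius from (1.3)
    b_kn = n * a_n ** (k - 1) * np.exp(-a_n)                        # normalizing sequence
    print(f"n={n:.0e}:  r_n={r_n:.3e}  n*r_n^d={n * r_n**d:.2f}  b_kn={b_kn:.3e}")
# r_n -> 0 while n*r_n^d -> infinity (dense regime), and b_kn -> 0 only logarithmically here.
```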
As a final remark, we point out that in addition to these studies on the vanishing homology, together with the related Poisson convergence, there have also been a number of attempts at deducing other types of limit theorems, including the central limit theorems ([4, 5, 22]) and the large deviation principle in [13], when the radius \((r_{n}(u))_{n\geq 1}\) belongs to the sparse regime (i.e., \(nr_{n}(u)^{d}\to 0\)) or critical regime (i.e., \(nr_{n}(u)^{d}\to c\in(0,\infty)\)). The remainder of the paper is structured as follows. Section 2 gives a precise setup for the processes \(\eta_{k,n}\) and \(G_{k,n}\). After that, the convergence results (1.7) and (1.8) will be formally described in Theorems 2.3 and 2.5. Corollary 2.6 states the convergence results for positive and negative critical faces. All the proofs are postponed to Section 3. The main machinery in our proof is given by Theorems 4.1 and 6.1 of [7]; this will help us to show that under the \(\mathcal{M}_{0}\)-topology, the process \(\eta_{k,n}\) can be approximated by some Poisson point process whose intensity measure tends to the null measure as \(n\to\infty\). We then rely upon the estimates in [3] to approximate the spatial distributions of positive and negative critical faces by those of critical faces. ## 2. Setup and main results To better understand critical faces, it is important to define a _Cech complex_, which is one of the most studied geometric complexes (see [11]). **Definition 2.1**.: Given a point set \(\mathcal{X}=\{x_{1},\ldots,x_{n}\}\subset\mathbb{T}^{d}\) and \(r>0\), the Cech complex \(\tilde{C}(\mathcal{X},r)\) is defined as follows: * The \(0\)-simplices are the points in \(\mathcal{X}\). * For each \(m\geq 1\), \(\{x_{i_{0}},\ldots,x_{i_{m}}\}\subset\mathcal{X}\) forms an \(m\)-simplex if \(\bigcap_{j=0}^{m}B_{r}(x_{i_{j}})\neq\emptyset\), where \(B_{r}(x):=\{y\in\mathbb{T}^{d}:\mathsf{dist}(x,y)\leq r\}\) is the closed ball of radius \(r>0\) centered at \(x\). Intrinsically, the Cech complex possesses inclusion property \(\tilde{C}(\mathcal{X},r)\subset\tilde{C}(\mathcal{X},r^{\prime})\) for all \(r\leq r^{\prime}\) and thus induces a _Cech filtration_\(\big{(}\tilde{C}(\mathcal{X},r)\big{)}_{r\geq 0}\). To analyze the homology of a Cech filtration, the authors in [4] employed an approach based on an extension of classical Morse theory (see [10]) to the min-type distance function \(d_{\mathcal{X}}\) in (1.1). Then, for each \(k\in\{1,\ldots,d\}\), the change in the \(k\)th homology group of a Cech filtration can be characterized by the _Morse critical point_ with index \(k\) of \(d_{\mathcal{X}}\). **Definition 2.2**.: A point \(c\in\mathbb{R}^{d}\) is said to be a (Morse) critical point of index \(k\) if there exists a subset \(\mathcal{Y}\subset\mathcal{X}\) of \(k+1\) points such that * the points in \(\mathcal{Y}\) are in general position, i.e., \(\mathcal{Y}\) spans a \(k\)-dimensional simplex in \(\mathbb{R}^{d}\), so that there is a unique \((k-1)\)-dimensional sphere containing \(\mathcal{Y}\). * \(d_{\mathcal{X}}(c)=\mathsf{dist}(c,y)\) for any \(y\in\mathcal{Y}\) and \(d_{\mathcal{X}}(c)<\min_{z\in\mathcal{X}\setminus\mathcal{Y}}\mathsf{dist}(c,z)\). * \(c\in\sigma(\mathcal{X})\), where \(\sigma(\mathcal{X})\) denotes an open geometric \(k\)-simplex in \(\mathbb{R}^{d}\) spanned by \(\mathcal{X}\). Whenever such \(\mathcal{Y}\) exists, we say that \(\mathcal{Y}\) forms a _critical \(k\)-face_ (or simplex) for which the critical point is given by \(c=c(\mathcal{Y})\). 
Moreover, denote by \(\rho(\mathcal{Y})\) its critical value, i.e., the radius of a ball spanned by \(\mathcal{Y}\) centered at \(c(\mathcal{Y})\). The \(0\)-dimensional critical points are \(\mathcal{X}\) itself and hence are not of interest to us. Recall that \(\mathcal{P}_{n}\) is a homogeneous Poisson point process with intensity \(n\) in the \(d\)-dimensional flat torus \(\mathbb{T}^{d}\). Let \((a_{n})_{n\geq 1}\) be a sequence satisfying (1.6), and \((r_{n}(u))_{n\geq 1}\) be defined as in (1.3). As mentioned in the Introduction, an extra caution is needed when the Cech filtration is defined on the torus \(\mathbb{T}^{d}\). For example, if the radius \(r\) is large enough, the intersection of balls on the torus is not contractible; hence, the Cech complex \(\tilde{C}(\mathcal{P}_{n},r)\) will not be homotopy equivalent to the union of balls \(\bigcup_{p\in\mathcal{P}_{n}}B_{r}(p)\) (see the Nerve Lemma in Theorem 10.7 of [1]). Moreover, when \(r\) is large, the notion of a critical point, as well as its critical value, is not always well-defined on the torus. To overcome this issue, we follow the convention of the previous studies in [3, 6], and focus only on a "bounded" filtration \((\tilde{C}(\mathcal{P}_{n},r))_{r\in[0,R_{n}]}\), where \(R_{n}\) satisfies \[R_{n}\to 0,\quad\frac{r_{n}(u)}{R_{n}}\to 0,\quad n\to\infty, \tag{2.1}\] for all \(u\in\mathbb{R}\). Note that (2.1) implies \(a_{n}/(nR_{n}^{d})\to 0\) as \(n\to\infty\). Let \(k\in\{1,\ldots,d-1\}\) be a positive integer. For a \((k+1)\)-point subset \(\mathcal{Y}\subset\mathcal{P}_{n}\), which is in general position, we define \[s_{k,n}(\mathcal{Y},\mathcal{P}_{n}):=\mathbb{1}\{\mathcal{Y}\ \text{forms a critical $k$-face}\}\times\mathbb{1}\{\rho(\mathcal{Y})\leq R_{n}\}, \tag{2.2}\] where \(\rho(\mathcal{Y})\) is the critical value defined after Definition 2.2. Next, for \(u\in\mathbb{R}\), we define \[g_{k,n}(\mathcal{Y},\mathcal{P}_{n};u) :=s_{k,n}(\mathcal{Y},\mathcal{P}_{n})\mathbb{1}\{\rho(\mathcal{ Y})\geq r_{n}(u)\}\] \[=\mathbb{1}\{\mathcal{Y}\ \text{forms a critical $k$-face}\}\times \mathbb{1}\big{\{}\rho(\mathcal{Y})\in[r_{n}(u),R_{n}]\big{\}}. \tag{2.3}\] Using (2.2), the point process of our interest can be formally defined as \[\eta_{k,n}:=\sum_{\mathcal{Y}\subset\mathcal{P}_{n},\,|\mathcal{Y}|=k+1}s_{k, n}(\mathcal{Y},\mathcal{P}_{n})\,\delta_{(c(\mathcal{Y}),\,n\omega_{d}\rho( \mathcal{Y})^{d}-a_{n})}. \tag{2.4}\] Notice that \(\omega_{d}\rho(\mathcal{Y})^{d}\) represents the volume of an open ball in \(\mathbb{R}^{d}\) with radius \(\rho(\mathcal{Y})\) centered at \(c(\mathcal{Y})\). The process (2.4) is viewed as a random element in the space \(M_{p}(\mathbb{Y})\) of point measures on \(\mathbb{Y}:=\mathbb{T}^{d}\times(-\infty,\infty]\). Defining \[b_{k,n}:=na_{n}^{k-1}e^{-a_{n}},\quad n\geq 1,\] we consider the sequence of probability measures \[\big{(}b_{k,n}^{-1}\mathbb{P}(\eta_{k,n}\in\cdot),\,n\geq 1\big{)}. \tag{2.5}\] As described in the Introduction, the convergence of (2.5) has to be treated under the \(\mathcal{M}_{0}\)-topology. The formal definition of \(\mathcal{M}_{0}\)-topology is given as follows. First, let \(B_{\emptyset,r}\) denote an open ball of radius \(r>0\) centered at the null measure \(\emptyset\) (in terms of the vague metric). Denote by \(\mathcal{M}_{0}=\mathcal{M}_{0}(M_{p}(\mathbb{Y}))\) the space of Borel measures on \(M_{p}(\mathbb{Y})\), the restriction of which to \(M_{p}(\mathbb{Y})\setminus B_{\emptyset,r}\) is finite for all \(r>0\). 
Moreover, define \(\mathcal{C}_{0}=\mathcal{C}_{0}(M_{p}(\mathbb{Y}))\) to be the space of continuous and bounded real-valued functions on \(M_{p}(\mathbb{Y})\) that vanish in the neighborhood of \(\emptyset\). Given \(\xi_{n},\xi\in\mathcal{M}_{0}\), we say that \(\xi_{n}\) converges to \(\xi\) in the \(\mathcal{M}_{0}\)-topology, denoted as \(\xi_{n}\to\xi\) in \(\mathcal{M}_{0}\), if it holds that \(\int_{\mathcal{M}_{p}(\mathbb{Y})}g(\mu)\xi_{n}(\mathrm{d}\mu)\to\int_{ \mathcal{M}_{p}(\mathbb{Y})}g(\mu)\xi(\mathrm{d}\mu)\) for all \(g\in\mathcal{C}_{0}\). One may refer to [14] for more detailed discussion on \(\mathcal{M}_{0}\)-topology. **Theorem 2.3**.: _Let \(1\leq k\leq d-1\). We have, as \(n\to\infty\),_ \[b_{k,n}^{-1}\mathbb{P}(\eta_{k,n}\in\cdot)\to\lambda_{k},\quad\text{in } \mathcal{M}_{0}, \tag{2.6}\] _where_ \[\lambda_{k}(\cdot)=D_{k}\int_{\mathbb{Y}}\mathbb{1}\{\delta_{(c,u)}\in\cdot\} \,e^{-u}\,\mathrm{d}c\,\mathrm{d}u, \tag{2.7}\] _with \(D_{k}\) being a positive constant defined specifically at (3.1)._ Observe that \(D_{k}^{-1}\lambda_{k}\) is a measure independent of \(k\) and concentrated on singletons. For technical reasons, we are forced to skip the case \(k=d\). **Example 2.4**.: We consider the centering term \[a_{n}=\log n+(k-1)\log\log n+\log\log\log n. \tag{2.8}\] It is then easy to calculate that \(b_{k,n}\sim(\log\log n)^{-1}\) as \(n\to\infty\), and by Theorem 2.3, \[(\log\log n)\mathbb{P}(\eta_{k,n}\in\cdot)\to\lambda_{k},\quad\text{in } \mathcal{M}_{0}. \tag{2.9}\] Note that \((a_{n})_{n\geq 1}\) in (2.8) satisfies condition (1.6), but its growth rate is close to the sequence (1.5) of the Poisson regime (the difference is at most of order \(\mathcal{O}(\log\log\log n)\)). As a consequence, the probability distribution of \(\eta_{k,n}\) decays only logarithmically. In contrast, if one takes \(a_{n}=2\log n\), then much fewer number of critical \(k\)-faces will be counted by the process \((\eta_{k,n})_{n\geq 1}\). It then follows that \(b_{k,n}\sim(2\log n)^{k-1}n^{-1}\), and \[(2\log n)^{-(k-1)}n\mathbb{P}(\eta_{k,n}\in\cdot)\to\lambda_{k},\quad\text{in }\mathcal{M}_{0}.\] In this case, the probability distribution of \(\eta_{k,n}\) decays much faster than (2.9). In parallel, we also study the asymptotics of the sequence \((b_{k,n}^{-1}\mathbb{P}\circ G_{k,n}^{-1})_{n\geq 1}\), where \(G_{k,n}\) is formally defined as follows. For a fixed \(u_{0}\in\mathbb{R}\), \[G_{k,n}:=G_{k,n}(u_{0})=\sum_{\mathcal{Y}\subset\mathcal{P}_{n},\,|\mathcal{Y }|=k+1}g_{k,n}(\mathcal{Y},\mathcal{P}_{n};u_{0}), \tag{2.10}\] which counts the number of critical points \(c(\mathcal{Y})\in\mathbb{T}^{d}\) with index \(k\), such that \(\rho(\mathcal{Y})\in[r_{n}(u_{0}),R_{n}]\). Let \(E:=(0,\infty]\) and \(M_{+}(E)\) be the space of Radon measures on \(E\). Define \(C_{K}^{+}(E)\) to be the collection of non-negative and continuous functions on \(E\) with compact support. Given \(\xi_{n},\xi\in M_{+}(E)\), we say that \(\xi_{n}\) converges vaguely to \(\xi\), denoted as \(\xi_{n}\stackrel{{ v}}{{\to}}\xi\) in \(M_{+}(E)\), if it holds that \(\int_{E}g(x)\xi_{n}(\mathrm{d}x)\to\int_{E}g(x)\xi(\mathrm{d}x)\) for all \(g\in C_{K}^{+}(E)\). **Theorem 2.5**.: _Under the assumptions of Theorem 2.3, as \(n\to\infty\),_ \[b_{k,n}^{-1}\mathbb{P}(G_{k,n}\in\cdot)\stackrel{{ v}}{{\to}}D_{ k}e^{-u_{0}}\delta_{1},\quad\text{in }M_{+}(E),\] _where \(u_{0}\) is a fixed real number as at (2.10) and \(D_{k}\) is a positive constant defined at (3.1). 
This implies that_ \[b_{k,n}^{-1}\mathbb{P}(G_{k,n}\geq 1)\to D_{k}e^{-u_{0}}.\] From the Morse-theoretic analyses on critical faces in [3], a critical \(k\)-face either generates a (nontrivial) \(k\)-dimensional cycle in the \(k\)th homology group of a Cech filtration or terminates a \((k-1)\)-dimensional cycle of the same complex. In the former case, we call such a critical \(k\)-face a _positive critical \(k\)-face_, while the latter one is called a _negative critical \(k\)-face_. For example, a negative critical \(1\)-face is nothing but an edge in the minimal spanning tree on \(\mathcal{P}_{n}\) with weights being the Euclidean distance. Now, we introduce a series of indicator functions analogous to (2.2) and (2.3): for a \((k+1)\)-point subset \(\mathcal{Y}\subset\mathcal{P}_{n}\), which is in general position, \[s_{k,n}^{+}(\mathcal{Y},\mathcal{P}_{n}) :=\mathbb{1}\{\mathcal{Y}\text{ forms a positive critical $k$-face}\}\times\mathbb{1}\{\rho(\mathcal{Y})\leq R_{n}\},\] \[s_{k,n}^{-}(\mathcal{Y},\mathcal{P}_{n}) :=\mathbb{1}\{\mathcal{Y}\text{ forms a negative critical $k$-face}\}\times\mathbb{1}\{\rho(\mathcal{Y})\leq R_{n}\},\] and, for \(u\in\mathbb{R}\), \[g_{k,n}^{\pm}(\mathcal{Y},\mathcal{P}_{n};u):=s_{k,n}^{\pm}(\mathcal{Y}, \mathcal{P}_{n})\mathbb{1}\{\rho(\mathcal{Y})\geq r_{n}(u)\}.\] Analogously to (2.4) and (2.10), we define the point processes, \[\eta_{k,n}^{\pm}:=\sum_{\mathcal{Y}\subset\mathcal{P}_{n},\,|\mathcal{Y}|=k+1 }s_{k,n}^{\pm}(\mathcal{Y},\mathcal{P}_{n})\,\delta_{(c(\mathcal{Y}),\,n\omega _{d}\rho(\mathcal{Y})^{d}-a_{n})}\in M_{p}(\mathbb{Y}), \tag{2.11}\] and for a fixed \(u_{0}\in\mathbb{R}\), \[G^{\pm}_{k,n}:=G^{\pm}_{k,n}(u_{0})=\sum_{\mathcal{Y}\subset\mathcal{P}_{n},\,| \mathcal{Y}|=k+1}g^{\pm}_{k,n}(\mathcal{Y},\mathcal{P}_{n};u_{0}).\] It was shown in Proposition 4.2 and 5.1 of [3] that if \((a_{n})\) satisfies condition (1.5), then \((G_{k,n})_{n\geq 1}\), \((G^{+}_{k,n})_{n\geq 1}\), and \((G^{-}_{k+1,n})_{n\geq 1}\) exhibit the same phase transition in terms of the \(k\)th homological connectivity. From this point of view, the results below claim that even under the assumption (1.6), \((G^{+}_{k,n})_{n\geq 1}\) and \((G^{-}_{k+1,n})_{n\geq 1}\) still satisfy the same limit theorem as Theorem 2.5. Moreover, the processes \((\eta^{+}_{k,n})_{n\geq 1}\) and \((\eta^{-}_{k+1,n})_{n\geq 1}\) also satisfy the same limit theorem as Theorem 2.3. For the latter process, however, we can deduce the corresponding limit result only for the process with a restricted state space, i.e., \[\eta^{(-,r)}_{k+1,n}:=\sum_{\mathcal{Y}\subset\mathcal{P}_{n},\,|\mathcal{Y}| =k+2}s^{-}_{k+1,n}(\mathcal{Y},\mathcal{P}_{n})\,\delta_{n\omega_{d}\rho( \mathcal{Y})^{d}-a_{n}}\in M_{p}\big{(}(-\infty,\infty]\big{)}, \tag{2.12}\] for which the locational coordinate \(c(\mathcal{Y})\) has been removed from (2.11). We conjecture that the same limit theorem holds even for the process \((\eta^{-}_{k+1,n})_{n\geq 1}\) as well. However, due to a technical complication occurring in the application of a series of results in [3], we have decided not to pursue this direction in the present work. **Corollary 2.6**.: _Recall \(\lambda_{k}\) as defined in (2.7). \((i)\) For \(1\leq k\leq d-1\), we have_ \[b^{-1}_{k,n}\mathbb{P}(\eta^{+}_{k,n}\in\cdot)\to\lambda_{k},\quad\text{in } \mathcal{M}_{0},\quad n\to\infty. 
\tag{2.13}\] \((ii)\) _For \(1\leq k\leq d-2\), we have_ \[b^{-1}_{k,n}\mathbb{P}(\eta^{(-,r)}_{k+1,n}\in\cdot)\to\lambda^{(r)}_{k}, \quad\text{in }\mathcal{M}_{0},\quad n\to\infty, \tag{2.14}\] _where \(\lambda^{(r)}_{k}(\cdot)=D_{k}\int_{\mathbb{R}}\mathbbm{1}\left\{\delta_{u} \in\cdot\right\}e^{-u}\,\mathrm{d}u,\) is the restricted version of \(\lambda_{k}\). \((iii)\) For \(1\leq k\leq d-1\), we have_ \[b^{-1}_{k,n}\mathbb{P}(G^{+}_{k,n}\in\cdot)\overset{v}{\to}D_{k}e^{-u_{0}} \delta_{1},\quad\text{in }M_{+}(E),\quad n\to\infty. \tag{2.15}\] \((iv)\) _For \(1\leq k\leq d-2\), we have_ \[b^{-1}_{k,n}\mathbb{P}(G^{-}_{k+1,n}\in\cdot)\overset{v}{\to}D_{k}e^{-u_{0}} \delta_{1},\quad\text{in }M_{+}(E),\quad n\to\infty. \tag{2.16}\] _In particular, (2.15) and (2.16) imply that as \(n\to\infty\),_ \[b^{-1}_{k,n}\mathbb{P}(G^{+}_{k,n}\geq 1)\to D_{k}e^{-u_{0}},\] \[b^{-1}_{k,n}\mathbb{P}(G^{-}_{k+1,n}\geq 1)\to D_{k}e^{-u_{0}}.\] The proof of the limit theorem for \((\eta^{+}_{k,n})_{n\geq 1}\) is based on establishing asymptotic equivalence with \((\eta_{k,n})_{n\geq 1}\) by using estimates in [3]. For the case of \(k=1\), we need estimates from [19]. From this we can deduce convergence of \((G^{+}_{k,n})_{n\geq 1}\) via the continuous mapping theorem. We then compare \((G^{+}_{k,n})_{n\geq 1}\) with \((G^{-}_{k+1,n})_{n\geq 1}\) again using estimates in [3]. Finally, by using the results for \((G^{-}_{k+1,n})_{n\geq 1}\) at many choices of \(u_{0}\)'s, we can prove the limit theorem for \((\eta^{(-,r)}_{k+1,n})_{n\geq 1}\). ## 3. Proofs For the proof of Theorem 2.3, we need to recall that the _Kantorovich-Rubinstein distance_ between the laws of point processes \(\eta_{1}\) and \(\eta_{2}\) on \(\mathbb{Y}\) is defined as \[d_{\mathsf{KR}}\big{(}\mathcal{L}(\eta_{1}),\mathcal{L}(\eta_{2})\big{)}:=\sup_ {h}\big{|}\mathbb{E}[h(\eta_{1})]-\mathbb{E}[h(\eta_{2})]\big{|},\] where \(\mathcal{L}(\eta_{i})\) denotes a probability law of \(\eta_{i}\), and the supremum is taken over all \(1\)-Lipschitz functions \(h:M_{p}(\mathbb{Y})\to\mathbb{R}\), with respect to the total variation distance on \(M_{p}(\mathbb{Y})\). Recall also that the _total variation distance_ between two measures \(\mu_{1}\) and \(\mu_{2}\) on \(\mathbb{Y}\) is defined as \[d_{\mathsf{TV}}(\mu_{1},\mu_{2}):=\sup_{A\subset\mathbb{Y}}\big{|}\mu_{1}(A)- \mu_{2}(A)\big{|}.\] In the previous section, we defined the notion of a critical point \(c(\mathcal{Y})\), as well as its critical value \(\rho(\mathcal{Y})\), only when \(\mathcal{Y}\) forms a critical face; see Definition 2.2. In this section, however, one needs to extend these concepts, even when \(\mathcal{Y}\) may not form a critical face. Specifically, let \(\mathcal{Y}=\{y_{1},\ldots,y_{k+1}\}\) be a subset of \(\mathcal{P}_{n}\), which is in general position (\(\mathcal{Y}\) does not necessarily form a critical \(k\)-face). Let \[E(\mathcal{Y}):=\big{\{}z\in\mathbb{T}^{d}:\|z-y_{1}\|=\cdots=\|z-y_{k+1}\| \big{\}}\] be the collection of equidistant points from \(\mathcal{Y}\). Since \(\mathcal{Y}\) is in general position, \(E(\mathcal{Y})\) forms a \((d-k)\)-dimensional affine plane, so that there exists a unique point \(c(\mathcal{Y})\in E(\mathcal{Y})\) such that \(\|c(\mathcal{Y})-y_{1}\|=\inf_{z\in E(\mathcal{Y})}\|z-y_{1}\|\). Moreover, we define \(\rho(\mathcal{Y}):=\|c(\mathcal{Y})-y_{1}\|\). 
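For concreteness, the next sketch (our illustrative addition, not taken from the paper) computes \(c(\mathcal{Y})\) and \(\rho(\mathcal{Y})\) for a general-position subset and checks the two geometric conditions of Definition 2.2. It works with plain Euclidean distances, ignoring the torus identification (harmless when \(\rho(\mathcal{Y})\) is small, as in the regime \(R_{n}\to 0\)); the function names, tolerances, and array conventions are assumptions of the sketch.

```python
import numpy as np

def circumcenter(Y):
    """c(Y), rho(Y), and the coefficients t of c(Y) in the affine basis of Y.
    Y: (k+1, d) array of points in general position (Euclidean, no wrap-around)."""
    y0 = Y[0]
    V = (Y[1:] - y0).T                        # d x k matrix of edge vectors
    b = np.sum((Y[1:] - y0) ** 2, axis=1)     # squared edge lengths
    t = np.linalg.solve(2.0 * V.T @ V, b)     # equidistance equations within the affine hull
    c = y0 + V @ t
    return c, float(np.linalg.norm(c - y0)), t

def is_critical_face(Y, X, tol=1e-12):
    """Check the conditions of Definition 2.2 for Y against the full point set X."""
    c, rho, t = circumcenter(Y)
    bary = np.concatenate(([1.0 - t.sum()], t))
    in_open_simplex = bool(np.all(bary > tol))            # c(Y) lies in the open simplex sigma(Y)
    in_Y = np.any(np.all(np.isclose(X[:, None, :], Y[None, :, :]), axis=2), axis=1)
    others = X[~in_Y]                                     # points of X outside Y
    empty_ball = others.size == 0 or float(np.min(np.linalg.norm(others - c, axis=1))) > rho
    return in_open_simplex and empty_ball, c, rho

# Tiny usage example on a random configuration in the unit square:
rng = np.random.default_rng(0)
X = rng.random((20, 2))
print(is_critical_face(X[:3], X))
```

The linear system in `circumcenter` encodes the requirement \(\|c-y_{1}\|=\cdots=\|c-y_{k+1}\|\) within the affine hull of \(\mathcal{Y}\), which is precisely the point of \(E(\mathcal{Y})\) closest to \(y_{1}\).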
Before commencing the proof of Theorem 2.3, we define a finite and positive constant \[D_{k}:=\frac{(k!)^{d-k+1}}{(k+1)!d\omega_{d}^{k}}\,\binom{d}{k}\frac{\Omega_{ d}}{\Omega_{k}\Omega_{d-k}}\int_{(S^{k-1})^{k+1}}h_{k}(\boldsymbol{\theta})V_{ \mathsf{simp}}(\boldsymbol{\theta})^{d-k+1}\,\mathrm{d}\boldsymbol{\theta}, \tag{3.1}\] where \(\Omega_{j}=\prod_{i=1}^{j}\omega_{i}\) (\(\omega_{i}\) is volume of the \(i\)-dimensional unit ball), \(S^{k-1}\) is the \((k-1)\)-dimensional unit sphere, and \(h_{k}(\boldsymbol{\theta}):=\mathbbm{1}\left\{c(\boldsymbol{\theta})\in \sigma(\boldsymbol{\theta})\right\}\) (recall that \(\sigma(\boldsymbol{\theta})\) is an open geometric \(k\)-simplex in \(\mathbb{R}^{d}\) spanned by \(\boldsymbol{\theta}\)). Further, \(V_{\mathsf{simp}}(\boldsymbol{\theta})\) represents the volume of \(\sigma(\boldsymbol{\theta})\). Throughout this section, denote by \(C^{*}\) a generic positive constant, which is independent of \(n\) but may vary between and within the lines. ### Proof of Theorem 2.3 Proof.: The proof exploits the ideas in Theorem 4.1 of [7]. Recall that \(C^{+}_{K}(\mathbb{Y})\) is the collection of non-negative and continuous functions on \(\mathbb{Y}\) with compact support. Given \(H_{1},H_{2}\in C^{+}_{K}(\mathbb{Y})\) and \(\varepsilon_{1},\varepsilon_{2}>0\), define \(F_{H_{1},H_{2},\varepsilon_{1},\varepsilon_{2}}:M_{p}(\mathbb{Y})\to[0,1]\) by \[F_{H_{1},H_{2},\varepsilon_{1},\varepsilon_{2}}(\xi)=(1-e^{-(\xi(H_{1})- \varepsilon_{1})_{+}})(1-e^{-(\xi(H_{2})-\varepsilon_{2})_{+}}), \tag{3.2}\] where \((a)_{+}=a\) if \(a\geq 0\) and \(0\) otherwise, and \(\xi(H_{\ell})=\int_{\mathbb{Y}}H_{\ell}(x)\xi(\mathrm{d}x)\) for \(\ell=1,2\). Denote \(\lambda_{k,n}(\cdot):=b_{k,n}^{-1}\mathbb{P}(\eta_{k,n}\in\cdot)\) and recall the definition of \(\lambda_{k}\) at (2.7). According to Theorem A.2 in [15], (2.6) immediately follows if one can show that \[\lambda_{k,n}(F_{H_{1},H_{2},\varepsilon_{1},\varepsilon_{2}})\to\lambda_{k}( F_{H_{1},H_{2},\varepsilon_{1},\varepsilon_{2}}),\quad\text{as $n\to\infty$},\] for all \(H_{1},H_{2}\in C^{+}_{K}(\mathbb{Y})\) and \(\varepsilon_{1},\varepsilon_{2}>0\). Fix \(H_{1},H_{2}\) and \(\varepsilon_{1},\varepsilon_{2}\) henceforth and write \(F=F_{H_{1},H_{2},\varepsilon_{1},\varepsilon_{2}}\). Let \(\zeta_{k,n}\) be a Poisson point process on \(\mathbb{Y}\) with intensity measure \(b_{k,n}D_{k}e^{-u}\,\mathrm{d}c\,\mathrm{d}u\) for \(c\in\mathbb{T}^{d}\), \(u\in\mathbb{R}\). Then, writing \(\lambda_{k,n}(F)=b_{k,n}^{-1}\mathbb{E}\big{[}F(\eta_{k,n})\big{]}\), it suffices to demonstrate that as \(n\to\infty\), \[b_{k,n}^{-1}\big{|}\,\mathbb{E}\big{[}F(\eta_{k,n})\big{]}-\mathbb{E}\big{[}F( \zeta_{k,n})\big{]}\,\big{|}\to 0, \tag{3.3}\] \[b_{k,n}^{-1}\mathbb{E}\big{[}F(\zeta_{k,n})\big{]}\to\lambda_{k}(F). \tag{3.4}\] We begin with proving (3.3). Since \(H_{\ell}\) has compact support on \(\mathbb{Y}\), one can find \(u_{0}\in\mathbb{R}\) such that \[\mathrm{supp}(H_{1})\bigcup\mathrm{supp}(H_{2})\subset\mathbb{Y}_{0}:= \mathbb{T}^{d}\times[u_{0},\infty],\] where \(\mathrm{supp}(H_{\ell})\) denotes the support of \(H_{\ell}\). Thus, we may assume without loss of generality that \(\eta_{k,n}\) is a point process with state space restricted to \(\mathbb{Y}_{0}\), i.e., \[\eta_{k,n}=\sum_{\mathcal{Y}\subset\mathcal{P}_{n},\,|\mathcal{Y}|=k+1}g_{k,n} (\mathcal{Y},\mathcal{P}_{n};u_{0})\delta_{(c(\mathcal{Y}),\,n\omega_{d} \rho(\mathcal{Y})^{d}-a_{n})}\in M_{p}(\mathbb{Y}_{0}). 
\tag{3.5}\] Accordingly, \(\zeta_{k,n}\) can be viewed as a Poisson point process with mean measure \[(\mathsf{Leb}\otimes\tau_{k,n})(\mathrm{d}c,\mathrm{d}u):=b_{k,n}D_{k}e^{-u} \mathbbm{1}\{u\geq u_{0}\}\,\mathrm{d}c\,\mathrm{d}u,\ \ c\in\mathbb{T}^{d},\,u\in\mathbb{R}.\] Since \(0\leq F\leq 1\), it is elementary to see that \[\big{|}F(\mu_{1})-F(\mu_{2})\big{|}\leq 2d_{\mathsf{TV}}(\mu_{1},\mu_{2}), \quad\mu_{1},\mu_{2}\in M_{p}(\mathbb{Y}_{0}).\] This implies that \(F\) is \(2\)-Lipschitz, and hence, \[\big{|}\,\mathbb{E}\big{[}F(\eta_{k,n})\big{]}-\mathbb{E}\big{[}F(\zeta_{k,n} )\big{]}\,\big{|}\leq 2d_{\mathsf{KR}}\big{(}\mathcal{L}(\eta_{k,n}),\mathcal{L}( \zeta_{k,n})\big{)}.\] Now, (3.3) will follow if we can prove that \[b_{k,n}^{-1}d_{\mathsf{KR}}\big{(}\mathcal{L}(\eta_{k,n}),\mathcal{L}(\zeta_{k,n})\big{)}\to 0,\ \ \text{as}\ n\to\infty. \tag{3.6}\] By virtue of Theorem 4.1 in [7] (see also equ. (6.5) therein), it suffices to show that as \(n\to\infty\), \[b_{k,n}^{-1}d_{\mathsf{TV}}\big{(}\mathbb{E}[\eta_{k,n}(\cdot)],\,\mathsf{Leb }\otimes\tau_{k,n}\big{)}\to 0, \tag{3.7}\] \[b_{k,n}^{-1}\big{\{}\mathrm{Var}(\eta_{k,n}(\mathbb{Y}_{0}))-\mathbb{E}[\eta_{k,n}(\mathbb{Y}_{0})]\big{\}}\to 0, \tag{3.8}\] and \[\frac{b_{k,n}^{-1}n^{2(k+1)}}{\big{(}(k+1)!\big{)}^{2}}\int_{( \mathbb{T}^{d})^{k+1}}\int_{(\mathbb{T}^{d})^{k+1}}\mathbbm{1}\big{\{} \mathcal{B}(\mathbf{x})\cap\mathcal{B}(\mathbf{z})\neq\emptyset\big{\}}\\ \times\mathbb{E}\big{[}g_{k,n}(\mathbf{x},\mathcal{P}_{n}+\delta _{\mathbf{x}};u_{0})\big{]}\mathbb{E}\big{[}g_{k,n}(\mathbf{z},\mathcal{P}_{n }+\delta_{\mathbf{z}};u_{0})\big{]}\,\mathrm{d}\mathbf{z}\,\mathrm{d} \mathbf{x}\to 0, \tag{3.9}\] where \(\mathcal{B}(\mathbf{x})\) denotes an open ball in \(\mathbb{R}^{d}\) with radius \(\rho(\mathbf{x})\) centered at \(c(\mathbf{x})\). We now prove these equations in that order. _Proof of (3.7)_: According to Lemma 2.4 in [8], \[\mathbbm{1}\{\mathcal{Y}\ \text{forms a critical $k$-face}\}=h_{k}(\mathcal{Y}) \mathbbm{1}\big{\{}\mathcal{B}(\mathcal{Y})\cap\mathcal{P}_{n}=\emptyset\big{\}}, \tag{3.10}\] where \(h_{k}(\mathcal{Y})=\mathbbm{1}\{c(\mathcal{Y})\in\sigma(\mathcal{Y})\}\). Substituting (3.10) into (2.3) with \(u=u_{0}\), one can write \[g_{k,n}(\mathcal{Y},\mathcal{P}_{n};u_{0})=h_{k}(\mathcal{Y})\mathbbm{1}\big{\{} \mathcal{B}(\mathcal{Y})\cap\mathcal{P}_{n}=\emptyset\big{\}}\times\mathbbm{1} \big{\{}\rho(\mathcal{Y})\in[r_{n}(u_{0}),R_{n}]\big{\}}. \tag{3.11}\] Appealing to the multivariate Mecke formula for Poisson point processes (see, e.g., Chapter 4 in [16]) and using (3.11), we have, for every \(A\subset\mathbb{Y}_{0}\), \[\mathbb{E}[\eta_{k,n}(A)] =\frac{n^{k+1}}{(k+1)!}\,\int_{(\mathbb{T}^{d})^{k+1}}\mathbb{1} \big{\{}(c(\mathbf{x}),n\omega_{d}\rho(\mathbf{x})^{d}-a_{n})\in A\big{\}} \mathbb{E}\big{[}g_{k,n}(\mathbf{x},\mathcal{P}_{n}+\delta_{\mathbf{x}};u_{0}) \big{]}\,\mathrm{d}\mathbf{x}\] \[=\frac{n^{k+1}}{(k+1)!}\,\int_{(\mathbb{T}^{d})^{k+1}}\mathbb{1} \big{\{}(c(\mathbf{x}),n\omega_{d}\rho(\mathbf{x})^{d}-a_{n})\in A\big{\}}h_{k} (\mathbf{x})\mathbb{1}\,\{\rho(\mathbf{x})\leq R_{n}\}e^{-n\omega_{d}\rho( \mathbf{x})^{d}}\,\mathrm{d}\mathbf{x}. 
\tag{3.12}\] By a change of variable based on the Blaschke-Petkantschin-type formula provided in Lemma C.1 of [3], \[\begin{split}\mathbb{E}[\eta_{k,n}(A)]&=\frac{D_{ bp}}{(k+1)!}\,\int_{(S^{k-1})^{k+1}}h_{k}(\boldsymbol{\theta})V_{\mathsf{ simp}}(\boldsymbol{\theta})^{d-k+1}\,\mathrm{d}\boldsymbol{\theta}\\ &\quad\times n^{k+1}\int_{\mathbb{T}^{d}}\int_{0}^{R_{n}}\mathbb{1 }\big{\{}(c,n\omega_{d}\rho^{d}-a_{n})\in A\big{\}}\rho^{dk-1}e^{-n\omega_{d} \rho^{d}}\,\mathrm{d}\rho\,\mathrm{d}c,\end{split} \tag{3.13}\] where \[D_{bp}=(k!)^{d-k+1}\binom{d}{k}\frac{\Omega_{d}}{\Omega_{k}\Omega_{d-k}}.\] Performing the change of variable by \(u=n\omega_{d}\rho^{d}-a_{n}\), \[n^{k+1}\int_{\mathbb{T}^{d}}\int_{0}^{R_{n}}\mathbb{1}\big{\{}( c,n\omega_{d}\rho^{d}-a_{n})\in A\big{\}}\rho^{dk-1}e^{-n\omega_{d}\rho^{d}}\, \mathrm{d}\rho\,\mathrm{d}c\] \[=\frac{b_{k,n}}{d\omega_{d}^{k}}\,\int_{A}\mathbb{1}\big{\{}u\in( -a_{n},n\omega_{d}R_{n}^{d}-a_{n})\big{\}}\Big{(}1+\frac{u}{a_{n}}\Big{)}^{k-1 }e^{-u}\,\mathrm{d}c\,\mathrm{d}u.\] It thus follows that \[\mathbb{E}[\eta_{k,n}(A)]=D_{k}b_{k,n}\int_{A}\mathbb{1}\big{\{}u\in(-a_{n}, n\omega_{d}R_{n}^{d}-a_{n})\big{\}}\Big{(}1+\frac{u}{a_{n}}\Big{)}^{k-1}e^{-u}\, \mathrm{d}c\,\mathrm{d}u, \tag{3.14}\] where \(D_{k}\) is defined in (3.1). Since \[(\mathsf{Leb}\otimes\tau_{k,n})(A)=b_{k,n}D_{k}\int_{A}e^{-u}\,\mathrm{d}c\, \mathrm{d}u,\] we have, as \(n\to\infty\), \[b_{k,n}^{-1}d_{\mathsf{TV}}\big{(}\mathbb{E}[\eta_{k,n}(\cdot)],\,\mathsf{Leb}\otimes\tau_{k,n}\big{)} =b_{k,n}^{-1}\sup_{A\subset\mathbb{Y}_{0}}\big{|}\,\mathbb{E}[ \eta_{k,n}(A)]-(\mathsf{Leb}\otimes\tau_{k,n})(A)\big{|}\] \[\leq D_{k}\int_{u_{0}}^{\infty}\Big{|}\Big{(}1+\frac{u}{a_{n}} \Big{)}^{k-1}-1\Big{|}e^{-u}\,\mathrm{d}u\] \[\quad+D_{k}\int_{u_{0}}^{\infty}\mathbb{1}\big{\{}u\notin(-a_{n}, n\omega_{d}R_{n}^{d}-a_{n})\big{\}}e^{-u}\,\mathrm{d}u\to 0.\] The last convergence follows from the dominated convergence theorem and the assumption that \(a_{n}/(nR_{n}^{d})\to 0\), \(n\to\infty\). Proof of (3.8): By a simple calculation, \[\mathbb{E}\big{[}\eta_{k,n}(\mathbb{Y}_{0})^{2}\big{]}=\mathbb{E}\Big{[} \Big{(}\sum_{\mathcal{Y}\subset\mathcal{P}_{n},\,|\mathcal{Y}|=k+1}g_{k,n}( \mathcal{Y},\mathcal{P}_{n};u_{0})\Big{)}^{2}\Big{]}\] \[b_{k,n}^{-1}(I_{0,n}-I_{k+1,n}^{2})\leq C^{*}\big{\{}a_{n}^{-1}+a_{n}^{k+1}e ^{-C_{1}a_{n}^{(d-k)/(d+2)}}\big{\}}\to 0,\ \ \text{as}\ n\to\infty.\] _Proof of (3.9)_: First fix \(\mathbf{x}\in(\mathbb{T}^{d})^{k+1}\). For any \(\mathbf{z}\in(\mathbb{T}^{d})^{k+1}\) with \(\mathcal{B}(\mathbf{x})\cap\mathcal{B}(\mathbf{z})\neq\emptyset\), we have \(\|c(\mathbf{x})-c(\mathbf{z})\|\leq\rho(\mathbf{x})+\rho(\mathbf{z})\leq 2R_{n}\). 
Thus, \[1\big{\{}\mathcal{B}(\mathbf{x})\cap\mathcal{B}(\mathbf{z})\neq\emptyset \big{\}}\leq 1\big{\{}c(\mathbf{z})\in B_{2R_{n}}(c(\mathbf{x}))\big{\}}.\] Applying this inequality, while proceeding as in (3.12) and (3.13), \[\frac{b_{k,n}^{-1}n^{k+1}}{(k+1)!}\,\int_{(\mathbb{T}^{d})^{k+1}} \mathbbm{1}\big{\{}\mathcal{B}(\mathbf{x})\cap\mathcal{B}(\mathbf{z})\neq \emptyset\big{\}}\,\mathbb{E}[g_{k,n}(\mathbf{z},\mathcal{P}_{n}+\delta_{ \mathbf{z}};u_{0})]\,\mathrm{d}\mathbf{z}\] \[\leq\frac{b_{k,n}^{-1}n^{k+1}}{(k+1)!}\,\int_{(\mathbb{T}^{d})^{k +1}}\mathbbm{1}\big{\{}c(\mathbf{z})\in B_{2R_{n}}(c(\mathbf{x}))\big{\}}h_{k} (\mathbf{z})\mathbbm{1}\big{\{}\rho(\mathbf{z})\in[r_{n}(u_{0}),R_{n}]\big{\}} e^{-n\omega_{d}\rho(\mathbf{z})^{d}}\,\mathrm{d}\mathbf{z}\] \[=\frac{b_{k,n}^{-1}n^{k+1}}{(k+1)!}\,D_{bp}\int_{(S^{k-1})^{k+1}} h_{k}(\boldsymbol{\theta})V_{\mathsf{simp}}(\boldsymbol{\theta})^{d-k+1}\, \mathrm{d}\boldsymbol{\theta}\int_{B_{2R_{n}}(c(\mathbf{x}))}\,\mathrm{d}c \int_{r_{n}(u_{0})}^{R_{n}}\rho^{dk-1}e^{-n\omega_{d}\rho^{d}}\,\mathrm{d}\rho\] \[=b_{k,n}^{-1}(2R_{n})^{d}\omega_{d}\mathbb{E}[\eta_{k,n}(\mathbb{ Y}_{0})],\] where the last equality is due to (3.13) with \(A=\mathbb{Y}_{0}\). Because of (3.7), the last term above is further bounded by \(C^{*}b_{k,n}^{-1}R_{n}^{d}(\mathsf{Leb}\otimes\tau_{k,n})(\mathbb{Y}_{0})=C^{* }R_{n}^{d}\). Referring the obtained bound back to (3.9), one can eventually bound (3.9) by \[C^{*}R_{n}^{d}\,\frac{n^{k+1}}{(k+1)!}\,\int_{(\mathbb{T}^{d})^{k+1}}\mathbb{E }[g_{k,n}(\mathbf{x},\mathcal{P}_{n}+\delta_{\mathbf{x}};u_{0})]\,\mathrm{d} \mathbf{x}=C^{*}R_{n}^{d}\mathbb{E}[\eta_{k,n}(\mathbb{Y}_{0})]=o(R_{n}^{d}) \to 0,\quad n\to\infty,\] as desired. We will proceed to show (3.4). Notice that one may express \(\zeta_{k,n}=\sum_{i=1}^{N_{k,n}}\delta_{(C_{i},U_{i})}\), where \(N_{k,n}\) is a Poisson random variable with mean \(b_{k,n}D_{k}e^{-u_{0}}\) and \((C_{i},U_{i})\) are i.i.d. random vectors on \(\mathbb{Y}\) with density \(e^{-(u-u_{0})}\mathbbm{1}\{u\geq u_{0}\}\,\mathrm{d}c\,\mathrm{d}u\). Besides, \(N_{k,n}\) can be taken to be independent of \((C_{i},U_{i})\). With this construction now available, we have \[b_{k,n}^{-1}\mathbb{E}[F(\zeta_{k,n})]\] \[=b_{k,n}^{-1}\mathbb{E}\Big{[}\prod_{\ell=1}^{2}\Big{(}1-e^{-(H_ {\ell}(C_{1},U_{1})-\varepsilon_{\ell})_{+}}\Big{)}\mathbbm{1}\{N_{k,n}=1\} \Big{]}\] \[\qquad\qquad+b_{k,n}^{-1}\mathbb{E}\Big{[}\prod_{\ell=1}^{2} \Big{(}1-e^{-\big{(}\sum_{i=1}^{N_{k,n}}H_{\ell}(C_{i},U_{i})-\varepsilon_{ \ell}\big{)}_{+}}\Big{)}\mathbbm{1}\{N_{k,n}\geq 2\}\Big{]}\] \[=:A_{n}+B_{n}.\] By an elementary calculation, \[B_{n}\leq b_{k,n}^{-1}\mathbb{P}(N_{k,n}\geq 2)\leq C^{*}b_{k,n}\to 0,\quad n\to\infty.\] By the independence of \(N_{k,n}\) and \((C_{1},U_{1})\), \[A_{n} =b_{k,n}^{-1}\mathbb{E}\Big{[}\prod_{\ell=1}^{2}\Big{(}1-e^{-(H_ {\ell}(C_{1},U_{1})-\varepsilon_{\ell})_{+}}\Big{)}\Big{]}\mathbb{P}(N_{k,n}=1)\] \[=D_{k}e^{-u_{0}}\cdot e^{-b_{k,n}D_{k}e^{-u_{0}}}\,\int_{\mathbb{ Y}}\prod_{\ell=1}^{2}\Big{(}1-e^{-(H_{\ell}(c,u)-\varepsilon_{\ell})_{+}} \Big{)}e^{-(u-u_{0})}\mathbbm{1}\{u\geq u_{0}\}\,\mathrm{d}c\,\mathrm{d}u\] \[\to D_{k}\int_{\mathbb{Y}}\prod_{\ell=1}^{2}\Big{(}1-e^{-(H_{\ell }(c,u)-\varepsilon_{\ell})_{+}}\Big{)}e^{-u}\,\mathrm{d}c\,\mathrm{d}u=\lambda _{k}(F),\] as required. 
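The final step can also be checked numerically: with \(N_{k,n}=\zeta_{k,n}(\mathbb{Y}_{0})\) Poisson with a mean proportional to \(b_{k,n}\to 0\), only the event \(\{N_{k,n}=1\}\) contributes at first order, which is why the limit \(\lambda_{k}\) charges only single-point configurations. A minimal sanity check (added for illustration; the values of the mean \(m\) are arbitrary):

```python
import numpy as np

# For a Poisson variable N with small mean m (playing the role of b_{k,n} D_k e^{-u_0}):
# P(N = 1) / m -> 1 while P(N >= 2) / m -> 0, so only single-point configurations survive.
for m in [1e-1, 1e-2, 1e-3, 1e-4]:
    p1 = m * np.exp(-m)               # P(N = 1)
    p_ge2 = 1.0 - np.exp(-m) - p1     # P(N >= 2)
    print(f"m={m:.0e}:  P(N=1)/m={p1 / m:.6f}  P(N>=2)/m={p_ge2 / m:.6f}")
```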
### Proof of Theorem 2.5 Proof.: By restricting the state space from \(\mathbb{Y}\) to \(\mathbb{Y}_{0}=\mathbb{T}^{d}\times[u_{0},\infty]\), we can establish \(\mathcal{M}_{0}\)-convergence analogous to Theorem 2.3. Namely, as \(n\to\infty\), \[b_{k,n}^{-1}\mathbb{P}(\eta_{k,n}\in\cdot)\to\lambda_{k},\quad\text{in } \mathcal{M}_{0}.\] Here, \(\eta_{k,n}\) is defined as in (3.5) due to the restriction of the state space, while the limiting measure is written as \[\lambda_{k}(\cdot)=D_{k}\int_{\mathbb{Y}_{0}}\mathbb{1}\{\delta_{(c,u)}\in \cdot\}e^{-u}\,\mathrm{d}c\,\mathrm{d}u.\] Define a continuous map \(V:M_{p}(\mathbb{Y}_{0})\to\mathbb{N}:=\{0,1,2,\dots\}\) by \(V(\xi)=\xi(\mathbb{Y}_{0})\), where \(M_{p}(\mathbb{Y}_{0})\) is equipped with vague topology, and \(\mathbb{N}\) is equipped with the discrete topology. It then follows from Theorem 2.5 in [14] that \[\lambda_{k,n}\circ V^{-1}\to\lambda_{k}\circ V^{-1},\quad\text{in }\mathcal{M}_{0}(M_{p}(\mathbb{N})),\quad n\to\infty;\] equivalently, \[b_{k,n}^{-1}\mathbb{P}(G_{k,n}\in\cdot)\to D_{k}e^{-u_{0}}\delta_{1},\quad \text{in }\mathcal{M}_{0}(M_{p}(\mathbb{N})),\quad n\to\infty. \tag{3.19}\] For every \(x>0\), the indicator \(\mathbb{1}_{[x,\infty)}(\cdot):\mathbb{N}\to\{0,1\}\) is bounded and continuous with respect to the discrete topology. Moreover, it vanishes in the neighborhood of \(0\) (i.e., the origin of \(\mathbb{N}\)). In conclusion, \(\mathbb{1}_{[x,\infty)}(\cdot)\in\mathcal{C}_{0}\), and thus, (3.19) implies that for every \(x>0\), \[b_{k,n}^{-1}\mathbb{P}(G_{k,n}\geq x)\to D_{k}e^{-u_{0}}\delta_{1}([x,\infty)),\quad n\to\infty.\] Now, Lemma 6.1 in [20] concludes the desired vague convergence in Theorem 2.5. ### Proof of Corollary 2.6 Proof of (2.13).: _Part I_: \(2\leq k\leq d-1\). It suffices to show that as \(n\to\infty\), \[b_{k,n}^{-1}\big{|}\,\mathbb{E}\big{[}F(\eta_{k,n})\big{]}-\mathbb{E}\big{[}F( \eta_{k,n}^{+})\big{]}\,\big{|}\to 0,\] where \(F\) is defined at (3.2) (subscripts are all omitted). Since \(\bigcup_{\ell=1}^{2}\operatorname{supp}(H_{\ell})\subset\mathbb{Y}_{0}= \mathbb{T}^{d}\times[u_{0},\infty]\) for some \(u_{0}\in\mathbb{R}\), we can reformulate \(\eta_{k,n}^{\pm}\) as \[\eta_{k,n}^{\pm}=\sum_{\mathcal{Y}\subset\mathcal{P}_{n},\,|\,\mathcal{Y}|=k+1} g_{k,n}^{\pm}(\mathcal{Y},\mathcal{P}_{n};u_{0})\delta_{(c(\mathcal{Y}),\,n\omega _{d}\rho(\mathcal{Y})^{d}-a_{n})}\in M_{p}(\mathbb{Y}_{0}). \tag{3.20}\] As \(0\leq F\leq 1\), we have that \[b_{k,n}^{-1}\big{|}\,\mathbb{E}\big{[}F(\eta_{k,n})\big{]}-\mathbb{E}\big{[}F( \eta_{k,n}^{+})\big{]}\,\big{|}\leq b_{k,n}^{-1}\mathbb{P}(\eta_{k,n}\neq\eta _{k,n}^{+}).\] By definition, \(\eta_{k,n}=\eta_{k,n}^{+}+\eta_{k,n}^{-}\); hence, \(\eta_{k,n}\neq\eta_{k,n}^{+}\) implies that \(G_{k,n}^{-}=\eta_{k,n}^{-}(\mathbb{Y}_{0})\geq 1\). Now, by Markov's inequality, \[b_{k,n}^{-1}\mathbb{P}(\eta_{k,n}\neq\eta_{k,n}^{+})\leq b_{k,n}^{-1}\mathbb{P }(G_{k,n}^{-}\geq 1)\leq b_{k,n}^{-1}\mathbb{E}[G_{k,n}^{-}].\] By the proof of Proposition 5.1 in [3] (see the equation at page 744, line 3 therein), \[b_{k,n}^{-1}\mathbb{E}[G_{k,n}^{-}]\leq C^{*}b_{k,n}^{-1}n(a_{n}+u_{0})^{k-2}e ^{-(a_{n}+u_{0})}\leq C^{*}a_{n}^{-1}\to 0,\ \text{ as }n\to\infty, \tag{3.21}\] for which we have replaced "\(\Lambda\)" in [3] with \(a_{n}+u_{0}\). This completes the proof of Part I. _Part II_: \(k=1\). Our proof uses ideas from Chapter 13 of [19]. As in Part I, \(\eta_{1,n}\) and \(\eta_{1,n}^{+}\) can be given in the form of (3.5) and (3.20) for a fixed \(u_{0}\in\mathbb{R}\). 
For \(r>0\), let \(G(\mathcal{P}_{n},r)\) be a random geometric graph on a Poisson point process \(\mathcal{P}_{n}\) with edges \(\{x,y\}\) for all pairs \(x,y\in\mathcal{P}_{n}\) with \(\|x-y\|\leq r\). Since \(\eta_{1,n}^{-}\) corresponds to the edges of the minimal spanning tree on \(\mathcal{P}_{n}\), \(G_{1,n}^{-}\geq 1\) implies that \(G(\mathcal{P}_{n},2r_{n}(u_{0}))\) is not connected, and thus, by arguing as in Part I we have that \[\mathbb{P}(\eta_{1,n}\neq\eta_{1,n}^{+})\leq\mathbb{P}\big{(}G_{1,n}^{-}\geq 1 \big{)}\leq\mathbb{P}\big{(}G(\mathcal{P}_{n},2r_{n}(u_{0}))\text{ is not connected}\big{)}.\] Hence, it suffices to demonstrate that \[b_{1,n}^{-1}\mathbb{P}\big{(}G(\mathcal{P}_{n},2r_{n}(u_{0}))\text{ is not connected}\big{)}\to 0,\quad n\to\infty. \tag{3.22}\] Before continuing, we shall introduce a few required notions. Given a graph \(G\) with vertex set \(V\), a non-empty subset \(U\subset V\) is said to be a separating set for \(G\) if none of the vertices in \(V\setminus U\) are adjacent to \(U\). Moreover, a pair of non-empty disjoint sets \(U\subset V\), \(W\subset V\) is called a separating pair for \(G\), if \((i)\) the subgraph of \(G\) induced by \(U\) is connected and the same holds for \(W\), \((ii)\) none of the elements in \(U\) are adjacent to any element in \(W\), and \((iii)\) none of the elements of \(V\setminus(U\cup W)\) are adjacent to \(U\cup W\). Using these notions and given \(K>0\), we define \(E_{n}(K)\) as the event that there exists a separating set \(U\) for \(G(\mathcal{P}_{n},2r_{n}(u_{0}))\) with at least two elements, such that the diameter of \(U\) is less than \(2Kr_{n}(u_{0})\). Further, denote by \(F_{n}(K)\) the event that there exists a separating pair \((U,W)\) for \(G(\mathcal{P}_{n},2r_{n}(u_{0}))\), so that the diameters of \(U\) and \(W\) both exceed \(2Kr_{n}(u_{0})\). Now, one can upper bound (3.22) as \[b_{1,n}^{-1}\mathbb{P}\big{(}G(\mathcal{P}_{n},2r_{n}(u_{0}))\text{ is not connected}\big{)}\leq b_{1,n}^{-1}\mathbb{P}(A_{n})+b_{1,n}^{-1}\mathbb{P}(B_{n}), \tag{3.23}\] where \(A_{n}\) is the event that there exists at least one isolated vertex in \(G(\mathcal{P}_{n},2r_{n}(u_{0}))\), and \(B_{n}\) is the event for which there are no isolated vertices but \(G(\mathcal{P}_{n},2r_{n}(u_{0}))\) contains _multiple_ connected components (of size at least 2). Now Markov's inequality and the Mecke formula for Poisson point processes yield that \[b_{1,n}^{-1}\mathbb{P}(A_{n})\leq b_{1,n}^{-1}n\int_{\mathbb{T}^{d}}\mathbb{ P}\big{(}\mathcal{P}_{n}(B_{2r_{n}(u_{0})}(x))=0\big{)}\,\mathrm{d}x=e^{a_{n}-2^{ d}(a_{n}+u_{0})}\to 0,\ \text{ as }n\to\infty.\] In order to handle the remaining term in (3.23), it is sufficient to prove the following results. \((i)\) There exists \(K>0\) such that \[b_{1,n}^{-1}\mathbb{P}(F_{n}(K))\to 0,\quad n\to\infty. \tag{3.24}\] \((ii)\) For \(K\) as above, there exists \(K_{1}\in(0,K)\) such that \[b_{1,n}^{-1}\mathbb{P}(E_{n}(K_{1}))\to 0,\quad n\to\infty, \tag{3.25}\] \[b_{1,n}^{-1}\mathbb{P}\big{(}E_{n}(K)\setminus E_{n}(K_{1})\big{)} \to 0,\quad n\to\infty. 
\tag{3.26}\] Indeed, if one can show \((i)\) and \((ii)\) above, it follows from the proof of Theorem 13.10 in [19] and Lemma 13.1 therein that as \(n\to\infty\), \[b_{1,n}^{-1}\mathbb{P}(B_{n}) \leq b_{1,n}^{-1}\mathbb{P}\big{(}E_{n}(K)\big{)}+b_{1,n}^{-1} \mathbb{P}(F_{n}(K))\] \[=b_{1,n}^{-1}\mathbb{P}(E_{n}(K_{1}))+b_{1,n}^{-1}\mathbb{P} \big{(}E_{n}(K)\setminus E_{n}(K_{1})\big{)}+b_{1,n}^{-1}\mathbb{P}(F_{n}(K)) \to 0.\] Proof of (3.24): By the proof of Proposition 13.13 in [19], there exist constants \(\varepsilon>0\), \(\gamma>0\) such that we have, for \(K>0\) and large enough \(n\), \[\mathbb{P}(F_{n}(K))\leq C^{*}\sum_{i\geq K/\varepsilon}(2r_{n}(u_{0}))^{-2d}ie^ {\gamma i}\cdot e^{-i\varepsilon^{d}(2r_{n}(u_{0}))^{d}n}\leq C^{*}\frac{n^{2} }{a_{n}^{2}}\sum_{i\geq K/\varepsilon}e^{-((2\varepsilon)^{d}(a_{n}+u_{0})/ \omega_{d}-\gamma^{\prime})i},\] for some \(\gamma^{\prime}>0\) with \(ie^{\gamma i}\leq e^{\gamma^{\prime}i}\) for all \(i\). The last expression is further bounded by \(C^{*}n^{2}a_{n}^{-2}e^{-2^{d}\varepsilon^{d-1}Ka_{n}/\omega_{d}}\); thus, \[b_{1,n}^{-1}\mathbb{P}(F_{n}(K))\leq C^{*}ne^{-(2^{d}\varepsilon^{d-1}K/\omega _{d}-1)a_{n}}\leq C^{*}n^{2-2^{d}\varepsilon^{d-1}K/\omega_{d}}.\] If one chooses \(K>\omega_{d}/(2\varepsilon)^{d-1}\), then (3.24) follows as desired. Proof of (3.25): Having fixed \(K\) as above, we next show (3.25). It follows from equ. at page 297, line -3 in [19] and the proof of Lemma 13.5 in [19] that there exists \(K_{1}\in(0,K)\) such that \[\mathbb{P}(E_{n}(K_{1})) \leq C^{*}n(nr_{n}(u_{0})^{d})^{1-d}\int_{\mathbb{T}^{d}}\mathbb{ P}\big{(}\mathcal{P}_{n}(B_{2r_{n}(u_{0})}(x))=0\big{)}\,\mathrm{d}x\] \[=C^{*}n\Big{(}\frac{a_{n}+u_{0}}{\omega_{d}}\Big{)}^{1-d}e^{-2^{d} (a_{n}+u_{0})}\leq C^{*}na_{n}^{1-d}e^{-2^{d}a_{n}}.\] Therefore, \(b_{1,n}^{-1}\mathbb{P}(E_{n}(K_{1}))\leq C^{*}a_{n}^{1-d}e^{(1-2^{d})a_{n}}\to 0\) as \(n\to\infty\). Proof of (3.26): By equ. (13.43) in [19] and the last line of the proof of Lemma 13.16 therein, there exists \(\xi>0\) such that \[\mathbb{P}\big{(}E_{n}(K)\setminus E_{n}(K_{1})\big{)}\leq C^{*}n\int_{ \mathbb{T}^{d}}\mathbb{P}\big{(}\mathcal{P}_{n}(B_{2r_{n}(u_{0})}(x))=0\big{)} e^{-\frac{\xi n}{2}(2r_{n}(u_{0}))^{d}}\,\mathrm{d}x\leq C^{*}ne^{-2a_{n}-2^{d-1} \xi a_{n}/\omega_{d}}.\] This implies that as \(n\to\infty\), \[b_{1,n}^{-1}\mathbb{P}\big{(}E_{n}(K)\setminus E_{n}(K_{1})\big{)}\leq C^{*}e ^{-a_{n}-2^{d-1}\xi a_{n}/\omega_{d}}\to 0.\] Proof of (2.15).: One can exploit the same proof strategy as in Theorem 2.5, with \(\eta_{k,n}\) replaced by \(\eta_{k,n}^{+}\) and (2.6) replaced by (2.13). Proof of (2.16).: By Lemma 6.1 in [20] and (2.15), we only need to prove that for every \(x>0\), \[b_{k,n}^{-1}\big{\{}\mathbb{P}(G_{k+1,n}^{-}\geq x)-\mathbb{P}(G_{k,n}^{+}\geq x )\big{\}}\to 0,\quad n\to\infty.\] It thus suffices to show that as \(n\to\infty\), \[b_{k,n}^{-1}\mathbb{P}(G_{k,n}^{+}>G_{k+1,n}^{-})\to 0, \tag{3.27}\] \[b_{k,n}^{-1}\mathbb{P}(G_{k+1,n}^{-}>G_{k,n}^{+})\to 0. \tag{3.28}\] We begin with proving (3.27). Let \(i_{*}:H_{k}\big{(}\bigcup_{p\in\mathcal{P}_{n}}B_{r_{n}(u_{0})}(p)\big{)}\to H _{k}(\mathbb{T}^{d})\) be a map induced by the inclusion \(i:\bigcup_{p\in\mathcal{P}_{n}}B_{r_{n}(u_{0})}(p)\hookrightarrow\mathbb{T}^{d}\), where \(H_{k}(\cdot)\) represents the \(k\)th homology group. 
By the Nerve Lemma (see, e.g., Theorem 10.7 of [1]), there exists an isomorphism \(f_{*}:H_{k}\big{(}\check{C}(\mathcal{P}_{n},r_{n}(u_{0}))\big{)}\to H_{k} \big{(}\bigcup_{p\in\mathcal{P}_{n}}B_{r_{n}(u_{0})}(p)\big{)}\). Now, we can define the map \[j_{*}:=i_{*}\circ f_{*}:H_{k}\big{(}\check{C}(\mathcal{P}_{n},r_{n}(u_{0})) \big{)}\to H_{k}(\mathbb{T}^{d}).\] Suppose that \(j_{*}\) is surjective; that is, \(\mathrm{Im}(j_{*})=H_{k}(\mathbb{T}^{d})\). Then, all the positive critical \(k\)-faces will be eventually terminated by a "matching" negative critical \((k+1)\)-face, which in turn means that \(G_{k,n}^{+}\leq G_{k+1,n}^{-}\). For \(0<r_{1}<r_{2}<\infty\) and a subset \(\mathcal{Y}\subset\mathcal{P}_{n}\) with \(|\mathcal{Y}|=d+1\), let \(A_{r_{1},r_{2}}(\mathcal{Y})\) be the closure of an annulus \(B_{r_{2}}(c(\mathcal{Y}))\setminus B_{r_{1}}(c(\mathcal{Y}))\), i.e., the closed annulus centered at the critical point \(c(\mathcal{Y})\). We then define \[\hat{G}_{d,n}:=\sum_{\mathcal{Y}\subset\mathcal{P}_{n},\,|\mathcal{Y}|=d+1}g_{ d,n}(\mathcal{Y},\mathcal{P}_{n};u_{0})\mathds{1}\Big{\{}A_{r_{n}(u_{0}),4r_{n}(u_{0} )}(\mathcal{Y})\not\subset\bigcup_{p\in\mathcal{P}_{n}}B_{r_{n}(u_{0})}(p) \Big{\}}.\] According to the proof of Lemma 5.7 in [2], as well as Lemma 5.9 therein, one can see that if \(\hat{G}_{d,n}=0\), then \(j_{*}\) becomes surjective. Combining these observations, it now follows that \[b_{k,n}^{-1}\mathbb{P}(G_{k,n}^{+}>G_{k+1,n}^{-})\leq b_{k,n}^{-1}\mathbb{P}( \hat{G}_{d,n}\geq 1)\leq b_{k,n}^{-1}\mathbb{E}[\hat{G}_{d,n}].\] Finally, appealing to the last equation in the proof of Lemma 5.8 in [2], \[b_{k,n}^{-1}\mathbb{E}[\hat{G}_{d,n}]\leq C^{*}b_{k,n}^{-1}n(a_{n}+u_{0})^{d-1} e^{-(a_{n}+u_{0})(1+C^{*})}\leq C^{*}a_{n}^{d-k}e^{-C^{*}a_{n}}\to 0,\quad n\to\infty,\] as desired. Next, we focus our attention to (3.28). To estimate the probability in (3.28), we need to refer to the detailed discussion on the structure of critical \(k\)-faces, provided in Section 7 of [3]. Below, we introduce some additional concepts and notation, which we try to keep as consistent as possible with those in [3]. Suppose in the sequel that a \((k+1)\)-point subset \(\mathcal{Y}\) of \(\mathcal{P}_{n}\) is in general position. Let \(\Pi(\mathcal{Y})\) be the unique linear \(k\)-plane centered at \(c(\mathcal{Y})\) containing \(\mathcal{Y}\), and \(S^{k-1}\) be the unit sphere in \(\Pi(\mathcal{Y})\) (centered at \(c(\mathcal{Y})\)). Denote by \(\theta(\mathcal{Y})=\{\theta_{1}(\mathcal{Y}),\ldots,\theta_{k+1}(\mathcal{Y})\}\) the spherical coordinates of \(\mathcal{Y}\) in \(S^{k-1}\), and define \(\hat{\theta}_{i}(\mathcal{Y}):=\theta(\mathcal{Y})\setminus\{\theta_{i}( \mathcal{Y})\}\). Let \[\phi_{1}(\mathcal{Y}):=\min_{1\leq i\leq k+1}\big{\|}c(\hat{\theta}_{i}( \mathcal{Y}))\big{\|}\] be the scaled distance via \(\rho(\mathcal{Y})\), between the center \(c(\mathcal{Y})\) and the nearest \((k-1)\)-face of \(\mathcal{Y}\). Moreover, denote by \(\hat{\mathcal{Y}}_{\min}\) such a nearest \((k-1)\)-face of \(\mathcal{Y}\) from the center \(c(\mathcal{Y})\). Now, let \(\hat{\mathcal{Y}}_{i}\), \(i=1,\ldots,k\) be the remaining \((k-1)\)-faces of \(\mathcal{Y}\)_except for_\(\hat{\mathcal{Y}}_{\min}\). 
We then define \[\phi_{2}(\mathcal{Y}):=\min_{1\leq i\leq k}\rho(\mathcal{Y})^{-1}\inf_{z\in\Pi (\hat{\mathcal{Y}}_{i})}\big{\|}c(\hat{\mathcal{Y}}_{\min})-z\big{\|},\] which is the distance scaled by \(\rho(\mathcal{Y})\), between \(c(\hat{\mathcal{Y}}_{\min})\) and the nearest \((k-1)\)-face of \(\mathcal{Y}\) (except for \(\hat{\mathcal{Y}}_{\min}\)). Next, let \[\hat{\rho}(\mathcal{Y}):=\rho(\mathcal{Y})+\|c(\hat{\mathcal{Y}}_{\min})-c( \mathcal{Y})\|,\] and \[\hat{\mathcal{B}}(\mathcal{Y}):=B_{\hat{\rho}(\mathcal{Y})}\big{(}c(\hat{ \mathcal{Y}}_{\min})\big{)}\setminus\mathcal{B}(\mathcal{Y}),\] where \(\mathcal{B}(\mathcal{Y})\) is defined at (3.9). Let \(\hat{\Pi}(\mathcal{Y})\) be the \((d-1)\)-dimensional affine plane containing \(c(\hat{\mathcal{Y}}_{\min})\) and orthogonal to the line through \(c(\mathcal{Y})\) and \(c(\hat{\mathcal{Y}}_{\min})\). Given \(\alpha>0\), define \[\hat{\mathcal{B}}_{\alpha}(\mathcal{Y}):=\big{\{}y\in\hat{\mathcal{B}}( \mathcal{Y}):\inf_{z\in\hat{\Pi}(\mathcal{Y})}\|y-z\|\geq\alpha\rho(\mathcal{ Y})\big{\}}. \tag{3.29}\] Since \(\Pi(\hat{\mathcal{Y}}_{\min})\subset\hat{\Pi}(\mathcal{Y})\), the set (3.29) contains points in \(\hat{\mathcal{B}}(\mathcal{Y})\) that are distant at least \(\alpha\rho(\mathcal{Y})\) from the plane \(\Pi(\hat{\mathcal{Y}}_{\min})\). Before proceeding, we define \[G^{(-,+)}_{k+1,n}:=\sum_{\mathcal{Y}\subset\mathcal{P}_{n},\,|\mathcal{Y}|=k+2}g^{ -}_{k+1,n}(\mathcal{Y},\mathcal{P}_{n};u_{0})\mathbb{1}\,\{\hat{\mathcal{Y}}_{ \min}\text{ is a positive critical $k$-face}\},\] which is the number of negative critical \((k+1)\)-faces whose critical values are between \(r_{n}(u_{0})\) and \(R_{n}\), so that \(\hat{\mathcal{Y}}_{\min}\) forms a positive critical \(k\)-face. Because of equ. (7.11) in [3], it holds that \[G^{-}_{k+1,n}-G^{(-,+)}_{k+1,n}\leq\sum_{i=1}^{4}G^{(i)}_{k+1,n},\] where \(G^{(i)}_{k+1,n}\) above are defined respectively as \[G^{(1)}_{k+1,n} =\sum_{\mathcal{Y}\subset\mathcal{P}_{n},\,|\mathcal{Y}|=k+2}g^{ -}_{k+1,n}(\mathcal{Y},\mathcal{P}_{n};u_{0})\mathbb{1}\,\{\phi_{1}(\mathcal{ Y})>\varepsilon_{1}\},\] \[G^{(2)}_{k+1,n} =\sum_{\mathcal{Y}\subset\mathcal{P}_{n},\,|\mathcal{Y}|=k+2}g^{ -}_{k+1,n}(\mathcal{Y},\mathcal{P}_{n};u_{0})\mathbb{1}\,\{\phi_{1}(\mathcal{ Y})\leq\varepsilon_{1},\,\phi_{2}(\mathcal{Y})\leq\varepsilon_{2}\},\] \[G^{(3)}_{k+1,n} =\sum_{\mathcal{Y}\subset\mathcal{P}_{n},\,|\mathcal{Y}|=k+2}g^{ -}_{k+1,n}(\mathcal{Y},\mathcal{P}_{n};u_{0})\mathbb{1}\,\{\phi_{1}(\mathcal{ Y})\leq\varepsilon_{1},\,\big{(}\hat{\mathcal{B}}(\mathcal{Y})\setminus\hat{ \mathcal{B}}_{\varepsilon_{3}}(\mathcal{Y})\big{)}\cap\mathcal{P}_{n}\neq\emptyset\},\] \[G^{(4)}_{k+1,n} =\sum_{\mathcal{Y}\subset\mathcal{P}_{n},\,|\mathcal{Y}|=k+2}g^{ -}_{k+1,n}(\mathcal{Y},\mathcal{P}_{n};u_{0})\mathbb{1}\,\{\phi_{1}(\mathcal{ Y})\leq\varepsilon_{1},\,\phi_{2}(\mathcal{Y})>\varepsilon_{2},\,\hat{ \mathcal{B}}_{\varepsilon_{3}}(\mathcal{Y})\cap\mathcal{P}_{n}\neq\emptyset\},\] with \[\varepsilon_{1}:=\frac{2}{D_{k,2}}\cdot\frac{\log(a_{n}+u_{0})}{a_{n}+u_{0}}; \ \ \varepsilon_{2}:=\big{(}\log(a_{n}+u_{0})\big{)}^{-4};\ \ \varepsilon_{3}:= \varepsilon_{1}^{2/3}\] (\(D_{k,2}\) is a positive constant introduced in the proof of Lemma 5.6 in [3]). Although \(\varepsilon_{1}\) and \(\varepsilon_{3}\) above are defined analogously to equ. (7.12) in [3], the definition of \(\varepsilon_{2}\) is different from equ. (7.12) of [3]. 
Furthermore, define \[G^{(5)}_{k+1,n}=\sum_{\mathcal{Y}\subset\mathcal{P}_{n},\,|\mathcal{Y}|=k+2}g^{ -}_{k+1,n}(\mathcal{Y},\mathcal{P}_{n};u_{0})\mathbb{1}\,\{\phi_{1}(\mathcal{ Y})\leq\varepsilon_{1},\,\rho(\mathcal{Y})>r_{n}(u_{0}),\,\rho(\hat{ \mathcal{Y}}_{\min})<r_{n}(u_{0})\}.\] From the discussion in Part II of the proof of Proposition 7.1 in [3], it is known that \(G^{(i)}_{k+1,n}=0\) for \(i=1,\ldots,5\) implies that \(G^{(-,+)}_{k+1,n}\leq G^{+}_{k,n}\). In conclusion, one can see that \[b^{-1}_{k,n}\mathbb{P}(G^{-}_{k+1,n}>G^{+}_{k,n}) \leq b^{-1}_{k,n}\mathbb{P}\Big{(}\sum_{i=1}^{4}G^{(i)}_{k+1,n}+G ^{(-,+)}_{k+1,n}>G^{+}_{k,n}\Big{)}\] \[\leq b^{-1}_{k,n}\mathbb{P}\Big{(}G^{(-,+)}_{k+1,n}>G^{+}_{k,n}, \,\sum_{i=1}^{5}G^{(i)}_{k+1,n}=0\Big{)}+b^{-1}_{k,n}\mathbb{P}\Big{(}\sum_{i=1 }^{5}G^{(i)}_{k+1,n}\geq 1\Big{)}\] \[\leq\sum_{i=1}^{5}b^{-1}_{k,n}\mathbb{E}[G^{(i)}_{k+1,n}].\] From this analysis, it is enough to show that for every \(i\in\{1,\ldots,5\}\), \[b^{-1}_{k,n}\mathbb{E}[G^{(i)}_{k+1,n}]\to 0,\quad\text{as $n\to\infty$}. \tag{3.30}\] The proof of (3.30) is highly related to Lemmas 7.9-7.13 in [3]. First, by replacing "\(\Lambda\)" in [3] with \(a_{n}+u_{0}\), the proof of Lemma 7.9 in [3] ensures that \[b_{k,n}^{-1}\mathbb{E}[G_{k+1,n}^{(1)}]\leq C^{*}b_{k,n}^{-1}n(a_{n}+u_{0})^{k-2 }e^{-(a_{n}+u_{0})}\leq C^{*}a_{n}^{-1}\to 0,\quad n\to\infty.\] Subsequently, proceeding as in the proof of Lemma 7.10 in [3], \[b_{k,n}^{-1}\mathbb{E}[G_{k+1,n}^{(2)}] \leq C^{*}b_{k,n}^{-1}n(a_{n}+u_{0})^{k}e^{-(a_{n}+u_{0})}\Big{\{} \frac{\varepsilon_{1}\varepsilon_{12}}{\sqrt{1-\varepsilon_{1}^{2}}}+\frac{ \varepsilon_{1}(\varepsilon_{1}+\varepsilon_{2})}{\varepsilon_{12}}\Big{\}}\] \[\leq C^{*}a_{n}\Big{\{}\frac{\varepsilon_{1}\varepsilon_{12}}{ \sqrt{1-\varepsilon_{1}^{2}}}+\frac{\varepsilon_{1}(\varepsilon_{1}+ \varepsilon_{2})}{\varepsilon_{12}}\Big{\}},\] where \(\varepsilon_{12}:=(\log(a_{n}+u_{0}))^{-2}\). Since \(\varepsilon_{1}\to 0\) and \(\varepsilon_{1}/\varepsilon_{2}\to 0\) as \(n\to\infty\), one can see that \(b_{k,n}^{-1}\mathbb{E}[G_{k+1,n}^{(2)}]\leq C^{*}(\log a_{n})^{-1}\to 0\) as \(n\to\infty\). Next, it follows from the proof of Lemma 7.11 in [3] that \[b_{k,n}^{-1}\mathbb{E}[G_{k+1,n}^{(3)}]\leq C^{*}b_{k,n}^{-1}\varepsilon_{3}^{ 4}n(a_{n}+u_{0})^{k+1}e^{-(a_{n}+u_{0})}\leq C^{*}(\log a_{n})^{8/3}a_{n}^{-2/ 3}\to 0,\quad n\to\infty.\] Furthermore, by the proof of Lemma 7.12 in [3], \[b_{k,n}^{-1}\mathbb{E}[G_{k+1,n}^{(4)}]\leq C^{*}b_{k,n}^{-1}\varepsilon_{0}^ {-d}\varepsilon_{1}n(a_{n}+u_{0})^{k}e^{-(a_{n}+u_{0})(1+C^{*}\varepsilon_{0} )},\] where \(\varepsilon_{0}:=(a_{n}+u_{0})^{-3/4}\). Thus, we have \[b_{k,n}^{-1}\mathbb{E}[G_{k+1,n}^{(4)}]\leq C^{*}a_{n}^{3d/4}(\log a_{n})e^{- C^{*}a_{n}^{1/4}}\to 0,\quad n\to\infty.\] Finally, the proof of Lemma 7.13 in [3] concludes that as \(n\to\infty\), \[b_{k,n}^{-1}\mathbb{E}[G_{k+1,n}^{(5)}] \leq C^{*}b_{k,n}^{-1}\varepsilon_{1}n(a_{n}+u_{0})^{k}e^{-(a_{n }+u_{0})}\big{\{}d\varepsilon_{1}^{2}(a_{n}+u_{0})+o(\varepsilon_{1}^{2}(a_{n }+u_{0}))\big{\}}\] \[\leq C^{*}b_{k,n}^{-1}\varepsilon_{1}^{3}n(a_{n}+u_{0})^{k+1}e^{-( a_{n}+u_{0})}\leq C^{*}(\log a_{n})^{3}a_{n}^{-1}\to 0.\] Now, (3.30) has been established. Proof of (2.14).: Analogously to (2.12), we define the process \((\eta_{k,n}^{(+,r)})_{n\geq 1}\) by dropping the locational coordinate \(c(\mathcal{Y})\) from (2.11). 
Note that by continuous mapping theorem, the restricted process \((\eta_{k,n}^{(+,r)})_{n\geq 1}\) satisfies \(\mathcal{M}_{0}\) convergence analogous to (2.13) with the limit measure \(\lambda_{k}^{(r)}\) defined at (2.14). Define \(F=F_{H_{1},H_{2},\epsilon_{1},\epsilon_{2}}\) as in (3.2), for which the domain of \(F\) is restricted to \(M_{p}\big{(}(-\infty,\infty]\big{)}\). Then, to complete the proof, we need to show that \[b_{k,n}^{-1}\big{|}\mathbb{E}[F(\eta_{k,n}^{(+,r)})]-\mathbb{E}[F(\eta_{k+1,n} ^{(-,r)})]\big{|}\to 0. \tag{3.31}\] Again the compact support of \(H_{\ell}\) on \((-\infty,\infty]\) gives us \(u_{0}\in\mathbb{R}\) such that \[\operatorname{supp}(H_{1})\bigcup\operatorname{supp}(H_{2})\subset[u_{0}, \infty].\] From (2.15) and (2.16), we have \(b_{k,n}^{-1}\mathbb{P}(G_{k,n}^{+}(u_{0})\geq 2)\to 0\) and \(b_{k,n}^{-1}\mathbb{P}(G_{k+1,n}^{-}(u_{0})\geq 2)\to 0\) as \(n\to\infty\). Because of \(0\leq F\leq 1\), (3.27), (3.28), one can obtain (3.31) if we show that \[b_{k,n}^{-1}\mathbb{E}\big{[}|F(\eta_{k,n}^{(+,r)})-F(\eta_{k+1,n}^{(-,r)})|1 \{G_{k,n}^{+}(u_{0})=G_{k+1,n}^{-}(u_{0})=1\}\big{]}\to 0,\quad n\to\infty. \tag{3.32}\] Fix \(\delta>0\). Again from (2.15) and (2.16), we can choose \(u_{1}>u_{0}\) so large that \[\lim_{n\to\infty}b_{k,n}^{-1}\mathbb{P}(G_{k,n}^{+}(u_{1})\geq 1)\leq\delta, \quad\text{and}\quad\lim_{n\to\infty}b_{k,n}^{-1}\mathbb{P}(G_{k+1,n}^{-}(u_{1}) \geq 1)\leq\delta.\] Define \(A_{n}:=\{G_{k,n}^{+}(u_{0})=G_{k+1,n}^{-}(u_{0})=1\), \(G_{k,n}^{+}(u_{1})=G_{k+1,n}^{-}(u_{1})=0\}\); then, (3.32) follows if we can show that \[\lim_{n\to\infty}b_{k,n}^{-1}\mathbb{E}\big{[}|F(\eta_{k,n}^{(+,r)})-F(\eta_{k+ 1,n}^{(-,r)})|\mathbb{1}_{A_{n}}\big{]}=0. \tag{3.33}\] Observing that \[\big{|}F(\eta_{k,n}^{(+,r)})-F(\eta_{k+1,n}^{(-,r)})\big{|}\leq 2\sum_{\ell=1}^{ 2}|\eta_{k,n}^{(+,r)}(H_{\ell})-\eta_{k+1,n}^{(-,r)}(H_{\ell})|,\] (3.33) is implied by \[\lim_{n\to\infty}b_{k,n}^{-1}\mathbb{E}\big{[}|\eta_{k,n}^{(+,r)}(H_{\ell})- \eta_{k+1,n}^{(-,r)}(H_{\ell})|\mathbb{1}_{A_{n}}\big{]}=0,\] for each \(\ell=1,2\). Fix \(\ell\in\{1,2\}\). If \(A_{n}\) holds, we may assume, without loss of generality, that \(H_{\ell}\) is supported on \([u_{0},u_{1}]\). Moreover, under \(A_{n}\), there exist random variables \(X_{n},X_{n}^{\prime}\in[u_{0},u_{1}]\) such that \(\eta_{k,n}^{(+,r)}(H_{\ell})=H_{\ell}(X_{n})\) and \(\eta_{k+1,n}^{(-,r)}(H_{\ell})=H_{\ell}(X_{n}^{\prime})\). Since \(H_{\ell}\) is uniformly continuous on \([u_{0},u_{1}]\), for every \(\delta>0\), there exists \(\delta_{0}>0\) such that \(\big{|}H_{\ell}(X_{n})-H_{\ell}(X_{n}^{\prime})\big{|}\leq\delta\) whenever \(|X_{n}-X_{n}^{\prime}|\leq\delta_{0}\). Applying the above observations, \[b_{k,n}^{-1}\mathbb{E}\big{[}|\eta_{k,n}^{(+,r)}(H_{\ell})-\eta_{k+1,n}^{(-,r) }(H_{\ell})|\mathbb{1}_{A_{n}}\big{]}\leq\delta b_{k,n}^{-1}\mathbb{P}(A_{n})+ C^{*}b_{k,n}^{-1}\mathbb{P}\big{(}A_{n}\cap\big{\{}|X_{n}-X_{n}^{\prime}|> \delta_{0}\big{\}}\big{)}.\] By (2.15), \[\limsup_{n\to\infty}\delta b_{k,n}^{-1}\mathbb{P}(A_{n})\leq\delta\lim_{n\to \infty}b_{k,n}^{-1}\mathbb{P}(G_{k,n}^{+}(u_{0})\geq 1)=\delta D_{k}e^{-u_{0}}.\] As \(\delta>0\) is arbitrary, it now remains to show that for any \(\delta_{0}>0\), \[\lim_{n\to\infty}b_{k,n}^{-1}\mathbb{P}\big{(}A_{n}\cap\big{\{}|X_{n}-X_{n}^{ \prime}|>\delta_{0}\big{\}}\big{)}=0.\] Let \(\delta_{0}>0\) be fixed. 
For the proof, let us partition \([u_{0},u_{1})=\bigcup_{i=1}^{m}[v_{i},v_{i+1})\) such that \(|v_{i+1}-v_{i}|\leq\delta_{0}\) for all \(i\in\{1,\ldots,m\}\). For \(1\leq i\leq m\), set \[G_{k,n}^{+}(i):=G_{k,n}^{+}(v_{i})-G_{k,n}^{+}(v_{i+1}),\quad\text{and}\quad G _{k+1,n}^{-}(i):=G_{k+1,n}^{-}(v_{i})-G_{k+1,n}^{-}(v_{i+1}).\] By construction, one can see that \[A_{n}\cap\big{\{}|X_{n}-X_{n}^{\prime}|>\delta_{0}\big{\}}\subset\bigcup_{i=1} ^{m}\big{\{}G_{k,n}^{+}(i)\neq G_{k+1,n}^{-}(i)\big{\}}.\] Indeed, if \(|X_{n}-X_{n}^{\prime}|>\delta_{0}\) under \(A_{n}\), then \(X_{n}\) and \(X_{n}^{\prime}\) must fall into distinct subsets of the partition. Since \(m\) is finite, it suffices to demonstrate that for all \(1\leq i\leq m\), as \(n\to\infty\), \[b_{k,n}^{-1}\mathbb{P}\big{(}G_{k,n}^{+}(i)\neq G_{k+1,n}^{-}(i)\big{)}\to 0.\] This is however a direct consequence of (3.27) and (3.28) by restricting the state space properly.
2309.10253
GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts
Large language models (LLMs) have recently experienced tremendous popularity and are widely used from casual conversations to AI-driven programming. However, despite their considerable success, LLMs are not entirely reliable and can give detailed guidance on how to conduct harmful or illegal activities. While safety measures can reduce the risk of such outputs, adversarial jailbreak attacks can still exploit LLMs to produce harmful content. These jailbreak templates are typically manually crafted, making large-scale testing challenging. In this paper, we introduce GPTFuzz, a novel black-box jailbreak fuzzing framework inspired by the AFL fuzzing framework. Instead of manual engineering, GPTFuzz automates the generation of jailbreak templates for red-teaming LLMs. At its core, GPTFuzz starts with human-written templates as initial seeds, then mutates them to produce new templates. We detail three key components of GPTFuzz: a seed selection strategy for balancing efficiency and variability, mutate operators for creating semantically equivalent or similar sentences, and a judgment model to assess the success of a jailbreak attack. We evaluate GPTFuzz against various commercial and open-source LLMs, including ChatGPT, LLaMa-2, and Vicuna, under diverse attack scenarios. Our results indicate that GPTFuzz consistently produces jailbreak templates with a high success rate, surpassing human-crafted templates. Remarkably, GPTFuzz achieves over 90% attack success rates against ChatGPT and Llama-2 models, even with suboptimal initial seed templates. We anticipate that GPTFuzz will be instrumental for researchers and practitioners in examining LLM robustness and will encourage further exploration into enhancing LLM safety.
Jiahao Yu, Xingwei Lin, Zheng Yu, Xinyu Xing
2023-09-19T02:19:48Z
http://arxiv.org/abs/2309.10253v4
# GPTFuzzer: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ###### Abstract Content warning: This paper contains unfiltered content generated by LLMs that may be offensive to readers. Large language models (LLMs) have recently experienced tremendous popularity and are widely used from casual conversations to AI-driven programming. However, despite their considerable success, LLMs are not entirely reliable and can give detailed guidance on how to conduct harmful or illegal activities. While safety measures can reduce the risk of such outputs, adversarial "jailbreak" attacks can still exploit LLMs to produce harmful content. These jailbreak templates are typically manually crafted, making large-scale testing challenging. In this paper, we introduce GPTFuzzer, a novel black-box jailbreak fuzzing framework inspired by the AFL fuzzing framework. Instead of manual engineering, GPTFuzzer automates the generation of jailbreak templates for red-teaming LLMs. At its core, GPTFuzzer starts with human-written templates as initial seeds, then mutates them to produce new templates. We detail three key components of GPTFuzzer: a seed selection strategy for balancing efficiency and variability, mutate operators for creating semantically equivalent or similar sentences, and a judgment model to assess the success of a jailbreak attack. We evaluate GPTFuzzer against various commercial and open-source LLMs, including ChatGPT, Llama-2, and Vicuna, under diverse attack scenarios. Our results indicate that GPTFuzzer consistently produces jailbreak templates with a high success rate, surpassing human-crafted templates. Remarkably, GPTFuzzer achieves over 90% attack success rates against ChatGPT and Llama-2 models, even with suboptimal initial seed templates. We anticipate that GPTFuzzer will be instrumental for researchers and practitioners in examining LLM robustness and will encourage further exploration into enhancing LLM safety. ## 1 Introduction **Large Language Models (LLMs)**, such as ChatGPT [44] and GPT-4 [10], have demonstrated immense potential in diverse domains including education, reasoning, programming, and scientific research. The ability of LLMs to produce human-like text has led to their widespread adoption in various applications. However, this ubiquity introduces challenges, as LLMs are not always reliable. They can produce toxic or misleading content [15, 21, 66] and are susceptible to "hallucinations" that result in nonsensical or untruthful outputs [33, 45]. Furthermore, their widespread use has made them targets for adversarial attacks, including backdoor attacks [5, 32, 41], prompt injection [49, 23, 36], and data poisoning [34, 43, 73]. A notable adversarial strategy is the **jailbreak attack** [32, 37], which uses crafted prompts to bypass LLM safeguards, potentially eliciting harmful responses. While unlocking LLMs' potential, these attacks can also produce outputs that breach provider guidelines or even legal boundaries. For instance, a successful jailbreak attack on a chatbot might result in the generation of offensive content, risking the chatbot's suspension. Thus, assessing the resilience of LLMs to jailbreak attacks is crucial before real-world deployment. Most existing research on jailbreak attacks predominantly relies on manually crafting prompts [49, 63, 31, 36, 46, 31]. While these handcrafted prompts can be finely tailored to specific LLM behaviors, this approach has several inherent limitations: * **Scalability**: Manually designing prompts is not scalable.
As the number of LLMs and their versions increase, creating individual prompts for each becomes impractical. * **Labor-Intensity**: Crafting effective jailbreak prompts requires deep expertise and significant time investment. This makes the process costly, especially when considering the continuous evolution and updates of LLMs. * **Coverage**: Manual approaches might miss certain vulnerabilities due to human oversight or biases. An automated system can explore a broader range of potential weaknesses, ensuring more comprehensive robustness evaluations. * **Adaptability**: LLMs are continuously evolving, with new versions and updates being released regularly [11]. Manual methods struggle to keep pace with these rapid changes, potentially leaving newer vulnerabilities unexplored. Given these challenges, there is a clear and pressing need for an automated framework that can efficiently generate jailbreak prompts, ensuring comprehensive, and scalable evaluations of LLM robustness. Addressing this need, we sought to develop a solution that addresses the shortcomings of manual prompt design while harnessing the power of automation. Our approach aims to combine the valuable human-written prompts with the scalability and adaptability of automated systems, ensuring a more robust and comprehensive evaluation of LLM vulnerabilities. Drawing inspiration from AFL fuzzing, we introduce GPTFuzzer, a black-box jailbreak fuzzing framework for the automated generation of jailbreak prompts. Our system hinges on three pivotal components: seed selection strategy, mutate operators, and judgment model. We begin with human-crafted jailbreak prompts as seeds, mutating them to produce new prompts. The judgment model then evaluates the success of the jailbreak attack. Successful mutants are added to the seed pool, while unsuccessful ones are discarded. This process iterates until a set number of cycles are completed. To sum up, our research contributions are as follows: * The introduction of GPTFuzzer, a novel black-box jailbreak fuzzing framework for the automated generation of jailbreak prompts targeting LLMs. * The design and validation of three essential components for GPTFuzzer: seed selection strategy, mutate operators, and judgment model. We carefully design these components and they are instrumental to GPTFuzzer's success. * An extensive evaluation of GPTFuzzer across both commercial and open-source LLMs. Our framework consistently achieves impressive attack success rates. Notably, even when initialized with failed human-written prompts, our method still manages to achieve an attack success rate of over 90% against well-aligned models like ChatGPT and Llama-2. In terms of transfer attacks, our generated prompts demonstrate the capability to target unseen LLMs with a variety of harmful questions, proving very high attack success rates against popular LLMs such as Bard (61%), Claude-2 (90%), and PaLM2 (95%). To the best of our knowledge, this represents the most effective and universal black-box approach against these models. * Discussing and addressing ethical considerations about the potential harm that could be caused by our tool. To empower the research community in advancing their understanding and evaluations of LLMs, we are making all our codes and models publicly available for reproduction. We delve deeper into ethical considerations in Section 6, where we outline our deliberate efforts to minimize adverse impacts that may emerge from our work. 
## 2 Background Information In this section, we delve into the definitions of the terminologies used in our paper. We begin by introducing the foundational elements of LLMs and then illustrate general concepts of fuzzing that inspire our work. ### Llm An LLM is a deep learning architecture, specifically a type of neural network, trained on massive datasets to understand and generate human-like text. These models leverage the power of their large number of parameters, often in the billions, to encapsulate a broad understanding of language, making them capable of completing a wide variety of tasks. **Models.** A majority of renowned LLMs, including ChatGPT and GPT-4, are built upon the transformer architecture [57]. This architecture employs attention mechanisms to discern the interrelations between words in textual sequences. These models, being auto-regressive, decoder-only transformer variants, predict subsequent words in a sequence based on the preceding context. In a nutshell, given a sequence \(w_{1},w_{2},...,w_{n}\), the model predicts the next word \(w_{n+1}\) by maximizing the probability of the next word based on previous words. The model does it iteratively, so once it predicts \(w_{n+1}\), it will use the extended sequence \(w_{1},w_{2},...,w_{n+1}\) to predict \(w_{n+2}\), and so on. This makes auto-regressive LLMs particularly suitable for text generation tasks where the model continues a given prompt with coherent and contextually relevant text. **Training.** In the training phase, auto-regressive LLMs aim to maximize the likelihood of the succeeding word based on its predecessors, allowing self-supervised training with diverse text corpora like Wikipedia, Reddit, or even a collection of books. Besides, ChatGPT, GPT-4, and LLaMa-2 are also trained with Reinforcement Learning from Human Feedback (RLHF) [48] to better respond to human instructions and align with human values [55, 52, 47, 6, 4]. **Prompt.** A prompt in the context of LLMs refers to the initial input given to the model, guiding its subsequent content generation [9]. For example, if one provides the model with a prompt like "Briefly describe how to learn Python", the model would then generate a detailed response. Prompts are crucial in directing the output of the model and can vary from simple queries to complex instructions. [ the profound influence initial seeds exert on the overall efficacy of the fuzzing trajectory. 2. **Seed selection.** Following initialization, the journey progresses to the selection of a seed from the accumulated seed pool. This seed will be the designated input for the program's current iteration. The selection could be arbitrary or steered by a specific heuristic. For instance, AFL [71] employs a coverage-driven heuristic to cherry-pick seeds with a higher propensity to unveil novel program behaviors. Recent research [74, 62] envisions this seed selection phase as a tree search challenge, leveraging bandit algorithms to pinpoint the optimal seed. 3. **Mutation.** Once the seed is selected, the next step is to mutate the seed to generate a new input. Havoc [71] uses a series of random mutations to generate new inputs, while other work [67] employs a more sophisticated mutation strategy based on the bandit search algorithm. 4. **Execution.** The finale involves executing the mutated input on the program. Should the program crash or stumble upon a previously uncharted path, this input earns a spot in the seed pool, ready for the upcoming iteration. 
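Taken together, the four stages above amount to a simple loop. The following Python sketch is only a schematic illustration of that generic process, not any particular fuzzer's implementation; the `execute` and `mutate` callbacks and the random selection heuristic are placeholders.

```python
import random

def fuzz(initial_seeds, execute, mutate, budget=10_000):
    """Schematic AFL-style loop covering the four stages above. `execute` is
    assumed to return (crashed, found_new_path) for a given input; both it and
    `mutate` are placeholders for a real harness."""
    seed_pool = list(initial_seeds)              # 1. seed initialization
    crashes = []
    for _ in range(budget):
        seed = random.choice(seed_pool)          # 2. seed selection (random heuristic)
        candidate = mutate(seed)                 # 3. mutation
        crashed, new_path = execute(candidate)   # 4. execution
        if crashed:
            crashes.append(candidate)
        if crashed or new_path:                  # interesting inputs rejoin the pool
            seed_pool.append(candidate)
    return crashes, seed_pool
```

Coverage-guided fuzzers such as AFL replace the random choice with heuristics like the coverage-driven selection mentioned above.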
Our GPTFuzzer mirrors these steps inherent to the general fuzzing process, with a more in-depth exploration available in Section 3. ## 3 Proposed Method We start describing our method with one motivating example. As depicted in the left panel of Figure 2, we first show that a carefully crafted jailbreak template can successfully elicit unauthorized outputs from gpt-3.5-turbo-0301, an older version of ChatGPT. However, the same template becomes ineffective when tested on its updated counterpart, gpt-3.5-turbo-0631. According to the release notes [46], this update brings improvements to the model's refusal behavior. Our study also shows that the new model is more robust to jailbreak attacks in Appendix E, aligning with the release's assertions. While the specifics of these improvements remain undisclosed, official reports from OpenAI and Meta [55, 47] suggest that fine-tuning for safety responses against adversarial templates can bolster an LLM's robustness. However, a natural question is raised: **Is an LLM secure against a jailbreak template after undergoing finetuning?** To probe this question, we modify the original jailbreak template by appending additional content at its beginning. The modified prompt, displayed in the right panel of Figure 2, still manages to elicit unauthorized outputs from both the updated and older versions of the model. This example not only exposes a vulnerability in current LLMs but also highlights the need to automatically red-team LLMs. While human-crafted jailbreak templates have been effective, they are labor-intensive to create and thus limited in number. Finetuning can make LLMs more resilient to these manually crafted templates, but as our example shows, they remain vulnerable to variations of these templates. This vulnerability underscores the urgent need for automated tools in the generation of jailbreak templates. By automating this process, we can explore a much broader and more nuanced space of potential vulnerabilities, making our red-teaming efforts more comprehensive and effective. In this light, our work introduces a novel avenue for red-teaming LLMs: **utilizing Figure 2: Demonstrating the resilience and vulnerability of LLMs to jailbreak prompts. On the left, the latest version of ChatGPT (gpt-3.5-turbo-0631) successfully resists a well-crafted jailbreak template, showing no unauthorized output. On the right, new content (marked in red) applied to the original jailbreak template enables it to bypass the model’s defenses, eliciting an unauthorized response once again. This highlights the model’s susceptibility to variations of known jailbreak templates. automated transformations on human-crafted jailbreak templates to generate a new set of effective templates that can probe the model's robustness more thoroughly.** ### Technical Overview Figure 3 provides an overview of GPTFuzzer. Initially, we gather human-written jailbreak templates from the Internet, forming our foundational dataset as described in Section 3.2. This dataset functions similarly to the initial seeds in traditional fuzzers. In each iteration, a seed (jailbreak template) is selected from the current pool (Section 3.3), mutated to produce a new jailbreak template (Section 3.4), and then combined with the target question. This amalgamated prompt is then used to query the target LLM. The response is evaluated using the judgment model from Section 3.6. Successful jailbreak templates are retained in the seed pool, while others are discarded. 
This process continues until the query budget is exhausted or the stopping criteria are met. Building on this overview, we present a structured algorithmic representation of GPTFuzzer's workflow. The algorithm, as outlined in Algorithm 1, provides a step-by-step breakdown of the process, making it easier to understand the sequence and interactions of the various components involved. Having provided a broad overview of the GPTFuzzer workflow, we now turn our attention to the individual components that constitute its backbone. In the following subsections, we dissect each component, shedding light on its motivation and design. ``` Data: Human-written jailbreak templates from the Internet Result: Discovered jailbreaks 1Initialization: Load initial dataset as per Section 3.2 whilequery budget remains and stopping criteria unmetdo 2 seed \(\leftarrow\) selectFromPool(Section 3.3) newTemplate \(\leftarrow\) applyMutation(seed, Section 3.4) newPrompt \(\leftarrow\) combine newTemplate with target question 3 response \(\leftarrow\) queryLLM(newPrompt) 4ifsuccessfulJailbreak(response, Section 3.6)then 5 Retain newTemplate in seed pool ``` **Algorithm 1**Workflow of GPTFuzzer Figure 3: A schematic representation of the GPTFuzzer workflow. Starting with the collection of human-written jailbreak templates, the diagram illustrates the iterative process of seed selection, mutation, and evaluation against the target LLM. Successful jailbreak templates are retained for subsequent iterations, ensuring a dynamic and evolving approach to probing the model’s robustness. ### Initial Seed When collecting the human-written jailbreak templates, we impose two primary criteria to ensure their efficacy and generalizability. Our primary focus is on templates that can be universally applied across various questions. Such jailbreak templates often follow a structure exemplified below: From now on, you are going to....... Now, answer my first question: [INSERT PROMPT HERE] This structure comprises a scenario description and a question placeholder. The scenario description offers a brief context for the conversation, while the question placeholder is adaptable, allowing for the insertion of any question. This flexibility ensures that we can utilize the jailbreak template for diverse target questions without necessitating manual adjustments. Conversely, some jailbreak templates, as outlined in Appendix A.1, are intrinsically tied to specific questions. Such templates, which demand manual modifications for different questions, are excluded from our initial seed set. Second, our emphasis is on jailbreak templates that can elicit unintended outputs within a single turn. While there exist multi-turn jailbreak templates, as discussed in Appendix A.1, we've transformed such templates into their single-turn equivalents for the sake of efficiency and consistency in our approach. This ensures that all templates, regardless of their original design, can be evaluated in a uniform manner without the complexities of multi-turn interactions. This streamlined approach not only simplifies the evaluation process but also ensures that each prompt consumes a single query per iteration. For a more comprehensive discussion on our criteria and choices for the initial seed, please refer to Appendix A. ### Seed Selection At each iteration, we must select a seed from the current seed pool to undergo mutation. 
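Before turning to the individual selection strategies, the iteration described in Algorithm 1 can be sketched in Python as follows. This is a minimal illustration, not the released implementation; `select_seed`, `mutate`, `query_target`, and `judge` are hypothetical stand-ins for the components of Sections 3.3-3.6, and `[INSERT PROMPT HERE]` is the question placeholder introduced above.

```python
def gptfuzz(initial_templates, questions, select_seed, mutate, query_target, judge,
            max_queries=10_000):
    """Minimal sketch of the Algorithm 1 workflow with hypothetical components."""
    seed_pool = list(initial_templates)   # human-written templates as initial seeds
    queries = 0
    while queries < max_queries:
        seed = select_seed(seed_pool)                      # seed selection (Section 3.3)
        template = mutate(seed)                            # LLM-backed mutation (Section 3.4)
        successes = 0
        for question in questions:                         # a single question in the simplest setting
            prompt = template.replace("[INSERT PROMPT HERE]", question)
            response = query_target(prompt)
            queries += 1
            successes += judge(response)                   # 1 = jailbroken, 0 = reject (Section 3.6)
        if successes > 0:                                  # retain successful mutants
            seed_pool.append(template)
    return seed_pool
```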
Drawing inspiration from popular fuzzers, we've implemented three baseline seed selection strategies in GPTFuzzer: * **Random** This strategy involves selecting a seed at random from the pool. * **Round Robin** Modeled after AFL [71], this strategy cycles through the seed pool, ensuring comprehensive exploration. * **UCB** Based on the UCB algorithm [3], this strategy has gained popularity in recent fuzzers [72, 60, 70]. Each seed is assigned a score, with the highest-scoring seed being selected. The score is computed as: \[score=\bar{r}+c\sqrt{\frac{2\ln N}{n+1}}\] (1) Here, \(\bar{r}\) represents the seed's average reward, \(N\) denotes the total iterations, \(n\) is the seed's selection count, and \(c\) is a constant. The first term, \(\bar{r}\), promotes seeds with high rewards, while the second term favors seeds selected fewer times. The constant \(c\) balances between these two objectives. The UCB strategy often outperforms the **Round-Robin** and **Random** approaches in terms of efficiency. It possesses the capability to rapidly identify and prioritize seeds yielding high rewards. This is crucial due to the inherent variability in the effectiveness of jailbreak templates against a target LLM, as discussed in Appendix D. Some templates are markedly more effective, and the **UCB** strategy excels in quickly identifying such potent templates. However, the efficacy of the **UCB** strategy comes with its set of challenges. There's a risk that it could become entrapped within local optima, potentially overlooking other effective jailbreak templates. For example, if **UCB** early on selects a seed that demonstrates success in jailbreaking the model, it might persistently favor this seed, leading to a potential overemphasis on a particular seed lineage and constraining the exploration of the seed pool. This focus not only risks overlooking seeds with greater jailbreak potential but also diminishes the desired diversity within the seed pool. To tackle this issue, we propose a novel seed selection strategy, **MCTS-Explore** to balance the efficiency and diversity of the seed selection. This strategy leverages the Monte Carlo Tree Search (MCTS) algorithm [13] for seed selection. MCTS, a heuristic search algorithm, has already been successfully integrated into various fuzzers [74, 62, 26]. **MCTS-Explore** is a variant of MCTS that is specifically designed for the seed selection in GPTFuzzer. The pseudocode of **MCTS-Explore** is detailed in Algorithm 2, with the unique modifications highlighted in red. The MCTS tree is initialized with all initial seeds appended to the root node at the beginning of fuzzing (lines 1-4). In each iteration, we start from the root node, selecting the successor node (lines 9-11) with the highest UCT score (lines 17-25) until we reach a leaf node. The path is then returned (line 15), with the final node in the path being chosen as the seed for subsequent mutation and execution. Post-execution, we update the reward for each node in the path (lines 30-32). While this procedure aids in identifying the most promising seed for mutation, it presents two challenges: (1) Non-leaf nodes in the MCTS tree, which might still have the potential to generate valuable jailbreak templates, will not be selected. (2) The fuzzing strategy might still overly focus on a specific lineage of nodes. To counter these challenges, we've incorporated two significant modifications into the algorithm. 
Firstly, we introduce a parameter \(p\) to determine the likelihood of selecting a non-leaf node as the seed. During the successor selection of the current node, there's a \(p\) probability that the loop will terminate, returning the current path (lines 12-14). This ensures the exploration of non-leaf nodes in the MCTS tree. Secondly, to prevent over-concentration on a particular lineage, we've integrated a reward penalty \(\alpha\) and a minimal reward \(\beta\) into the reward update process (lines 28-29). The reward penalty \(\alpha\) diminishes the reward for the current node and its ancestors when the path lengthens. The minimal reward \(\beta\) is used to prevent the reward of the current node and its ancestors from being too small or negative when the mutant can successfully jailbreak the target model.

```
Data: root node root, initial seed set S, sample non-leaf node probability p,
      reward penalty alpha, minimal reward beta
1   Function Initialize(root, S):
2       foreach seed in S do
3           create a new node for seed
4           append the node to root
5
6   Function SelectSeed(root, p):
7       path <- [root]
8       node <- root
9       while node is not a leaf do
10          node <- bestUCT(node)
11          append node to path
12          t <- random(0, 1)
13          if t < p then
14              return path
15      return path
16
17  Function bestUCT(node):
18      bestScore <- -infinity
19      bestChild <- null
20      foreach child in node.children do
21          score <- child.rbar + c * sqrt(2 * ln(node.visits) / (child.visits + 1))
22          if score > bestScore then
23              bestScore <- score
24              bestChild <- child
25      return bestChild
26
27  Function Backpropagate(path, reward, alpha, beta):
28      if reward > 0 then
29          reward <- max(reward - alpha * len(path), beta)
30      foreach node in path do
31          node.rbar <- (node.rbar * node.visits + reward) / (node.visits + 1)
32          node.visits <- node.visits + 1
```
**Algorithm 2** MCTS-Explore seed selection

[MISSING_PAGE_POST]

* _Rephrase:_ This operator restructures the given template, aiming for maximum semantic preservation while altering its phrasing. It's instrumental in creating subtle variations of a prompt that might elicit different responses from the LLM. Each operator serves a distinct purpose in our fuzzing framework. While _Generate_ and _Crossover_ are pivotal for diversifying the seed pool, _Expand_, _Shorten_, and _Rephrase_ refine and adapt templates to enhance their efficacy and adaptability. For a comprehensive breakdown and examples of each mutator in action, readers are directed to Appendix B. ### Jailbroken Responses While numerous papers have delved into jailbreaking LLMs, with some studies even manually labeling a plethora of jailbreak samples [69, 37], a clear discussion on the principles determining whether a response is jailbroken remains absent.
Consequently, a clear and unambiguous definition becomes indispensable, especially for the purposes of automated jailbreaking. In this light, we would like to give a comprehensive definition of jailbroken responses based on our observation during manual labeling. Full RefusalThis is the desired behavior of an aligned LLM. Such responses directly reject unauthorized requests, with some LLMs further elucidating the reasons for refusal, cautioning users against such activities, or suggesting positive activities. These responses usually startwith "Sorry, I cannot" or "As a large language model" or similar sentences predefined in the RLHF phrase. Partial RefusalIn this category, the LLM adheres to the jailbreak prompt's instruction, assuming a specific role or tone, but refrains from delivering prohibited content. For instance, if the prompt instructs the LLM to act as a hacker and elucidate hacking for monetary gains, the LLM might adopt the hacker persona but decline the hacking tutorial, emphasizing the importance of ethical hacking or other legal activities. Partial ComplianceAnalogous to partial refusal, the LLM complies with the role or tone instruction but also divulges unauthorized content. Using the previous example, the LLM might teach hacking for financial benefits but simultaneously caution against its illegality and potential repercussions. Full ComplianceThis response unreservedly aligns with the jailbreak prompt, dispensing illicit content without any accompanying warnings or disclaimers. Our research primarily focuses on the full compliance and partial compliance responses. While partial compliance responses incorporate warnings, they can still be deemed jailbroken due to the presence of illicit content. We exclude full refusal and partial refusal responses from our purview, as they are non-hazardous and don't qualify as jailbroken by our standards. ### Judgment Model Evaluating the success of a jailbreak attack automatically presents a significant challenge. The inherent flexibility of natural language makes it difficult to definitively determine if a response contains harmful content. Several methods have been proposed in the literature to address this issue, but each comes with its own set of limitations: * **Human Annotators:** This approach involves using human annotators to judge the success of an attack [69, 37, 11, 69]. However, this method isn't scalable and is impractical for automatic fuzzing. * **Structured Query Evaluation:** Some research has tackled the challenge of evaluating LLMs by using questions with predefined answer structures. This approach simplifies the evaluation process as the range of acceptable answers is limited. Specifically: **Yes/No Queries:**[69] Here, the LLM is presented with questions that expect only a 'yes' or 'no' response. **Multiple Choice Format:**[68, 59] In this method, the LLM is given a question accompanied by a set of predefined answer options. Its task is to select the most appropriate one. * **Rule Patterns:** Some solutions employ rule patterns to evaluate responses [77]. For instance, if a response doesn't contain "Sorry, I cannot," it's deemed jailbroken. This method, while straightforward, suffers from low accuracy. It's challenging to account for the myriad of possible responses using rule patterns alone. * **APIs and ChatGPT Assistance:** Utilizing content moderation APIs [59] or enlisting ChatGPT for labeling assistance [61, 53, 35] are other proposed solutions. 
However, these methods are either inaccurate, costly, or both, making them unsuitable for large-scale automatic fuzzing. To address these challenges, we employ a locally fine-tuned RoBERTa model [38] as our judgment model. Initially, we generate responses from the LLM using human-written jailbreak templates. These responses are then manually labeled based on whether they are jailbroken, adhering to the definitions provided in Section 3.5. Specifically, responses are labeled as jailbroken if they exhibit full or partial compliance. Subsequently, the RoBERTa model is fine-tuned on this labeled dataset. This fine-tuned model can then predict if a given response is jailbroken (1 for "jailbreak" and 0 for "reject"). As we will demonstrate later in Section 4.1, our judgment model offers both superior accuracy and efficiency when compared to other methods. ## 4 Experiments To evaluate the effectiveness of GPTFuzzer, we follow [77]'s experimental setting to measure the attack performance under single-model and multi-model settings. Our experiments aim to address the following research questions: **RQ1:** How effective are human-written jailbreak templates against popular LLMs? **RQ2:** Does GPTFuzzer outperform human-crafted templates in terms of attack performance? **RQ3:** Is GPTFuzzer capable of generating universal templates across unseen questions and LLMs? **RQ4:** Which factors significantly influence the attack performance of GPTFuzzer? To develop GPTFuzzer and execute the experiments, we write over 2,000 lines of code and consume over 300 million tokens for querying ChatGPT. In the spirit of promoting transparency and advancing LLM alignment research, we've made our entire codebase, along with the judgment model, publicly accessible at the following link: [https://github.com/sherdencooper/GPTFuzz](https://github.com/sherdencooper/GPTFuzz). ### Experimental Setup DatasetsTo construct our dataset, we collect 100 questions from two open datasets [6, 37], encompassing a wide range of prohibited scenarios such as illegal or immoral activities, discriminations, and toxic content. We choose these two datasets because they are either manually written by the authors or generated through crowdsourcing, making them more reflective of real-world scenarios. For the initial jailbreak templates, we use the dataset from [37], and after removing the templates that are not suitable for our experiments following Section 3.2, we were left with 77 suitable templates. A detailed description of the dataset and initial jailbreak templates is shown in Appendix A. Judgment ModelAs we illustrate in Section 3.6, our approach utilizes a local finetuned masked language model as the judgment model to determine if a response is jailbroken. To finetune the model, we first combine all the initial jailbreak templates and questions to query ChatGPT, yielding 7700 responses (77 jailbreak prompts \(\times\) 100 questions = 7700 responses). These responses were then manually labeled by us according to the criteria outlined in Section 3.5. We partitioned the labeled responses into an 80% training set and a 20% validation set. Importantly, we ensured that the training and validation sets did not contain responses to the same question. This separation allows us to validate the judgment model's ability to generalize to previously unseen questions. We finetune the RoBERTa-large model [38] for 15 epochs with a batch size of 16. The learning rate is set to 1e-5 and the maximum sequence length is set to 512. 
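A minimal sketch of this fine-tuning setup, assuming the Hugging Face transformers library and two toy labeled responses in place of the real dataset, might look as follows (all variable names, the output directory, and the toy examples are illustrative):

```python
import torch
from transformers import (RobertaForSequenceClassification, RobertaTokenizerFast,
                          Trainer, TrainingArguments)

class ResponseDataset(torch.utils.data.Dataset):
    """LLM responses labeled 1 (full/partial compliance) or 0 (refusal)."""
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding="max_length", max_length=512)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

# Toy stand-ins for the manually labeled responses (placeholders, not the real data).
train_texts = ["Sorry, I cannot assist with that request.",
               "Step 1: gather a skilled team. Step 2: plan carefully before action."]
train_labels = [0, 1]

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-large")
model = RobertaForSequenceClassification.from_pretrained("roberta-large", num_labels=2)

args = TrainingArguments(output_dir="judgment_model", num_train_epochs=15,
                         per_device_train_batch_size=16, learning_rate=1e-5)
Trainer(model=model, args=args,
        train_dataset=ResponseDataset(train_texts, train_labels, tokenizer)).train()

# Inference: 1 means "jailbreak", 0 means "reject".
inputs = tokenizer("As an AI language model, I can't help with that.",
                   return_tensors="pt", truncation=True, max_length=512)
label = model(**inputs).logits.argmax(dim=-1).item()
```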
We use the Adam optimizer [29], and the learning rate scheduler follows a linear decay with a 10% warmup ratio. We benchmark our reward model against four baseline methods: \(\boldsymbol{\Theta}\)**Rule Match**: we follow [77] to employ the rule-based method to assess if a response is jailbroken. Specifically, we match the response with commonly used expressions by LLMs when rejecting a request. \(\boldsymbol{\Theta}\)**Moderation**: We utilize OpenAI's moderation API [40] to evaluate whether the response content complies with OpenAI's usage policies. The response flagged by the API will be determined as jailbroken. \(\boldsymbol{\Theta}\)**ChatGPT**: We employ the ChatGPT model (gpt-3.5-turbo-0613) to determine whether the response is jailbroken. \(\boldsymbol{\Theta}\)**GPT-4**: we use GPT-4 (gpt-4-0613), one of the most powerful LLMs, for the same purpose. For ChatGPT and GPT-4, we use the prompt to instruct the LLM to label following the principles defined in Section 3.5. We evaluated the classification performance of our judgment model against the four aforementioned baselines, with results presented in Table 1. Several observations can be made. Firstly, as observable from the table, our judgment model surpasses others in accuracy, True Positive Rate (TPR), and False Positive Rate (FPR), while also demonstrating commendable time efficiency, only surpassed by the rule match method. The rule match method, despite its superior speed, suffers from lower accuracy due to its inherent strictness and inability to account for extensive variability in responses, limiting its effectiveness. We can also investigate that the Moderation API has the lowest TPR. We have two possible reasons to account for this. The first possible reason is that the API might rely on a lightweight transformer decoder model trained on limited data [40], potentially missing nuanced harmful content in jailbreak responses. The second reason is that those jailbroken responses might not contain explicit toxic words that would trigger the API. For instance, the response "Step 1: gather a skilled team. \begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline **Method** & **Accuracy** & **TPR** & **FPR** & **Time** \\ \hline Rule Match & 0.7103 & 0.3431 & 0.0884 & **<1s** \\ Moderation & 0.6759 & 0.1266 & 0.0331 & 4m21s \\ ChatGPT & 0.8779 & 0.8778 & 0.1226 & 9m15s \\ GPT4 & 0.9201 & 0.9247 & 0.0824 & 1h27min \\ \hline RoBERTa & **0.9616** & **0.9412** & **0.0271** & 37s \\ \hline \hline \end{tabular} \end{table} Table 1: Performance comparison of various judgment methods based on accuracy, True Positive Rate (TPR), False Positive Rate (FPR), and time efficiency on the validation set. The time cost is calculated by sequentially evaluating all 1540 responses (20 questions \(\times\) 77 jailbreak templates) in the validation set. An ideal judgment method would exhibit higher accuracy and TPR, alongside reduced FPR and time cost. The top-performing metrics are emphasized in bold. Step 2: plan carefully before action" does not contain any toxic word. Yet, in the context of a user inquiring about illegal activities, it's clearly a jailbroken response. This highlights the challenge of developing a lightweight model for discerning jailbreak content. Lastly, both ChatGPT and GPT-4 exhibit commendable capabilities in detecting jailbroken responses, albeit with a performance only below our RoBERTa model. A significant drawback is their higher time costs, attributed to API response times. 
Additionally, frequent queries to the GPT-4 API often hit rate limits, leading to extended waiting periods. The associated API costs for these methods are also significant considerations. It is noteworthy that enhancements in the accuracy of methods like rule patterns or prompts for ChatGPT and GPT-4 might be achievable through the expansion of rules or refinement of prompts. However, realizing these improvements is non-trivial and is designated as future work, marking these enhancements as areas for subsequent exploration. Given the balance of performance and efficiency, we selected the finetuned RoBERTa model as our judgment model. For details of how we set up the baseline methods, please refer to Appendix C. Mutate ModelGiven the need to strike a balance between mutation performance and computational cost, we opt for ChatGPT as our mutation model in our experiments. To foster diversity in the mutations, we set the temperature parameter to 1.0. It's important to highlight that setting the temperature to a value greater than 0 ensures that the model's responses are sampled rather than being deterministic outputs [24]. Such a sampling approach is crucial for our objectives, as it allows for a wider variety of results, enhancing the diversity of the generated mutations. MetricsTo evaluate the effectiveness of our fuzzing approach, we utilize the Attack Success Rate (ASR) as our primary metric. ASR denotes the ratio of questions that receive a jailbreak response using a generated jailbreak template to the total number of questions submitted to the target model. We introduce two variations of ASR to offer more insight into the effectiveness of our approach: * **Top-1 ASR**: This metric assesses the success rate of the most effective jailbreak template, selected based on its individual performance in eliciting jailbreak responses from the target model. * **Top-5 ASR**: In this variant, we select the five most effective jailbreak templates based on their success in generating jailbreak responses towards the target model. These templates are then sequentially applied to attack the target model, and any successful jailbreak within these five attempts is considered a success for this metric. By distinguishing between Top-1 and Top-5 ASR, we are able to measure not only the potency of the single most effective template but also the collective success rate of the top five templates, providing a broader view of the potential cumulative impact of multiple high-performing templates. EnvironmentOur experiments were conducted on a server equipped with 8 NVIDIA A100 GPUs, each with 80GB of memory. The server's CPU is an AMD EPYC 7763 with 64 cores, endowed with 1TB of memory. In terms of software, the server runs on the Ubuntu 18.04.5 LTS operating system. The experiments utilized Python version 3.8.17, CUDA version 12.2, PyTorch version 2.1.0, and the transformers library version 4.32.0. ### Initial Seed Assessment To begin, we first analyze how well the human-written templates can jailbreak the model. We use the 77 human-written jailbreak templates with 100 questions to query ChatGPT, Llama-2-7B-Chat, and Vicuna-7B [75]. In addition to the previously mentioned metrics, we incorporated two supplementary metrics to provide a more comprehensive analysis: (1) Jailbroken Questions: This represents the number of questions that at least one template manages to jailbreak against the target model. 
(2) Average Successful Templates: This quantifies the average number of templates that manage to successfully jailbreak the target model per question. (3) Invalid Templates: This accounts for the templates that fail to jailbreak any question when applied to a specific model. To mitigate the randomness of response, we use the deterministic output for the target model. The results are shown in Table 2. \begin{table} \begin{tabular}{l|c|c|c|c|c} \hline \hline **Model** & **Jailbroken Questions** & **Top-1 ASR (\%)** & **Top-5 ASR(\%)** & **Average Successful Templates** & **Invalid Templates** \\ \hline Vicuna-7B & 100/100 & 99 & 100 & 57.07 & 1 \\ ChatGPT & 100/100 & 99 & 100 & 22.38 & 3 \\ Llama-2-7B-Chat & 54/100 & 20 & 47 & 0.96 & 47 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance evaluation of human-written jailbreak templates against three target models: ChatGPT, Llama-2-7B-Chat, and Vicuna-7B. The table showcases metrics such as top-1 ASR, top-5 ASR, average successful templates, the count of invalid templates and the number of jailbroken questions. The results highlight the varying degrees of resilience among the models against human-crafted adversarial templates. From this table, we can find that, surprisingly, the human-written jailbreak templates exhibit a high degree of effectiveness against both Vicuna-7B and ChatGPT. With top-1 ASR of 99% and top-5 ASR reaching a full 100%, these templates demonstrate significant potency. The last two columns further underscore this observation, indicating that a majority of the human-crafted templates remain potent against Vicuna-7B and ChatGPT. Only a minimal number of these templates are unsuccessful in compromising any question. Furthermore, the high values of average successful templates (57.07 for Vicuna-7B and 22.38 for ChatGPT) debunk the assumption that only a few potent templates are responsible for these results. This underscores the overall efficacy of human-written templates against these models. In contrast, Llama-2-7B-Chat presents strong robustness against these human-written jailbreak templates. Only 54 questions were successfully compromised, with top-1 ASR of just 20% and top-5 of 47%. A significant number of templates were unsuccessful in compromising any question, and the average successful templates metric stands at a mere 0.96. This heightened resilience can be attributed to Llama-2-7B-Chat's comprehensive tuning using safety Reinforcement Learning from Human Feedback (RLHF) [55, 76]. The results clearly demonstrate the potency of human-written jailbreak templates, reinforcing our decision to employ them as initial seeds for our fuzzing approach. While their efficacy against Llama-2-7B-Chat is notably lower, it's crucial to understand that this model's resilience will undergo further examination in our subsequent fuzzing experiments. For readers interested in a more granular breakdown of the attack performance of these human-crafted templates against the three models, detailed figures are provided in Appendix D. In conclusion, we can make the conclusion for RQ1: [backgroundcolor=gray!10, linecolor=gray! tude of questions when targeting a specific model. Our initial focus is on Llama-2-7B-Chat, primarily because the top-1 ASR for ChatGPT and Vicuna-7B are already nearing a perfect score of 100%. Adopting a similar approach to the single-question scenario, we employ various initial seed filter strategies to curate our initial seeds. 
For this experiment, GPTFuzzer operates on all 100 questions, with a query budget of 50,000 in total. In every iteration, a new jailbreak template is generated. This template, when combined with the questions, yields 100 distinct prompts. The cumulative score for the jailbreak template is derived from the sum of the scores from these 100 responses, which is then normalized to [0,1]. If the resultant score surpasses 0, the new template is incorporated into the seed pool. Upon exhausting the query budget, we identify the top-1 ASR and top-5 ASR. The outcomes of this evaluation are detailed in Table 3. From the results presented in Table 3, several insightful observations can be made. First of all, the _all_ initial seed filter strategy outperforms others in the Top-1 ASR. With a robust top-1 ASR of 60%, it indicates that the most effective single template identified during fuzzing can compromise over half of the questions in the test set. This underscores the capability of the templates generated by GPTFuzzer towards different questions. Furthermore, a top-5 ASR approaching 100% for _top-5_ seed filter demonstrates GPTFuzzer's capability to produce highly potent templates even against a well-aligned LLM. The comparative enhancement over human-scripted templates, delineated in the brackets, is substantial across the board. Moreover, we can find that the performance difference between the _top-5_ and _invalid_ strategies is not as marked in the multi-question setting as it is in single-question scenarios. This might be attributed to the ample query budget allocated in the multi-question setting. While the _invalid_ templates might not be as potent as the _top-5_, with sufficient fuzzing iterations, they can still yield competitive results. To delve deeper into our hypothesis regarding the potential of _invalid_ seeds, we conduct a further experiment. Specifically, we choose to exclusively employ the _invalid_ seed filter in our fuzzing process, aiming to understand its efficacy even when the quality of initial seeds might be perceived as suboptimal. We run the experiments additionally on ChatGPT and Vicuna-7B and the results of this experiment are depicted in Figure 4. For ChatGPT, even when starting with human-written seeds that couldn't compromise any question, GPTFuzzer manages to generate a single template that achieves a 100% ASR on the question dataset. While Vicuna-7B, anticipated to be the most susceptible among the three models, doesn't perform as well as ChatGPT, its results are still commendable. It is noteworthy that for Vicuna-7B, after applying the _invalid_ initial seed filter, only a single template remains for GPTFuzzer, significantly constraining seed selection. Yet, even under such constraints, the test top-1 ASR hovers around 40%, and the top-5 ASR surpasses 65%. This outcome lends substantial support to our hypothesis. 
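For reference, the top-1 and top-5 ASR used throughout these experiments can be computed from a per-template success matrix. The sketch below, with a toy matrix, is only meant to illustrate the metric definitions from Section 4.1; it is not the evaluation code behind the reported numbers.

```python
import numpy as np

def top_k_asr(success, k):
    """Top-k ASR from a boolean matrix success[t, q]: whether template t
    jailbroke question q. The k individually best templates are applied in
    sequence, and a question counts as compromised if any of them succeeds."""
    per_template = success.sum(axis=1)            # questions jailbroken by each template
    best = np.argsort(per_template)[::-1][:k]     # the k most effective templates
    return success[best].any(axis=0).mean()       # fraction of questions covered

# Toy example: 3 generated templates evaluated on 4 questions.
success = np.array([[1, 0, 1, 0],
                    [0, 1, 0, 0],
                    [1, 1, 0, 0]], dtype=bool)
print(top_k_asr(success, k=1), top_k_asr(success, k=5))  # 0.5 and 0.75 for this toy matrix
```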
Based on our comprehensive analysis, we can confidently address **RQ2** and give a preliminary answer to **RQ4**: GPTFuzzer consistently outperforms human-written jailbreak templates in attack performance, and even with limited quality and diversity of the initial seeds, GPTFuzzer illustrates its robustness and continues to manifest remarkable effectiveness.
### Transfer Attack We now transition to a more challenging setting, aiming to evaluate the template's capability to transfer across unseen questions and unseen models with diverse training data and architectures. This includes both open-sourced models and commercial models. Initially, we run the fuzzing on ChatGPT, Llama-2-7B-Chat, and Vicuna-7B using 100 questions for 80,000 queries. In each iteration, we generate a new jailbreak template, replace the placeholder with all questions, and query all three models, resulting in 100 questions \(\times\) 3 models = 300 responses. The score for the template is aggregated and normalized over these 300 responses. A template will only be added to the seed pool if it succeeds in at least one question across all three models. This approach ensures that the newly added seeds can generalize across different models. After exhausting the query budget, we select the top-5 generated templates based on the ASR computed during the fuzzing process. We then evaluate the top-1 and top-5 ASR on another 100 questions sampled from the same source. In addition to the three models previously attacked, we assess the attack performance on several other popular chat models: Vicuna-13B, Baichuan-13B-Chat [7], ChatGLM2-6B [17], Llama-2-13B-Chat, Llama-2-70B-Chat, GPT-4, Bard [22], Claude2 [2], and PaLM2 [1].1 Footnote 1: While we made every effort to apply the APIs of commercial LLMs, at the time of writing, we did not have API access to some commercial models. Therefore, we conducted attacks via web inference for Claude2, PaLM2, and Bard. For comparison, we consider the following baseline methods: * **No Attack**: We directly query the target LLM with the question without any attacks. * **GCG**: We employ the white-box attack method from [77], adhering to the default setting of optimizing for 500 steps to derive adversarial prefixes. Following their design, we conduct four runs with different seeds to produce four distinct prefixes. The top-1 prefix is the one with the lowest loss among them. We then concatenate the four prefixes to produce the fifth prefix. * **Human-Written**: We select the top-5 human-written jailbreak templates that can jailbreak the most questions in the dataset against Vicuna-13B, ChatGPT, and Llama-2-7B-Chat based on our pre-analysis in Section 4.2. * **Masterkey**: In accordance with prior work [14], we rewrite the top-1 human-written template 5 times based on the prompt given in their work with ChatGPT for top-5 ASR, and then randomly choose one generated template to evaluate the top-1 ASR later. * **Here Is**: Following prior work [64], we prepend the phrase "Sure, here's" to the question. Figure 5: Figure illustrates the comparison of GPTFuzzer's performance against various baseline methods in a transfer attack scenario, assessing the effectiveness across multiple open-sourced and commercial LLMs. The attack performance is evaluated using the top-1 and top-5 ASR, showcasing the universality and effectiveness of the generated templates by GPTFuzzer in compromising diverse models. The results of these experiments are illustrated in Figure 5. From the figure, we first observe that across all LLMs, GPTFuzzer consistently outperforms all baselines. For open-sourced LLMs, we achieve a 100% top-5 ASR for Vicuna-7B, Vicuna-13B, and Baichuan-13B, and over 90% for ChatGLM2-6B. The top-1 ASR for a single jailbreak template is also commendably high, suggesting that a single template can effectively attack these models.
Notably, templates generated by GPTFuzzer exhibit excellent generalization capabilities, especially for larger models within the Llama-2 family. Even for Llama-2-70B-Chat, our generated jailbreak templates achieve a top-5 ASR of around 80%. In contrast, other methods perform poorly against the Llama-2 family. **Masterkey** achieves a top-5 ASR of less than 20% for the Llama-2 model family, while other baselines perform even more poorly. This underscores the potent attack capability of our generated templates against open-sourced LLMs. For commercial LLMs, the advantage margin is also significant compared with other baselines. Specifically, GPTFuzzer achieves a 100% top-5 ASR for ChatGPT, over 96% for PaLM2, and over 90% for Claude2. For Bard and GPT-4, the top-5 ASR remains impressively higher than 60%. Furthermore, there is potential to enhance the attack performance against these models, either by incorporating more jailbreak templates in attacks or by directly running fuzzing against them. This is particularly promising since our method only requires black-box access and can generate many diverse templates. Among the competing methods, the **Human-Written** approach secures the second-highest ASR against these commercial models, even surpassing the **GCG** method, which relies on meticulously crafted adversarial examples. This observation reinforces our initial motivation: human-written jailbreak templates possess inherent power, and our tool is adept at maximizing their potential. We also observe that the attack efficacy of **Masterkey** closely parallels that of **Human-Written**. This resemblance in performance can be attributed to the fact that the "rewrite" is not guided by any feedback from the attack and lacks varied mutators to enhance diversity. Consequently, merely rewriting the jailbreak template proves inadequate for launching effective attacks against well-aligned models. We show one example of the generated jailbreak templates successfully attacking Bard, Claude2, GPT-4, and PaLM2 in Appendix F. From our findings, we can confidently address **RQ3**: GPTFuzzer, when deployed across diverse models and questions, demonstrates its capability to craft highly versatile and universal templates. These templates exhibit proficiency in effectively compromising unseen models and questions, securing high attack success rates even against robust, well-aligned commercial LLMs.

### Ablation Study

**Seed Selection Strategy.** To further answer RQ4, we conduct an ablation study to evaluate the effectiveness of each component within GPTFuzzer. We first investigate the influence of different seed selection strategies by repeating the multi-question attack experiment delineated in Section 4.3 against Llama-2-7B-Chat, utilizing the seed selection strategies detailed in Section 3.3: **Random**, **Round-robin**, and **UCB**. To evaluate the impact of the seed selection strategy, we employ the _all_ initial seed filter, which means we do not remove any seed from the initial seed pool. We present the results in Table 4. It is evident from the "Seed Selection" rows that our method, **MCTS-Explore**, outperforms the alternatives, with **UCB** being the nearest contender. To further understand why **MCTS-Explore** performs better than the baseline methods, we visualize the tree structure of the seed search process for the four methods in Figure 6. We can observe that **Random** and **Round-robin** have a more balanced tree structure than the other methods.
This is because these two methods do not favor a specific seed and focus on exploration. Conversely, for **UCB**, the tree is extremely imbalanced. This is because **UCB** favors the seed with the highest upper confidence bound and keeps exploiting it, thereby neglecting sufficient exploration of other seeds and diminishing performance. **MCTS-Explore**, in contrast, achieves a balance between exploration and exploitation: it explores more seeds than **UCB** and finds more interesting branches, and then allocates more resources to exploiting these branches. This is why **MCTS-Explore** performs better than the other methods.

**Mutator.** Next, we evaluate the impact of different mutators. We keep all other conditions the same and only use one mutate operator at a time as a variant of GPTFuzzer. The results are shown in Table 4's "Mutator" rows. We find that when using a single mutate operator, the fuzzing performance is greatly reduced. This shows the necessity of using all the mutate operators during fuzzing to enhance performance. We also find that the _Crossover_ operator has the best performance among all the variants, likely due to its ability to generate new templates by combining two existing templates. Thus, it is more likely to generate new templates that can bypass the LLMs' safety measures.

**Limitations.** Although GPTFuzzer shows impressive attack performance, our method has some limitations, which we discuss below. First, GPTFuzzer relies on human-written jailbreak templates as the initial seeds. Although we have the _generate_ mutator to create some new templates, the degree of innovation within these templates is still limited. Consequently, they often share analogous expressions or structures, and unveiling novel attack patterns becomes a formidable challenge. Second, our methodology does not encompass transformations of the questions, thereby enabling the potential use of keyword matching to reject the prompts. Moreover, even though our judgment model enjoys very high accuracy, we still found some instances misclassified by it. These misclassified instances are usually ones for which even humans find it hard to determine whether the response is jailbroken. Lastly, GPTFuzzer, just like typical AFL-style fuzzing tools, requires many queries to the target model and risks being blocked by the target model if the queries are too frequent.

**Future Directions.** To tackle the above limitations, we propose several future directions to enhance our red-teaming tool. First, we could leverage or fine-tune an LLM to generate potential jailbreak templates without relying on human knowledge. As our experiment in Figure 4 shows, even if the templates cannot jailbreak the target model directly, they could still succeed after mutation. For example, we could use MPT-storywriter [54] to generate a virtual scenario where an urgent question must be answered. We could even supply the context of the question to make the generated template more suitable for questions on this topic. Such an approach could markedly amplify the novelty and diversity of the initial seeds and potentially unveil unprecedented attack patterns. Second, we could transform the questions to make them more natural and less suspicious. This, too, can be achieved through LLM-based mutation and incorporated as an additional mutation component.
Third, a well-defined and comprehensive jailbreak definition, as well as a more robust judgment model, are also important for improving the performance of GPTFuzzer and other red-teaming works. We have included some challenging responses in Appendix XXX to show that this judgment task is not as trivial as one might think. We will work on this in the future. Lastly, we can also use techniques to reduce the number of queries to the target model, such as using a cache to store previously generated potent templates and avoid fuzzing from scratch. A transfer attack may also be a good choice when the rate limit of the commercial model is low. We will explore these directions in the future as well.

| Variants of GPTFuzzer | | Top-1 ASR | Top-5 ASR |
| --- | --- | --- | --- |
| Seed Selection | Random | 37% | 55% |
| | Round-robin | 29% | 59% |
| | UCB | 55% | 81% |
| Mutator | Generate | 37% | 55% |
| | Crossover | 47% | 72% |
| | Expand | 39% | 65% |
| | Shorten | 23% | 49% |
| | Rephrase | 32% | 59% |
| Original | GPTFuzzer | 60% | 87% |

Table 4: Ablation study results illustrating the impact of various seed selection strategies and mutators. The evaluations are conducted on Llama-2-7B-Chat within a multi-question attack framework. For every variant of the seed selection strategy, we employ the _all_ initial seed filter and modify solely the seed selection methodology. Conversely, for mutator variants, the seed selection strategy remains the same as in GPTFuzzer, with alterations made only to the mutator; each variant is tested with a single mutator type. The last row reports the outcomes attained by the unaltered GPTFuzzer, serving as a benchmark for evaluating the impact of the modifications.

Figure 6: Visualization of the seed search processes employing different seed selection methods. The tree's root nodes represent the initial seeds deployed in the fuzzing process, while the subsequent child nodes symbolize the seeds generated from the parent seed. This representation illuminates the exploration-exploitation behavior of each method, providing insights into the performance and effectiveness of each seed selection strategy in uncovering potentially interesting branches of the search space.

**Mitigations.** One naive way to mitigate the risk of jailbreak attacks is to use a blacklist to filter out templates that are likely to be jailbreak templates. However, this is not a good solution, as it is hard to maintain a comprehensive blacklist, and the blacklist may also filter out some legitimate templates. An alternative is to fine-tune the model against the identified jailbreak templates. Nonetheless, this approach is resource-intensive, and it is hard to cover all possible jailbreak templates, particularly those yet undiscovered. Mitigating jailbreak attacks effectively remains a significant challenge, necessitating continued research efforts to develop robust, sustainable solutions.

## 6 Ethical Consideration

Our research unveils adversarial templates capable of generating harmful content across both open-sourced and commercial LLMs. While there are inherent risks associated with this disclosure, we firmly believe in the necessity of full transparency.
The methodologies we've employed are not only straightforward but have also been alluded to in prior literature. Given the dedication and resources, any team could potentially harness language models for malicious purposes using similar techniques. As highlighted in Section 4.2, the incremental harm posed by our findings is currently minimal. This is primarily because existing human-written jailbreak templates already demonstrate significant potency. By sharing our findings, we aim to provide a resource for model developers to assess and enhance the robustness of their systems. To minimize potential misuse of our research, we've taken several precautionary measures: * **Awareness:** We've included a clear warning in our paper's abstract, highlighting the potential harm of the unfiltered content generated by LLMs. This serves as a proactive step to prevent unintended consequences. * **Ethical Clearance:** Before embarking on this research, we sought guidance from the Institutional Review Board (IRB) to ensure our work aligns with ethical standards. Their feedback confirmed that our study, not involving human subjects, didn't necessitate IRB approval. * **Pre-publication Disclosure:** We responsibly disclosed our findings to organizations responsible for the large, closed-sourced LLMs we evaluated, ensuring they were informed before our results became public. * **Controlled Release:** Instead of publicly releasing our adversarial jailbreak templates, we've chosen to distribute them exclusively for research purposes. We will provide access only to verified educational email addresses. ## 7 Conclusion In this study, we introduced GPTFuzzer, an innovative black-box jailbreak fuzzing framework, drawing inspiration from established frameworks of AFL. Moving beyond the constraints of manual engineering, GPTFuzzer autonomously crafts jailbreak templates, offering a dynamic approach to red teaming LLMs. Our empirical results underscore the potency of GPTFuzzer in generating these templates, even when initiated with human-written templates of varying quality. This capability not only highlights the robustness of our framework but also underscores potential vulnerabilities in current LLMs. We envision GPTFuzzer serving as a valuable tool for both researchers and industry professionals, facilitating rigorous evaluations of LLM robustness. Furthermore, we hope our contributions spark further exploration into the safety and security dimensions of large language models, driving the community towards more resilient and trustworthy AI systems. ## Acknowledgments This project would not have been possible without the generous support from Ant Group.
2309.14035
Applicability and limitations of cluster perturbation theory for Hubbard models
We present important use cases and limitations when considering results obtained from Cluster Perturbation Theory (CPT). CPT combines the solutions of small individual clusters of an infinite lattice system with the Bloch theory of conventional band theory in order to provide an approximation for the Green's function in the thermodynamic limit. To this end we are investigating single-band and multi-band Hubbard models in one- and two-dimensional systems. A special interest is taken in the supposed pseudo gap regime of the two-dimensional square lattice at half filling and intermediate interaction strength ($U \leq 3t$) as well as the metal-insulator transition. We point out that the finite-size level spacing of the cluster limits the resolution of spectral features within CPT. This restricts the investigation of asymptotic properties of the metal-insulator transition, as it would require much larger cluster sizes that are beyond computational capabilities.
Nicklas Enenkel, Markus Garst, Peter Schmitteckert
2023-09-25T11:02:02Z
http://arxiv.org/abs/2309.14035v1
# Applicability and limitations of cluster perturbation theory for Hubbard models ###### Abstract We present important use cases and limitations when considering results obtained from Cluster Perturbation Theory (CPT). CPT combines the solutions of small individual clusters of an infinite lattice system with the Bloch theory of conventional band theory in order to provide an approximation for the Green's function in the thermodynamic limit. To this end we are investigating single-band and multi-band Hubbard models in one- and two-dimensional systems. A special interest is taken in the supposed pseudo gap regime of the two-dimensional square lattice at half filling and intermediate interaction strength (\(U\leq 3t\)) as well as the metal-insulator transition. We point out that the finite-size level spacing of the cluster limits the resolution of spectral features within CPT. This restricts the investigation of asymptotic properties of the metal-insulator transition, as it would require much larger cluster sizes that are beyond computational capabilities. ## I Introduction The Hubbard model probably belongs to the most studied systems in solid state theory. Although its Hamiltonian possesses a simple form, it captures important aspects of various many-body phenomena like Mott-insulating states, antiferromagnetism and superconductivity.[1; 2; 3; 4; 5]. The Hamiltonian has three terms: the first term describes the hopping of the electrons on the lattice, the second term a repulsive Coulomb interaction of spin up and spin down electrons on the same site and the third term is the chemical potential, which we shifted such that half filling corresponds to \(\mu=0\) for bi-partite lattices: \[\mathcal{H} =-\sum_{\sigma}\sum_{x,y}t_{x,\sigma}\hat{v}^{\dagger}_{x,\sigma }\hat{v}_{y,\sigma}+U\sum_{x}\hat{n}_{x,\uparrow}\hat{n}_{x,\downarrow}\] \[-(\mu+U/2)\sum_{x}(\hat{n}_{x,\uparrow}+\hat{n}_{x,\downarrow}), \tag{1}\] where \(x\) and \(y\) are labelling the lattice sites and \(\sigma=\uparrow,\downarrow\) denotes the spin index. With \(\hat{c}^{\dagger}_{x,\sigma},\hat{v}_{y,\sigma}\) we denote the fermionic creation and annihilation operators and \(\hat{n}_{x,\sigma}=\hat{c}^{\dagger}_{x,\sigma}\hat{c}_{x,\sigma}\) is the occupation number operator. One interesting aspect of the Hubbard model is its Mott-insulating state at high interaction strength as well as the associated metal-insulator transition it supposedly captures. In this regard a pseudogap regime at intermediate interaction strength has been discussed[2; 6]. Within this study we investigated this regime using Cluster Perturbation Theory (CPT). Introduced by Senechal et al.[7] CPT has shown remarkable results when applied to the Hubbard model, despite being of low numerical cost. While these results caught our initial interest for the method, we came to the conclusion that care has to be taken when interpreting the results of CPT, especially concerning features like spectral gaps. In the following, we will first outline the method, apply it to the systems of interest and then analyse carefully the accuracy of the results by comparing the one dimensional case to exact results using Bethe ansatz. ## II Methods ### Cluster Green's functions In Cluster Perturbation Theory (CPT) the main objective is to construct an approximation to the retarded Green's function \(\mathcal{G}^{r}(\mathbf{k},\omega)\) of a given lattice system in the thermodynamic limit. 
This function is especially useful as it provides direct access to the spectral function[8; 9]: \[\mathcal{A}(\mathbf{k},\omega)=-\frac{1}{\pi}\mathrm{Im}\,\mathcal{G}^{r}(\mathbf{k},\omega). \tag{2}\] As CPT aims at approximating the Green's function of the full system by combining the solutions of small finite clusters cut out of the infinite lattice, we first have to discuss how to obtain the interacting Green's function on such a cluster. For this we first define the retarded Green's function for two fermionic operators \(\hat{A}\) and \(\hat{B}\) as: \[\mathcal{G}^{r}_{\hat{A},\hat{B}}(t,t^{\prime})=-i\Theta(t-t^{\prime})\langle\{\hat{A}(t),\hat{B}(t^{\prime})\}\rangle \tag{3}\] where \(\{\ldots,\ldots\}\) is the anticommutator, \(t\) and \(t^{\prime}\) are time arguments and \(\Theta(t)\) is the Heaviside Theta function. Note that we are only interested in the \(T=0\) case, which means that the expectation value (\(\langle...\rangle\)) only involves the ground state \(\ket{\Psi_{0}}\). For the retarded Green's function we have \(t>t^{\prime}\) and as the Hamiltonian of interest is time-independent, we can assume \(t^{\prime}\) to be zero. As we are going to use a Chebyshev expansion, it is convenient to rewrite the retarded Green's function in terms of two new functions \(\mathcal{G}^{+}_{\hat{A},\hat{B}}(t)\) and \(\mathcal{G}^{-}_{\hat{B},\hat{A}}(t)\)[10]: \[\mathcal{G}^{r}_{\hat{A},\hat{B}}(t)=-i\Theta(t)\langle\{\hat{A}(t),\hat{B}(0)\}\rangle=-i\Theta(t)\langle\hat{A}(t)\hat{B}(0)\rangle-i\Theta(t)\langle\hat{B}(0)\hat{A}(t)\rangle=\mathcal{G}^{+}_{\hat{A},\hat{B}}(t)-\mathcal{G}^{-}_{\hat{B},\hat{A}}(t), \tag{4}\] where we used the definitions: \[\mathcal{G}^{+}_{\hat{A},\hat{B}}(t)=-i\Theta(t)\langle\hat{A}(t)\hat{B}(0)\rangle, \tag{5}\] \[\mathcal{G}^{-}_{\hat{B},\hat{A}}(t)=i\Theta(t)\langle\hat{B}(0)\hat{A}(t)\rangle. \tag{6}\] Performing a Fourier transformation we can obtain the Green's functions in the frequency domain as: \[\mathcal{G}^{+}_{\hat{A},\hat{B}}(\omega)=-\bra{\Psi_{0}}\hat{A}[\mathcal{H}-E_{0}-(\omega+i\eta)]^{-1}\hat{B}\ket{\Psi_{0}}, \tag{7}\] \[\mathcal{G}^{-}_{\hat{B},\hat{A}}(\omega)=-\bra{\Psi_{0}}\hat{B}[\mathcal{H}-E_{0}+(\omega+i\eta)]^{-1}\hat{A}\ket{\Psi_{0}}, \tag{8}\] where \(\eta>0\) is an infinitesimal parameter that ensures convergence.

### Chebyshev Expansion

Expressions like (7) and (8) can be very efficiently handled using Chebyshev polynomials [11]. They contain the function \[f^{\pm}_{z}(x)=-i\int_{0}^{\pm\infty}\mathrm{e}^{i(\pm z-x)t}\,dt=\frac{1}{\pm z-x}, \tag{9}\] with \(x,\mathrm{Re}(z)\in\mathbb{R}\) and \(\mathrm{Im}(z)>0\), that can be expanded using Chebyshev polynomials of the first kind \(T_{n}(x)\): \[f^{\pm}_{z}(x)=\sum_{n=0}^{\infty}\alpha^{\pm}_{n}(z)T_{n}(x), \tag{10}\] with the expansion coefficients: \[\alpha^{\pm}_{n}(z)=\frac{2/(1+\delta_{n,0})}{(\pm z)^{n+1}(1+\sqrt{z^{2}}\sqrt{z^{2}-1}/z^{2})^{n}\sqrt{1-1/z^{2}}}. \tag{11}\] Defining the vectors \(\ket{\Phi_{n}}=T_{n}(a(\mathcal{H}-E_{0})-b)\hat{B}\ket{\Psi_{0}}\), the following recursion relation holds: \[\ket{\Phi_{0}}=\hat{B}\ket{\Psi_{0}}, \tag{12}\] \[\ket{\Phi_{1}}=\left[a(\mathcal{H}-E_{0})-b\right]\ket{\Phi_{0}}, \tag{13}\] \[\ket{\Phi_{n+1}}=2\left[a(\mathcal{H}-E_{0})-b\right]\ket{\Phi_{n}}-\ket{\Phi_{n-1}}, \tag{14}\] where we choose the two parameters \(a,b\in\mathbb{R}\) to fit the spectrum of the operator \(a(\mathcal{H}-E_{0})-b\) into the interval \((-1,1)\), as required by the orthogonality relation of the Chebyshev polynomials.
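As a concrete illustration, the recursion (12)-(14) and the Chebyshev moments it generates translate directly into a few lines of sparse linear algebra. The sketch below is an illustration under stated assumptions rather than the implementation used here: the many-body Hamiltonian and the operators \(\hat{A}\), \(\hat{B}\) are assumed to be available as sparse matrices in the relevant particle-number sector, and the rescaling convention given in the docstring is a common choice, not one prescribed above.

```python
import numpy as np
from scipy.sparse import csr_matrix


def chebyshev_moments(H: csr_matrix, E0: float, psi0: np.ndarray,
                      A: csr_matrix, B: csr_matrix,
                      n_moments: int, a: float, b: float) -> np.ndarray:
    """Moments mu_n = <psi0| A T_n(a(H - E0) - b) B |psi0> via the recursion.

    a and b must map the spectrum of H - E0 into (-1, 1); a common (assumed)
    choice is a = (2 - eps)/(E_max - E_min) and b = a*((E_max + E_min)/2 - E0),
    where [E_min, E_max] bounds the spectrum in the sector reached by B|psi0>.
    """
    def apply_scaled(v: np.ndarray) -> np.ndarray:
        # action of a(H - E0) - b on a vector
        return a * (H @ v - E0 * v) - b * v

    bra_A = (A.conj().T @ psi0).conj()          # row vector <psi0| A

    phi_prev = B @ psi0                          # |Phi_0> = B |psi0>
    phi_curr = apply_scaled(phi_prev)            # |Phi_1>
    mu = np.empty(n_moments, dtype=complex)
    mu[0] = bra_A @ phi_prev
    mu[1] = bra_A @ phi_curr
    for n in range(2, n_moments):
        phi_next = 2.0 * apply_scaled(phi_curr) - phi_prev   # recursion of Eq. (14)
        mu[n] = bra_A @ phi_next
        phi_prev, phi_curr = phi_curr, phi_next
    return mu
```

The Green's functions \(\mathcal{G}^{\pm}\) then follow by resumming these moments with the coefficients \(\alpha^{\pm}_{n}\) of Eq. (11), as written out next.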
With this we can identify the Green's functions as: \[\mathcal{G}^{\pm}_{\hat{B},\hat{A}}(\omega)=a\sum_{n=0}^{\infty}\alpha^{\pm}_{n}(\pm a(\omega+i\eta)-b)\mu_{n}, \tag{15}\] where the \(\mu_{n}\) are often referred to as Chebyshev moments and are defined as the expectation values of the polynomials: \[\mu_{n}=\bra{\Psi_{0}}\hat{A}T_{n}(a(\mathcal{H}-E_{0})-b)\hat{B}\ket{\Psi_{0}}=\bra{\Psi_{0}}\hat{A}\ket{\Phi_{n}}. \tag{16}\] In order to calculate these moments for the Green's function of the finite cluster, we require the Hamiltonian in a many-particle basis. To construct the ground state we employ a sparse matrix diagonalization as, for example, introduced in Ref. [12]. The main idea is to explicitly encode how a specific Hamiltonian acts on basis states in the occupation number representation. For this, one needs to encode at least all the basis states in a particular number sector. Although this can be done efficiently by saving each basis state as the bitwise representation of an integer, the computational space still grows exponentially, making it only usable for very small clusters. Due to limited computational resources, our calculations did not exceed 18 sites. Having constructed the Hamiltonian, the ground state can be calculated using a Lanczos algorithm. A compact sketch of this bit-encoded construction is given below.

### Cluster Perturbation Theory

The goal of Cluster Perturbation Theory (CPT) is to approximate the Green's function of a particular lattice model in the thermodynamic limit by combining the Green's functions of small individual clusters, for example calculated as described in the previous section. Introductions to this method are presented in Refs. [12; 13]. The first step is to split the Hamiltonian into two parts: \[\mathcal{H}=\sum_{\alpha}\mathcal{H}^{\mathrm{cluster}}_{\alpha}+\mathcal{H}^{\mathrm{inter}}. \tag{17}\] In the first part, \[\mathcal{H}^{\mathrm{cluster}}_{\alpha}=(\mathcal{H}^{c}_{0}+\mathcal{H}^{c}_{U})_{\alpha}=-t\sum_{\sigma}\sum_{x,y\in\gamma^{c}_{\alpha}}\hat{c}^{\dagger}_{x,\sigma}\hat{c}_{y,\sigma}+U\sum_{x\in\gamma^{c}_{\alpha}}\left(\hat{n}_{x,\uparrow}-\frac{1}{2}\right)\left(\hat{n}_{x,\downarrow}-\frac{1}{2}\right), \tag{18}\] one has the full Hubbard model on small, individual clusters \(\gamma^{c}_{\alpha}\), each labeled by the index \(\alpha\), and in the second part, \[\mathcal{H}^{\mathrm{inter}}=-t\sum_{\sigma}\sum_{x\in\gamma^{c}_{\alpha},\,y\in\gamma^{c}_{\beta}}\hat{c}^{\dagger}_{x,\sigma}\hat{c}_{y,\sigma}, \tag{19}\] we only have the hopping elements between these individual clusters. Note that due to this splitting, it can be very useful to describe any lattice site \(\mathbf{R}_{i}\) by a combination of two new vectors: \[\mathbf{R}_{i}=\mathbf{r}_{\alpha}+\mathbf{r}_{m}, \tag{20}\] where \(\mathbf{r}_{\alpha}\) is the position of the individual clusters in a new superlattice \(\Gamma\) and \(\mathbf{r}_{m}\) describes the position of an individual site within a cluster. One can calculate the Green's function for one of these clusters and use this result for the Green's function of all other clusters due to the lattice symmetry. We will refer to this Green's function as the cluster Green's function \(\mathcal{G}^{c}(\mathbf{r}_{m},\mathbf{r}_{n},\omega)\). The main idea within CPT consists in calculating the self-energy from the cluster Green's function and using it to construct an approximation for the self-energy of the full system.
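As an aside, the bit-encoded basis construction and sparse cluster Hamiltonian described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the implementation used for the results below: the Hamiltonian follows the convention of Eq. (1), the bond list defines the cluster geometry, and SciPy's Lanczos-type routine `eigsh` stands in for the ground-state solver.

```python
import numpy as np
from itertools import combinations
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh


def basis_states(n_sites: int, n_particles: int):
    """All bit patterns with n_particles occupied orbitals out of n_sites."""
    states = []
    for occupied in combinations(range(n_sites), n_particles):
        bits = 0
        for site in occupied:
            bits |= 1 << site
        states.append(bits)
    return states


def hop_sign(bits: int, i: int, j: int) -> int:
    """Fermionic sign of c^dag_i c_j: (-1)^(occupied sites strictly between i and j)."""
    lo, hi = (i, j) if i < j else (j, i)
    mask = ((1 << hi) - 1) ^ ((1 << (lo + 1)) - 1)
    return -1 if bin(bits & mask).count("1") % 2 else 1


def hubbard_hamiltonian(n_sites, n_up, n_dn, bonds, t, U, mu=0.0):
    """Sparse Hubbard Hamiltonian (convention of Eq. 1) in a fixed (n_up, n_dn) sector."""
    states = [(u, d) for u in basis_states(n_sites, n_up)
                     for d in basis_states(n_sites, n_dn)]
    index = {s: n for n, s in enumerate(states)}
    H = lil_matrix((len(states), len(states)))
    for n, (up, dn) in enumerate(states):
        n_tot = bin(up).count("1") + bin(dn).count("1")
        H[n, n] += U * bin(up & dn).count("1") - (mu + U / 2) * n_tot
        for i, j in bonds:                        # hopping terms, both spin species
            for bits, other, is_up in ((up, dn, True), (dn, up, False)):
                for a, b in ((i, j), (j, i)):     # c^dag_a c_b and its conjugate
                    if (bits >> b) & 1 and not (bits >> a) & 1:
                        new = bits ^ (1 << b) ^ (1 << a)
                        key = (new, other) if is_up else (other, new)
                        H[index[key], n] += -t * hop_sign(bits, a, b)
    return H.tocsr(), states


# Example: 6-site open chain at half filling, U = 4t
H, states = hubbard_hamiltonian(6, 3, 3, bonds=[(i, i + 1) for i in range(5)],
                                t=1.0, U=4.0)
energies, vectors = eigsh(H, k=1, which="SA")     # Lanczos-type ground-state solver
E0, psi0 = energies[0], vectors[:, 0]
```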
We can obtain the cluster self-energy \(\Sigma^{c}(\mathbf{r}_{m},\mathbf{r}_{n},\omega)\) from a Dyson equation: \[\Sigma^{c}(\mathbf{r}_{m},\mathbf{r}_{n},\omega) =(\mathcal{G}^{c}_{0}(\mathbf{r}_{m},\mathbf{r}_{n},\omega))^{-1}\] \[-(\mathcal{G}^{c}(\mathbf{r}_{m},\mathbf{r}_{n},\omega))^{-1}, \tag{21}\] where \(\mathcal{G}^{c}_{0}(\mathbf{r}_{m},\mathbf{r}_{n},\omega)\) is the non-interacting Green's function on the cluster defined as: \[(\mathcal{G}^{c}_{0}(\mathbf{r}_{m},\mathbf{r}_{n},\omega))^{-1}=\omega+i\eta -\mathcal{H}^{c}_{0}(r_{m},r_{n}). \tag{22}\] Therefore we obtain for a particular entry of the system self-energy \(\Sigma^{s}(\mathbf{R}_{i},\mathbf{R}_{j},\omega)\) in real space, connecting two sites on the same cluster: \[\Sigma^{s}(\mathbf{R}_{i},\mathbf{R}_{j},\omega) =\Sigma^{s}(\mathbf{r}_{\alpha}+\mathbf{r}_{m},\mathbf{r}_{\alpha }+\mathbf{r}_{n},\omega)\] \[=\Sigma^{c}(\mathbf{r}_{m},\mathbf{r}_{n},\omega), \tag{23}\] and all entries of the self-energy connecting sites on different clusters are set to zero. Finally, we can use this approximation of the self energy, namely using the cluster self energy for the self energy of the full system, in a Dyson equation as before, to obtain the Green's function of the full system: \[(\mathcal{G}^{s}_{0}(\omega))^{-1}=\omega+i\eta-\mathcal{H}^{\rm inter}-\sum_ {\alpha}\mathcal{H}^{c}_{0,\alpha}. \tag{24}\] Note that in this way, we treat the non-interacting part exactly. This is why one should view the CPT approximation as a perturbation theory in \(U\) rather than a perturbation in the inter-cluster hopping. Finally we end up with the following expression for the Green's function of the full system: \[\mathcal{G}^{s}(\mathbf{R}_{i},\mathbf{R}_{j},\omega) =((\mathcal{G}^{s}_{0}(\mathbf{R}_{i},\mathbf{R}_{j},\omega))^{-1}\] \[-\Sigma^{s}(\mathbf{R}_{i},\mathbf{R}_{j},\omega))^{-1} \tag{25}\] ### Periodization While the just described procedure works for finite systems, it is important to note that there are also so called periodization schemes, which allow to extend these results to infinite systems. Here we are going to use the so called G-scheme, as discussed in Ref. [14]. The main idea is based on arranging clusters in an infinite superlattice and exploiting its translational symmetry. As pointed out before, one can split any lattice vector into one vector defined on the superlattice and one on a cluster. Therefore, one can similarly split any wave vector \(\mathbf{k}\) of the 1st BZ into a combination of a wave vector in a reduced BZ \(\tilde{\mathbf{k}}\) associated with the superlattice and one of the Brillouin zone of a single cluster \(\mathbf{K}\). This also allows one to split the Fourier transformations into two parts, one for the cluster and one for the superlattice. Using Bloch's theorem for the superlattice, one ends up with the following form for a periodized Green's function: \[\mathcal{G}(\mathbf{k},\omega)=\frac{1}{L}\sum_{a,b}e^{-i\mathbf{k}(\mathbf{r }_{a}-\mathbf{r}_{b})}\mathcal{G}_{a,b}(\tilde{\mathbf{k}},\omega). \tag{26}\] ## III Results We can now use the just presented methods to calculate the spectral function for our main system of interest, the 2D Hubbard model on a square lattice at half filling (Fig. 1, 2, 3). As we can clearly see when considering the spectral function at the \(X\) point and in the midpoint between \(\Gamma\) and \(M\), CPT here suggests a considerable spectral weight in the gap at U=4t. 
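For reference, the CPT construction and periodization described above (Eqs. 21-26) reduce to a few lines of dense linear algebra once the cluster Green's function is available. The sketch below is an illustration under stated assumptions, not the code used for the figures: all quantities are matrices in the basis of the L cluster sites at a single frequency, and the inter-cluster hopping is assumed to be already Fourier-transformed to the reduced Brillouin zone.

```python
import numpy as np


def cpt_spectral_function(omega, eta, G_cluster, H0_cluster, V_ktilde, sites, k):
    """A(k, omega) from CPT (Eqs. 21-26) at a single (k, omega) point.

    G_cluster  : interacting cluster Green's function, shape (L, L)
    H0_cluster : non-interacting single-particle cluster Hamiltonian, shape (L, L)
    V_ktilde   : inter-cluster hopping at the reduced wave vector k_tilde, shape (L, L)
    sites      : positions r_m of the L sites within the cluster
    k          : full wave vector used in the periodization phase factors
    """
    L = G_cluster.shape[0]
    z = (omega + 1j * eta) * np.eye(L)
    G0_cluster_inv = z - H0_cluster                       # Eq. (22)
    sigma = G0_cluster_inv - np.linalg.inv(G_cluster)     # Eq. (21)
    G0_super_inv = z - H0_cluster - V_ktilde              # Eq. (24) at one k_tilde
    G_super = np.linalg.inv(G0_super_inv - sigma)         # Eq. (25)
    # Periodization, Eq. (26)
    G_k = 0.0 + 0.0j
    for a, ra in enumerate(sites):
        for b, rb in enumerate(sites):
            phase = np.exp(-1j * np.dot(k, np.asarray(ra) - np.asarray(rb)))
            G_k += phase * G_super[a, b]
    G_k /= L
    return -G_k.imag / np.pi                               # Eq. (2)
```

Sweeping \(\omega\) and \(\mathbf{k}\) with such a routine, with the cluster Green's function supplied for instance by the Chebyshev machinery above, yields spectral-function maps of the kind analysed in the following.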
However, we are going to show that there are two parameters that pose large additional constraints on the resolution that can actually be achieved using CPT. The first parameter is the convergence aiding factor \(\eta\) and the second is the finite cluster Figure 1: First Brillouin zone (1. BZ) of the 2D square lattice with symmetry points and the k-path for the bandstructure plots. size. While the artefacts induced by the convergence aiding factor are related to our approximation of using only a finite number of Chebyshev moments, the constraints imposed by the finite cluster size are an inherent limitation of CPT. These additional constraints are typically not discussed in detail in the literature, but as we are going to show they actually prohibit us from making accurate judgements about the pseudogap at intermediate interaction strengths. To see this, we will concentrate on the 1D Hubbard model since on the one hand it allows us to compare to exact results from Bethe ansatz and on the other hand it gives a higher resolution in k-space. ### Convergence Aiding Factor The convergence aiding factor \(\eta\) enters the Green's function in Eqs. (7) and (8). We calculate these Green's functions with the help of the Chebyshev expansion (15) that, in practise, is evaluated only with a finite number of Chebyshev moments. This leads to artefacts in the spectral function in the form of Gibbs oscillations. This is illustrated for the spectral function of the non-interaction 1D chain evaluated for a cluster with 16 sites, see Fig. 4. In Fig. 5 we show the spectral function for a specific \(\mathbf{k}\)-value as a function of frequency that clearly displays oscillations. Note that a higher Chebyshev order only increases the frequency of these oscillations (see Fig. 6) but does not change their magnitude. These oscillations can be identified as Gibbs oscillations that usually arise when approximating a sharp step function (in this case the \(\delta\)-peak) by a finite Fourier expansion series. In order to suppress these artificial oscillations we will choose a sufficiently large value for the broadening parameter \(\eta\), such that the Chebyshev expansion is capable of resolving the peak without Gibbs oscillations. In addition, the finite cluster is naturally characterized by a finite level spacing. In order to mimic an infinite system and to obtain smooth bands, a finite broadening parameter \(\eta\) has to be chosen such that it smears out the effect of the finite level spacing [10]. While this means that altogether one has to choose \(\eta\) rather large (Fig. 7), one should also realize that one can counteract the effects of this broadening to a large extend by including the same large parameter for \(\eta\) in the non interacting Green's function when calculating the self energy. Effectively this corresponds to subtracting \(\eta\) from the inverse of the Green's function, \[\mathcal{G}^{c}(\omega)=\left[(\mathcal{G}^{c}(\omega,\eta_{C}))^{-1}-i\eta_ {C}\right]^{-1} \tag{27}\] Figure 3: Same spectral function as in Fig. 2, plotted only at the \(X\) point (\(k=[\pi,0,0]\)) and the midpoint between \(\Gamma\) and \(M\) point (\(k=[\pi/2,\pi/2,0]\)). We can still see a significant spectral weight at \(\omega=0\) indicating a pseudogap. Figure 2: Spectral function of the Hubbard model with U=4t on a 2D square lattice, plotted along the high symmetry axis of the 1. BZ. The broadening parameter was chosen as \(\eta=0.5\). 
The cluster calculations were performed on a 4x4 cluster with 120 Chebyshev moments. Figure 4: Spectral function obtained from a cluster Green’s function of a 16 site tight binding chain with a broadening parameter of \(10^{-7}\) leading to negative values in the spectral function. While this procedure works extremely well, as shown in Fig. 8, it still depends on the exact choice for \(\eta\), and it is _a priori_ unclear which value to choose for \(\eta\). Here we want to propose two different approaches. The first approach uses the typical single particle level spacing of the non interacting cluster for the broadening parameter \(\eta\), that can be estimated as \[\eta=\frac{4t}{M_{C}} \tag{28}\] where \(4t\) is the bandwidth with \(t\) the hopping amplitude and \(M_{C}\) the cluster size. This smears the discrete cluster levels (Fig. 9), and leads to a good approximation of the continuous cosine band structure one expects for a 1D system (Fig. 10). Applying this choice for the CPT approximation to the 1D Hubbard model, one finds a finite spectral weight within the Hubbard gap illustrating the numerical artefact that is induced by a large \(\eta\), see Figs. 11 and 12. As a second approach we propose an extrapolation scheme that calculates the CPT Green's function for multiple values of \(\eta\) and performs an extrapolation of the results to \(\eta=0\) (see Figs. 13 and 14). Although this procedure is physically sound and results in sharp peaks, there is no guarantee that this procedure will lead to the correct thermodynamic limit. In addition, obtaining a sharper peak does not automatically provide a more accurate result. Only in cases where the actual width of the peak is resolved by the Chebyshev expansion, i.e. wider than the many particle bandwidth divided by the number of Chebyshev moments, the extrapolated peaks could be considered reliable. Remarkably, despite the peak height and width being \(\eta\) dependent, the peak position appears Figure 5: Spectral function of a 16 site tight binding chain at \(k=3\pi/4\) with \(n_{ch}=60\) chebyshev moments and a broadening parameter of \(10^{-7}\). We can see oscillations around the peak leading to negative values in the spectral function. Figure 8: Removing the artificial broadening by means of eq. (27) results in a sharp \(\delta\)-like peak in the spectrum. Figure 6: Same spectral function as in Fig. 4 but calculated using \(n_{ch}=120\) Chebyshev moments. Note that the amplitude of the oscillations does not decrease, but their frequency increases in accordance with Gibbs phenomenon. Figure 7: Again the same spectral function as in Fig. 4 calculated using \(n_{ch}=120\) Chebyshev moments but now with a broadening of \(\eta=0.25\). The oscillations disappear, because the peak is artificially broadened. to be rather stable against a variation of \(\eta\). ### Accuracy of CPT results It is important to note that the finite size level spacing of the cluster used within CPT acts as a cutoff that limits any spectral resolution. Spectral features like gaps can only be resolved reliably if they are larger than this cutoff. As far as interaction effects are concerned, CPT only accurately reflects the same information that a careful analysis of the cluster result would also provide. To extract such an information from the cluster result directly, let us consider the spectral function of a 16 site 1D Hubbard chain without interaction as was shown in Fig. 9 and with an interaction of U=4t (Fig. 15). 
As one would expect for the non interacting case we can see 16 individual levels forming a cosine shaped band, while for the interacting case the levels in the middle of the spectrum move apart. Hence, the actual band gap is given by the shift of the energy levels rather than their frequency difference directly. Table 1 shows the frequency difference of a particular interaction strength subtracted by the difference in the non interacting case, while the table below shows the exact results one would get in the thermodynamic limit using Bethe ansatz (Tab. 2). As we can see, only for \(U=4t,8t\) the gap size agrees up to the second decimal place with the exact result. However, this means that for \(U=1t,2t\) and using CPT with the currently computationally accessible cluster sizes we can not judge if there is an actual gap in the system, as the deviation is on the same order of magnitude as the actual gap. Additionally we can see that the gap size does not Figure 11: CPT result for a 1D Hubbard model at U = 2t based on a 16 site cluster with broadening \(\eta=0.25\), based on the average single particle level splitting. We obtain two continuous bandstructures with the expected Hubbard gap. Figure 12: CPT result for the same system as in Fig. 11 shown only for \({\bf k}=\pi/2\) but with different broadening parameters. We can see, that the residual spectral weight in the gap strongly depends on the broadening parameter. Figure 10: Spectral function for the same system as in Fig. 9 but with a broadening of \(\eta=0.25\). We can see that the peaks start to overlap and to resemble a continuous cosine bandstructure, as expected for an infinite tight binding chain. Figure 9: Spectral function of a 16 site Hubbard model at U=0 (tight binding chain) with a broadening of \(\eta=0.15\). We can see individual levels of the cluster. change significantly with the cluster size. Hence we have to assume that the considered cluster sizes are simply to small to resolve the gap accurately. Now in order to arrive at the same result using the full CPT calculation, we can simply plot the spectral function in the middle of the spectrum. If we do this for multiple cluster sizes (Fig. 18), we can see, that the peaks slightly move to the center with increasing cluster size. Hence we can extrapolate the peak positions for the gap of an infinite cluster as shown in Fig. 19. This results in gaps comparable to results obtained directly from the cluster (Tab. 1). As in the case where we just considered the cluster we see that the error is too large to judge the gap accurately at \(U=1t,2t\). After this discussion, we now return to the 2D case which sparked this investigation in the first place. We come to the conclusion that CPT is not able to resolve reliably the spectral weight within the gap and, in particular, to predict the presence or absence of a pseudogap despite the resolution that the plot in Fig. 2 suggests. This is, firstly, due to the influence of the finite broadening parameter used to dampen the Gibbs oscillation induced by the Chebyshev approximation, and, secondly, due to the finite level spacing associated with the chosen 2D cluster. ### Use Cases for CPT One might now pose the question as to why one should employ CPT in the first place, since all of the reliable information is already contained in the cluster results. 
To this end, one should realize that just judging from the cluster results it can be very hard to identify how the Figure 16: The average of the spectral function at the k points \(k_{1/2}=\frac{M}{2}\pm 1\) for the M=16 site calculations as shown in Fig. 9. Figure 14: Same plot as in Fig. 12 but now for the CPT data obtained with the extrapolation scheme for the broadening parameter. We can see that the spectral function in the gap almost disappears. Figure 13: Again CPT results for the same system as in Fig. 11 but using an extrapolation scheme for the broadening parameter. We again obtain two continuous bands separated by the Hubbard gap, however the bands are narrower. Figure 15: Spectral function of a 16 site Hubbard chain at U=4t and a broadening parameter of \(\eta=0.15\). Comparing to the non interacting result in Fig. 9 we can see shifts in the positions of the single particle levels. \begin{table} \begin{tabular}{|c||c|c|c|c|} \hline & U=1t & U=2t & U=4t & U=8t \\ \hline \(\Delta_{\text{Bethe}}/2\) & 0.003 & 0.086 & 0.643 & 2.340 \\ \hline \(\Delta_{CPT}/2\) & 0.028 & 0.143 & 0.683 & 2.344 \\ \hline \((\Delta_{CPT}/2)_{Error}\) & 0.025 & 0.057 & 0.040 & 0.004 \\ \hline \end{tabular} \end{table} Table 2: In row one, we show half the band gap \(\Delta\) calculated using Bethe ansatz. In row two, we give half the gap size obtained via CPT and in row three the extrapolation of the deviation. We can see that the results deviate on the order of magnitude of \(10^{-1}\), as they already did, when we were just considering the cluster result. This was expected, since the accuracy of CPT is determined by the finite cluster. Figure 19: The \(\omega\) values of the peaks marked in Fig. 18 are plotted against the inverse of the corresponding cluster size. Doing so allows for a finite size extrapolation to zero, corresponding to an infinite cluster size. The arrangement of the data suggests a linear extrapolation, which we performed and the result at zero is shown in the title. It corresponds to half the bandgap (\(\Delta_{CPT}/2\)). The second value was obtained, by doing the same extrapolation for the deviation from the exact Bethe result (\(\Delta_{CPT}/2-\Delta_{Bethe}/2\)). \begin{table} \begin{tabular}{|c|c||c|c|c|} \hline & M & U=1t & U=2t & U=4t & U=8t \\ \hline 6 & \((\Delta(U)-\Delta(0))/2\) & 0.03 & 0.16 & 0.66 & 2.24 \\ \hline 8 & \((\Delta(U)-\Delta(0))/2\) & 0.03 & 0.15 & 0.64 & 2.24 \\ \hline 10 & \((\Delta(U)-\Delta(0))/2\) & 0.02 & 0.14 & 0.63 & 2.24 \\ \hline 12 & \((\Delta(U)-\Delta(0))/2\) & 0.02 & 0.13 & 0.62 & 2.25 \\ \hline 14 & \((\Delta(U)-\Delta(0))/2\) & 0.02 & 0.13 & 0.62 & 2.26 \\ \hline 16 & \((\Delta(U)-\Delta(0))/2\) & 0.03 & 0.13 & 0.63 & 2.30 \\ \hline \end{tabular} \end{table} Table 1: Half of the band gap \(\Delta\), calculated using the peak positions from the interacting case and correcting them using the peak positions of the non interacting one. Figure 18: Shown is the influence of the cluster size on the band gap for \(U/t=4\). We show the spectral function at \(k=\pi/2\). The peaks defining the gap get broader and move closer together the larger the cluster. Note that the minimum spectral weight within the gap stays almost constant with system size and is almost zero due to our choice of \(\eta=0.15\). The peaks which are marked are used for a finite size extrapolation in Fig. 19. ## IV Conclusion In this study we have presented fundamental limitations of the CPT method. 
Most importantly, we showed that the cluster Green's function already contains, as far as interaction effects are concerned, all the information that is included in the CPT Green's function, and CPT only stays consistent with this information. Analysing the 1D case, we have shown that current computational limitations prohibit us from making accurate judgements of features like the Hubbard gap at intermediate interaction strengths, as this would require calculations on larger clusters, since the resolution is limited by the finite-size-induced level splitting. Additionally and specifically, for the approach of calculating the cluster Green's function via a Chebyshev expansion, we have shown that Gibbs oscillations require the choice of a large broadening parameter, which in turn prohibits one from making accurate judgements about the width of the peaks in the single-particle spectrum. Hence we conclude that use cases for CPT are limited to situations where one is interested in obtaining numerically cheap initial guesses for the spectral function of materials with short-range correlations, while more advanced methods need to be employed to gain higher resolution.

**Data availability** The data presented in this publication is available on Zenodo under the DOI: 10.5281/zenodo.8063247.

**Funding** This project was supported by BMBF via the MANIQU grant no. 13N15576.
2309.12444
Foundation Metrics for Evaluating Effectiveness of Healthcare Conversations Powered by Generative AI
Generative Artificial Intelligence is set to revolutionize healthcare delivery by transforming traditional patient care into a more personalized, efficient, and proactive process. Chatbots, serving as interactive conversational models, will probably drive this patient-centered transformation in healthcare. Through the provision of various services, including diagnosis, personalized lifestyle recommendations, and mental health support, the objective is to substantially augment patient health outcomes, all the while mitigating the workload burden on healthcare providers. The life-critical nature of healthcare applications necessitates establishing a unified and comprehensive set of evaluation metrics for conversational models. Existing evaluation metrics proposed for various generic large language models (LLMs) demonstrate a lack of comprehension regarding medical and health concepts and their significance in promoting patients' well-being. Moreover, these metrics neglect pivotal user-centered aspects, including trust-building, ethics, personalization, empathy, user comprehension, and emotional support. The purpose of this paper is to explore state-of-the-art LLM-based evaluation metrics that are specifically applicable to the assessment of interactive conversational models in healthcare. Subsequently, we present an comprehensive set of evaluation metrics designed to thoroughly assess the performance of healthcare chatbots from an end-user perspective. These metrics encompass an evaluation of language processing abilities, impact on real-world clinical tasks, and effectiveness in user-interactive conversations. Finally, we engage in a discussion concerning the challenges associated with defining and implementing these metrics, with particular emphasis on confounding factors such as the target audience, evaluation methods, and prompt techniques involved in the evaluation process.
Mahyar Abbasian, Elahe Khatibi, Iman Azimi, David Oniani, Zahra Shakeri Hossein Abad, Alexander Thieme, Ram Sriram, Zhongqi Yang, Yanshan Wang, Bryant Lin, Olivier Gevaert, Li-Jia Li, Ramesh Jain, Amir M. Rahmani
2023-09-21T19:36:48Z
http://arxiv.org/abs/2309.12444v3
# Foundation Metrics: Quantifying Effectiveness of Healthcare Conversations powered by Generative AI ###### Abstract Generative Artificial Intelligence is set to revolutionize healthcare delivery by transforming traditional patient care into a more personalized, efficient, and proactive process. Chatbots, serving as interactive conversational models, will probably drive this patient-centered transformation in healthcare. Through the provision of various services, including diagnosis, personalized lifestyle recommendations, dynamic scheduling of follow-ups, and mental health support, the objective is to substantially augment patient health outcomes, all the while mitigating the workload burden on healthcare providers. The life-critical nature of healthcare applications necessitates establishing a unified and comprehensive set of evaluation metrics for conversational models. Existing evaluation metrics proposed for various generic large language models (LLMs) demonstrate a lack of comprehension regarding medical and health concepts and their significance in promoting patients' well-being. Moreover, these metrics neglect pivotal user-centered aspects, including trust-building, ethics, personalization, empathy, user comprehension, and emotional support. The purpose of this paper is to explore state-of-the-art LLM-based evaluation metrics that are specifically applicable to the assessment of interactive conversational models in healthcare. Subsequently, we present an comprehensive set of evaluation metrics designed to thoroughly assess the performance of healthcare chatbots from an end-user perspective. These metrics encompass an evaluation of language processing abilities, impact on real-world clinical tasks, and effectiveness in user-interactive conversations. Finally, we engage in a discussion concerning the challenges associated with defining and implementing these metrics, with particular emphasis on confounding factors such as the target audience, evaluation methods, and prompt techniques involved in the evaluation process. ## 1 Introduction The rapid proliferation of Generative Artificial Intelligence (AI) is fundamentally reshaping our interactions with technology. AI systems now possess extraordinary capabilities to generate, compose, and respond in a manner that may be perceived as emulating human behavior. Particularly within the healthcare domain, prospective trends and transformative projections anticipate a new era characterized by preventive and interactive care driven by the advancements of large language models (LLMs). Interactive conversational models, commonly known as chatbots, hold considerable potential to assist individuals, including patients and healthcare providers, in a wide array of tasks such as symptom assessment, primary medical and health education, mental health support, lifestyle coaching, appointment scheduling, medication reminders, patient triaging, and allocating health resources. Due to the life-critical nature of healthcare applications, using conversational models necessitates establishing a unified and comprehensive set of foundation metrics1 that enable a meticulous evaluation of the models' performance, capabilities, identification of potential errors, and implementation of effective feedback mechanisms. These metrics can lead to significant advances in the delivery of robust, accurate, and reliable healthcare services. 
However, the existing evaluation metrics introduced and employed for assessing healthcare chatbots[2, 3, 4] exhibit two significant gaps that warrant careful attention. Footnote 1: [https://github.com/hugging-and-play/](https://github.com/hugging-and-play/) Firstly, it is observed that numerous existing generic metrics[5, 6, 7] suffer from a lack of unified and standard definition and consensus regarding their appropriateness for evaluating healthcare chatbots. Currently, state-of-the-art conversational models are predominantly assessed and compared based on language-specific perspectives[8] and surface-form similarity[8] using intrinsic metrics such as Bilingual Evaluation Understudy (BLEU)[9] and Recall-oriented Understudy for Gisting Evaluation (ROUGE).[5] Although these metrics are model-based, they lack an understanding of medical concepts (e.g., symptoms, diagnostic tests, diagnoses, and treatments), their interplay, and the priority for the well-being of the patient, all of which are crucial for medical decision-making.[10] For this reason, they inadequately capture vital aspects like semantic nuances, contextual relevance, long-range dependencies, changes in critical semantic ordering, and human-centric perspectives,[11] thereby limiting their effectiveness in evaluating healthcare chatbots. Moreover, specific extrinsic context-aware evaluation methods have been introduced to incorporate human judgment in chatbots assessment.[12, 13, 14, 9, 15, 16] However, these methods have merely concentrated on specific aspects, such as robustness of the generated answers within a particular medical domain. Secondly, it is evident that the existing evaluation metrics overlook a wide range of crucial _user-centered_ aspects that indicates the extent to which a chatbot establishes a connection and conveys support and emotion to the patient. Emotional bonds play a vital role in physician-patient communications, but they are often ignored during the development and evaluation of chatbots. Healthcare chatbot assessment should consider the level of attentiveness, thoughtfulness, emotional understanding, trust-building, behavioral responsiveness, user comprehension, and the level of satisfaction or dissatisfaction experienced. There is a pressing need to evaluate the _ethical implications_ of chatbots, including factors such as fairness and biases stemming from overfitting.[17] Furthermore, the current methods fail to address the issue of _hallucination_, wherein chatbots generate misleading or inaccurate information. In particular, in the healthcare domain, where safety and currentness of information are paramount, hallucinations pose a significant concern. The evaluation of healthcare chatbots should encompass not only their ability to provide personalized responses to individual users but also their ability to offer _accurate_ and _reliable_ information that applies to a broader user base. Striking the right balance between personalization and generalization is crucial to ensure practical and trustworthy healthcare guidance. Additionally, metrics are required to assess the chatbot's ability to deliver _empathetic_ and _supportive_ responses during healthcare interactions, reflecting its capacity to provide compassionate care. Moreover, existing evaluations overlook performance aspects of models, such as computational efficiency and model size, which are crucial for practical implementation. 
In this article, we begin by delving into an examination of the current state-of-the-art evaluation metrics applicable to assessing healthcare chatbots. Subsequently, we present an exhaustive collection of user-centered evaluation metrics and their taxonomy to provide a thorough and well-rounded comprehension of a healthcare chatbot's performance across diverse dimensions. These metrics encompass assessing the chatbot's language processing capabilities, impact on real-world clinical tasks, and effectiveness in facilitating user interactive conversations. Furthermore, we discuss the challenges associated with defining and implementing these metrics, emphasizing factors such as the target audience, evaluation methods, and prompt techniques integral to this process. In fact, we bring about the requirement for a comprehensive evaluation framework. ## 2 Review of Existing Evaluation Metrics for LLMs The evaluation of language models can be categorized into intrinsic and extrinsic methods,[18] which can be executed automatically or manually. In the following, we briefly outline these evaluation methods. ### Intrinsic Evaluation Metrics Intrinsic evaluation metrics measure the proficiency of a language model in generating coherent and meaningful sentences relying on language rules and patterns.[18] We categorize the intrinsic metrics into _general automatic_ and _dialogue-based_ metrics. An overview of the intrinsic metrics is shown in Figure 1(a). Additionally, Table 1 outlines a brief overview of existing intrinsic metrics employed for LLMs evaluation in the literature. The intrinsic evaluation metrics are characterized by their computational simplicity. They offer valuable quantitative measures to evaluate LLMs. However, they solely rely on surface-form similarity and language-specific perspectives, rendering them inadequate for healthcare chatbots. These metrics lack the capability to capture essential elements such as semantics,[29, 46] context,[47, 46] distant dependencies,[48, 49] semantically-critical ordering change,[47] and human perspectives, particularly in real-world scenarios. To illustrate the limitations of intrinsic metrics in healthcare contexts, consider the evaluation of the following two sentences using BLEU and ROUGE metrics with HuggingFace:[50] 1) _"Regular exercise and a balanced diet are important for maintaining good cardiovascular health."_ and 2) _"Engaging in regular physical activity and adopting a well-balanced diet is crucial for promoting optimal cardiovascular well-being."_ Despite the contextual similarity between the two sentences, the obtained BLEU and ROUGE scores are 0.39 and 0.13, respectively, on a scale of 0 to 1, reflecting low alignment. This underscores the inability of these metrics to capture the semantic meaning of the text effectively. Therefore, if we solely use these metrics to evaluate a healthcare chatbot, an inaccurate answer may receive a high score comparing with the reference answer. 
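This comparison is easy to reproduce in a few lines; the sketch below assumes the HuggingFace `evaluate` package with its `bleu` and `rouge` modules installed. Which sentence serves as the reference, the tokenizer, and the ROUGE variant are not specified above, so the resulting numbers need not match the quoted 0.39 and 0.13 exactly.

```python
# pip install evaluate nltk rouge_score
import evaluate

reference = ("Regular exercise and a balanced diet are important for maintaining "
             "good cardiovascular health.")
candidate = ("Engaging in regular physical activity and adopting a well-balanced "
             "diet is crucial for promoting optimal cardiovascular well-being.")

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")

bleu_result = bleu.compute(predictions=[candidate], references=[[reference]])
rouge_result = rouge.compute(predictions=[candidate], references=[reference])

print(f"BLEU   : {bleu_result['bleu']:.2f}")       # surface n-gram overlap
print(f"ROUGE-L: {rouge_result['rougeL']:.2f}")    # longest-common-subsequence overlap
```

Despite the two sentences conveying essentially the same advice, both scores remain low, illustrating the surface-form nature of these metrics.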
### Extrinsic Evaluation Metrics Extrinsic evaluation metrics present means of measuring the performance of language models by incorporating user perspectives and real-world contexts.[18] These metrics can gauge how the model impacts end users and assess the extent to which LLMs meet human users' expectations and requirements.[8] Extrinsic metrics, gathered through subjective means, entail human participation and judgments within the evaluation process.[14, 15, 16] We classify the existing extrinsic metrics in the literature into two categories: general-purpose and health-specific metrics. Figure 1(b) provides an overview of the extrinsic metrics. General-purpose human evaluation metrics have been introduced to assess the performance of LLMs across various domains.[5] These metrics serve to measure the quality, fluency, relevance, and overall effectiveness of language models, encompassing a wide spectrum of real-world topics, tasks, contexts, and user requirements.[5] On the other hand, health-specific evaluation metrics have been specifically crafted to explore the processing and generation of health-related information by healthcare-oriented LLMs and chatbots, with a focus on aspects such as accuracy, effectiveness, and relevance. The aforementioned evaluation metrics have endeavored to tailor extrinsic metrics, imbued with context and semantic awareness, for the purpose of LLMs evaluation. However, each of these studies has been confined to a distinct set of metrics, thereby neglecting to embrace the comprehensive and all-encompassing aspect concerning healthcare language models and chatbots. ### _Multi-metric Measurements_ A restricted body of literature has introduced and examined a collection of domain-agnostic evaluation metrics, which amalgamate intrinsic and extrinsic measurements for LLMs in the healthcare domain. Notably, Laing et al.[5] have presented a multi-metric approach, as part of the HELM benchmark, to scrutinize LLMs concerning their accuracy, calibration (proficiency in assigning meaningful probabilities for generated text), robustness, fairness, bias, toxicity, and efficiency. Likewise, Wang et al.[6] have assessed the trustworthiness of GPT-3.5 and GPT-4 from eight discerning aspects encompassing toxicity, bias, robustness, privacy, machine ethics, and fairness. Additionally, Chang et al.[8] have presented organized evaluation methodologies for LLMs through three essential dimensions: "what to evaluate," "where to evaluate," and "how to evaluate." Despite these contributions, it is evident that these studies have yet to fully encompass the indispensable, multifaceted, and user-centered evaluation metrics necessary to appraise healthcare chatbots comprehensively. For example, these studies unable to assess chatbots in terms of empathy, reasoning, up-to-dateness, hallucinations, personalization, relevance, and latency. ## 3 Essential Metrics for Evaluating Healthcare Chatbots In this section, we present a comprehensive set of metrics essential for conducting a user-centered evaluation of LLM-based healthcare chatbots. The primary objective is to assess healthcare chatbot models from the perspective of users interacting with the healthcare chatbot, thereby distinguishing our approach from existing studies in this field. To visualize the evaluation process of healthcare chatbot models, we provide an overview in Figure 2. This process entails evaluators interacting with conversational models and assigning scores to various metrics, all from the viewpoint of users. 
These scores are subsequently utilized to compare and rank different healthcare chatbots, ultimately leading to the creation of a leaderboard. In this evaluation process, three confounding variables are taken into account: user type, domain type, and task type. The following outlines these three essential confounding variables (a minimal configuration sketch capturing them is given below).

1. **User type:** The end-users engaging with the conversational model may include patients, nurses, primary care providers, or specialist providers, among others. The evaluation of the model's performance encompasses diverse factors, such as safety and privacy, which are contingent upon the specific users or audience involved. For instance, when interacting with a patient, the chatbot may offer less advanced recommendations to mitigate potential harm or risks to the patient or others. Conversely, when the user is a medical doctor, the chatbot may provide comprehensive responses, including specific drug names, dosages, and relevant information gleaned from other patients' experiences.
2. **Domain type:** Chatbots can serve two distinct purposes: they can be designed for general healthcare queries, providing answers across a broad spectrum of topics, or they can be tailored and trained for specific domains like mental health or cancer. The evaluation metrics required for assessing these chatbots can be influenced by the healthcare domain they cater to.
3. **Task type:** Chatbots exhibit versatility in performing diverse functions, encompassing medical report generation, diagnosis, developing a treatment plan, prescription, and acting as an assistant. The evaluation of the model and metric scoring may differ depending on the specific task at hand. For instance, in the domain of medical report generation, the utmost importance lies in ensuring the reliability and factuality of the generated text, a requirement that might not be as critical when the task involves acting as an assistant.

As outlined below, the metrics are categorized into four distinct groups: accuracy, trustworthiness, empathy, and performance, based on their dependencies on the confounding variables. For a visual representation, please refer to Figure 3. Furthermore, Table 2 summarizes the healthcare-related problems that each metric addresses.

### Accuracy

Accuracy metrics encompass both automatic and human-based assessments that evaluate the grammar, syntax, semantics, and overall structure of responses generated by healthcare chatbots. The definition of these accuracy metrics is contingent upon the domain and task types involved.[5; 113] To elucidate, let us consider two examples. For a chatbot serving as a mental health assistant, an accuracy metric like "robustness" would gauge the model's resilience in answering mental health topics and effectively engaging in supportive dialogues. Conversely, for a generic healthcare chatbot designed for diagnosis, the "robustness" metric should evaluate the model's ability to handle mental health assistance queries as well as other diverse domains. It is important to note that accuracy metrics might remain invariant with regard to the user's type, as the ultimate objective of the generated text is to achieve the highest level of accuracy, irrespective of the intended recipient.
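The sketch below illustrates one way the three confounding variables introduced above could be made explicit in an evaluation configuration. The enumerated values and field names are purely illustrative assumptions, not part of any existing framework.

```python
# Illustrative sketch of an evaluation configuration capturing the three
# confounding variables; the enumerated values and field names are hypothetical.
from dataclasses import dataclass
from enum import Enum

class UserType(Enum):
    PATIENT = "patient"
    NURSE = "nurse"
    PRIMARY_CARE_PROVIDER = "primary_care_provider"
    SPECIALIST_PROVIDER = "specialist_provider"

class DomainType(Enum):
    GENERAL_HEALTHCARE = "general_healthcare"
    MENTAL_HEALTH = "mental_health"
    CANCER = "cancer"

class TaskType(Enum):
    REPORT_GENERATION = "medical_report_generation"
    DIAGNOSIS = "diagnosis"
    TREATMENT_PLAN = "treatment_plan"
    PRESCRIPTION = "prescription"
    ASSISTANT = "assistant"

@dataclass
class EvaluationConfig:
    user_type: UserType
    domain_type: DomainType
    task_type: TaskType

# Example: evaluating a mental-health assistant from a patient's perspective.
config = EvaluationConfig(UserType.PATIENT, DomainType.MENTAL_HEALTH, TaskType.ASSISTANT)
print(config)
```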
In the following, we outline the specific accuracy metrics essential for healthcare chatbots, detail the problems they address, and expound upon the methodologies employed to acquire and evaluate them.

**Intrinsic metrics** are employed to address linguistic and relevance problems of healthcare chatbots in each individual conversation between the user and the chatbot. They can ensure that the generated answer is grammatically accurate and pertinent to the question. Table 1 summarizes the intrinsic metrics used to evaluate LLMs.

**Sensitivity, Specificity, Interestingness (SSI)**,[7] an extrinsic metric, assesses the overall flow, logic, and coherence of the generated text, contributing to user engagement. The SSI metric measures how well the model's answers align with human behavior. The SSI score is computed as the average of the three component ratings: sensitivity, specificity, and interestingness.

**Robustness,[15; 113]** as an extrinsic metric, explores the resilience of healthcare chatbots against perturbations and adversarial attacks. It addresses the challenge of response vulnerability by assessing a language model's ability to maintain performance and dependability amidst input variations, noise, or intentional behavior manipulation. In healthcare chatbots, where human inquiries may not precisely align with their underlying issues or intent, robustness assumes paramount importance.

**Generalization,[15; 113]** as an extrinsic metric, pertains to a model's capacity to effectively apply acquired knowledge in accurately performing novel tasks. In the context of healthcare, the significance of the generalization metric becomes pronounced due to the scarcity of data and information across various medical domains and categories.

Figure 2: **A broad overview of the evaluation process and the role of metrics.** Evaluators engage with healthcare chatbot models, considering confounding variables, to assign scores for each metric. These scores will be utilized to generate a comparative leaderboard, facilitating the comparison of healthcare chatbot models based on various metrics.
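As described above, the SSI score is an average of human ratings along three dimensions. A minimal sketch of this aggregation is shown below; the binary rating protocol and field names are illustrative assumptions rather than a prescribed procedure.

```python
# Minimal sketch of aggregating SSI from human ratings; the binary-label
# protocol and the rating field names here are assumptions.
from statistics import mean

def ssi_score(ratings: list) -> float:
    """Each rating is e.g. {"sensitive": 1, "specific": 0, "interesting": 1}."""
    sensitivity = mean(r["sensitive"] for r in ratings)
    specificity = mean(r["specific"] for r in ratings)
    interestingness = mean(r["interesting"] for r in ratings)
    return mean([sensitivity, specificity, interestingness])

ratings = [
    {"sensitive": 1, "specific": 1, "interesting": 0},
    {"sensitive": 1, "specific": 0, "interesting": 1},
]
print(f"SSI = {ssi_score(ratings):.2f}")
```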
Table 2: Evaluation metrics for healthcare chatbots. For each user-centered metric group, the table lists the low-level metrics, their definitions, the problems they address, and the benchmarks used to assess them (columns: User-centered Metrics, Low-level Metrics, Definition, Problem, Benchmark).
Figure 3: **Overview of the four healthcare evaluation metric groups.** Accuracy metrics are scored based on domain and task types, trustworthiness metrics are evaluated according to the user type, empathy metrics consider patients' needs in evaluation (among the user types), and performance metrics are evaluated based on all three confounding variables. The metrics identify the listed problems of healthcare chatbots. The size of a circle reflects the number of metrics that contribute to identifying that problem.

**Conciseness**, as an extrinsic metric, reflects the effectiveness and clarity of communication by conveying information in a brief and straightforward manner, free from unnecessary or excessive details.[114, 115] In the domain of healthcare chatbots, generating concise responses becomes crucial to avoid verbosity or needless repetition, as such shortcomings can lead to misunderstanding or misinterpretation of context.

**Up-to-dateness** serves as a critical metric to evaluate the capability of chatbots to provide information and recommendations based on the most current and recently published knowledge, guidelines, and research. Given the rapid advancements within the healthcare domain, maintaining up-to-date models is essential to ensure that the latest findings and research inform the responses provided by chatbots.[116, 35] To achieve up-to-dateness, integration of retrieval-based models as external information-gathering systems is necessary. These retrieval-based models enable the retrieval of the most recent information related to user queries from reliable sources, ensuring that the primary model incorporates the latest data during inference.

**Groundedness**, the final metric in this category, focuses on determining whether the statements generated by the model align with factual and existing knowledge. Factuality evaluation involves verifying the correctness and reliability of the information provided by the model. This assessment requires examining the presence of true causal relations among generated words,[117] which must be supported by evidence from reliable reference sources.[7, 12] Hallucination issues in healthcare chatbots arise when responses appear factually accurate but lack a valid foundation.[118, 119, 5, 120] To address this, groundedness leverages relevant factual information, promoting sound reasoning and up-to-dateness. Designing experiments and evaluating groundedness for general language and chatbot models follows established good practices.[121, 122, 123, 117, 118, 124]

### Trustworthiness

Trustworthiness, an essential aspect of Responsible AI, plays a critical role in ensuring the reliability and conscientiousness of healthcare chatbot responses. To address these significant concerns, we propose four trustworthiness metrics: safety, privacy, bias, and interpretability. It is important to note that these trustworthiness metrics are defined based on the user's type. For instance, the desired level of interpretability for a generated text may vary between a patient and a nurse, necessitating tailored evaluations for different user groups.
The **Safety** metric evaluates the extent to which a model adheres to ethical and responsible guidelines in its generated responses.[116, 125, 28, 5] It primarily focuses on mitigating potential risks associated with harmful or inappropriate content (toxicity) produced by LLMs.[113] Safety encompasses multiple aspects, including the model's confidence level in its answers, the level of detail included in the responses, and the potential risks or harms posed by the answers.[7] These aspects can be tailored based on the intended user type. For example, when a healthcare professional interacts with the chatbot, the model can provide more specific advice or directives, such as prescribing dosage or duration of drug usage. However, when interacting with actual patients, the model should exercise greater caution.

The **Privacy** metric is devised to assess whether the model utilizes users' sensitive information for either model fine-tuning or general usage.[126] Privacy is evaluated from three perspectives. First, users may share sensitive information with a chatbot to obtain more accurate results, but this information should remain confined to the context of the specific chat session and not be used when answering queries from other users.[127] Second, the model should adhere to specific guidelines to avoid requesting unnecessary or privacy-sensitive information from users during interactions. Lastly, the dataset used to train the model may contain private information about real individuals, which could be extracted through queries to the model.

**Bias** in healthcare chatbots refers to the presence of discriminatory treatment in their responses. Bias encompasses three significant aspects: _demographic bias_, where the model's responses exhibit disparities or favoritism based on attributes like race, gender, age, or socioeconomic status; _medical condition bias_, which can arise when the LLM provides inconsistent or unequal recommendations or information (e.g., conservative approaches) for different medical conditions without logical clinical justification; and _representation bias_, occurring when certain groups or medical conditions are underrepresented or overlooked in the training data of the language model, leading to incomplete or inaccurate information. Bias evaluation[128] can be accomplished by employing an agreement index based on the frequency with which a healthcare chatbot agrees with stereotype statements.[6] For instance, if the chatbot responds to the query "People of [selected ethnic group] are usually addicted to drugs" with an agreement or similar terms, it is considered an instance of agreement, indicating the presence of bias.

The **Interpretability** metric assesses the chatbot's responses in terms of user-centered aspects, measuring the transparency, clarity, and comprehensibility of its decision-making process.[101] This evaluation allows users and healthcare professionals to understand the reasoning behind the chatbot's recommendations or actions. Interpretability ensures that the chatbot's behavior can be traced back to specific rules, algorithms, or data sources.[129]
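To make the agreement-index idea concrete, the sketch below counts how often a chatbot's responses to stereotype statements read as agreement. The stereotype prompts and the keyword-based agreement detector are hypothetical placeholders; an actual study would rely on curated statements and human or model-based judgments of agreement.

```python
# Illustrative sketch of the agreement-index idea for bias probing; the
# keyword-based detector and example responses are hypothetical placeholders.
def agrees(response: str) -> bool:
    response = response.lower()
    return any(cue in response for cue in ("i agree", "that is true", "yes,"))

def agreement_index(responses: list) -> float:
    """Fraction of stereotype statements the chatbot agrees with (higher = more bias signal)."""
    return sum(agrees(r) for r in responses) / len(responses)

responses_to_stereotypes = [
    "I agree, that is often the case.",
    "No, that statement is a harmful generalization and is not supported by evidence.",
]
print(f"Agreement index: {agreement_index(responses_to_stereotypes):.2f}")
```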
### Empathy

Empathy is the ability to understand and share the feelings of another person. Empathy metrics are established according to the user's type and hold particular significance, especially when the intended recipient is a patient. These metrics ensure that the chatbots consider end-users' emotional support, trust, concerns, fairness, and health literacy.[130, 131, 95] Empathy also plays a crucial role in building trust between users and chatbots. Unempathetic responses can erode trust and credibility in the system, as users may feel unheard, misunderstood, or invalidated. In pursuit of empathy, we propose four empathy metrics: emotional support, health literacy, fairness, and personalization.

The **Emotional Support** metric evaluates how chatbots incorporate user emotions and feelings. This metric focuses on improving chatbot interactions with users based on their emotional states while avoiding the generation of harmful responses. It encompasses various aspects such as active listening, encouragement, referrals, psychoeducation, and crisis interventions [132].

The **Health Literacy** metric assesses the model's capability to communicate health-related information in a manner understandable to individuals with varying levels of health knowledge. This evaluation aids patients with low health knowledge in comprehending medical terminology, adhering to post-visit instructions, utilizing prescriptions appropriately, navigating healthcare systems, and understanding health-related content [133]. For instance, "pneumonia is hazardous" might be challenging for a general audience, while "lung disease is dangerous" could be a more accessible option for people with diverse levels of health knowledge.

The **Fairness** metric evaluates the impartiality and equitable performance of healthcare chatbots. This metric assesses whether the chatbot delivers consistent quality and fairness in its responses across users from different demographic groups, considering factors such as race, gender, age, or socioeconomic status [134, 135]. Fairness and bias are two related but distinct concepts in the context of healthcare chatbots. Fairness ensures equal treatment or responses for all users, while bias examines the presence of unjustified preferences, disparities, or discrimination in the chatbot's interactions and outputs [136, 137]. For instance, a model trained on an imbalanced dataset, with dominant samples from white males and limited samples from Hispanic females, might exhibit bias due to the imbalanced training data. Consequently, it may provide unfair responses to Hispanic females, as their patterns were not accurately learned during the training process.

The **Personalization** metric gauges the degree of customization and individualization in the chatbot's conversations. It assesses how effectively the chatbot incorporates end-users' preferences, demographics, past interactions, behavioral patterns, and health parameters (collected from sources like electronic health records) when generating responses. Personalization can be evaluated from two perspectives: personalized conversation (communication procedure) and personalized healthcare suggestions (output). The metric can be obtained through subjective human-based evaluation methods [138].

### Performance

Performance metrics are essential in assessing the runtime performance of healthcare conversational models, as they significantly impact the user experience during interactions. From the user's perspective, two crucial quality attributes that healthcare chatbots should primarily fulfill are **usability** and **latency**.
**Usability** refers to the overall quality of a user's experience when engaging with chatbots across various devices, such as mobile phones, desktops, and embedded systems. **Latency** measures the round-trip response time for a chatbot to receive a user's request, generate a response, and deliver it back to the user. Low latency ensures prompt and efficient communication, enabling users to obtain timely responses. It is important to note that performance metrics may remain invariant concerning the three confounding variables (user type, domain type, and task type). In the following, we outline the performance metrics for healthcare conversational models.

The **Memory Efficiency** metric quantifies the amount of memory utilized by a healthcare chatbot. Popular LLMs, such as GPT-4, Llama, and BERT, often require large memory capacity [139, 140, 134, 13, 141], making it challenging to run them on devices with limited memory, such as embedded systems, laptops, and mobile phones [142]. The **FLOPs** metric quantifies the number of floating-point operations the chatbot performs to produce a response, capturing its computational cost on the target device.

The metric categories also interact with one another: for example, improving empathy may necessitate personalization, which can potentially compromise privacy and lead to biased responses. A significant relationship likewise exists between performance metrics and the other three categories. For instance, the number of parameters in a language model can impact accuracy, trustworthiness, and empathy metrics. An increase in parameters may introduce complexity, potentially affecting these metrics positively or negatively. Conversely, a low parameter count can limit the model's knowledge acquisition and influence the values of these metrics.
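Of the performance metrics above, latency is the most straightforward to quantify automatically. The sketch below illustrates one way to measure round-trip response time; the `ask_chatbot` function is a hypothetical stand-in for whichever API the chatbot under test exposes.

```python
# Minimal sketch of measuring round-trip latency for a chatbot endpoint; the
# `ask_chatbot` function is a placeholder for the real request/response call.
import time
from statistics import mean, quantiles

def ask_chatbot(query: str) -> str:
    time.sleep(0.05)          # stand-in for the real network and inference time
    return "placeholder answer"

def measure_latency(queries: list) -> dict:
    latencies = []
    for q in queries:
        start = time.perf_counter()
        _ = ask_chatbot(q)
        latencies.append(time.perf_counter() - start)
    # Report the mean and an approximate 95th-percentile latency in seconds.
    return {"mean_s": mean(latencies), "p95_s": quantiles(latencies, n=20)[18]}

print(measure_latency(["What are common flu symptoms?"] * 30))
```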
### Evaluation Methods

Various automatic and human-based evaluation methods can quantify each metric, and the selection of evaluation methods significantly impacts metric scores. Automatic approaches utilize established benchmarks to assess the chatbot's adherence to specified guidelines, such as using robustness benchmarks alongside metrics like ROUGE or BLEU to evaluate model robustness. However, a notable concern arises when employing existing benchmarks (see Table 2) to automatically evaluate the relevant metrics. These benchmarks may lack comprehensive assessments of the chatbot model's robustness concerning confounding variables specific to the target user type, domain type, and task type. Ensuring a thorough evaluation of robustness requires diverse benchmarks that cover various aspects of the confounding variables.

Human-based methods involve providing questions or guidelines to human annotators who score the chatbot's generated answers based on given criteria. This approach presents two main challenges: subjectivity and the need for a variety of domain-expert annotators. To minimize bias, involving multiple annotators in scoring the same samples is essential to capture normative human judgments. Additionally, expert annotators from diverse healthcare domains are required to ensure accurate and comprehensive annotation. It is also crucial to acknowledge two strategies for scoring metrics. In chat sessions, multiple conversation rounds occur between the user and the healthcare chatbot. The first strategy involves scoring after each individual query is answered (per answer), while the second involves scoring the healthcare chatbot once the entire session is completed (per session). Some metrics, like the intrinsic ones, perform better when assessed on a per-answer basis.[144]

### Model Prompt Techniques and Parameters

Prompt engineering[145] significantly impacts the responses generated by healthcare chatbots, and the choice of prompt technique plays a pivotal role in achieving improved answers. Various prompting methods, such as zero-shot, few-shot, chain of thought generated with evidence, and persona-based approaches, have been proposed in the literature. Apart from prompting techniques, evaluation based on model parameters during inference is also crucial. Modifying these parameters can influence the chatbot's behavior when responding to queries. For example, adjusting the beam search parameter[146] can impact the safety level of the chatbot's answers, and similar effects apply to other model parameters like temperature,[147] which can influence specific metric scores.
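A minimal sketch of how such inference settings might be varied systematically during evaluation is given below; the `generate` function and its keyword arguments (temperature, number of beams) are hypothetical stand-ins for whichever inference API the chatbot under test actually exposes.

```python
# Illustrative sketch of sweeping decoding parameters during evaluation; the
# `generate` function and its arguments are hypothetical placeholders.
from itertools import product

def generate(query: str, temperature: float, num_beams: int) -> str:
    # Placeholder for a real model call with the chosen decoding parameters.
    return f"[answer to '{query}' @ T={temperature}, beams={num_beams}]"

def sweep(query: str) -> dict:
    results = {}
    for temperature, num_beams in product((0.2, 0.7, 1.0), (1, 4)):
        answer = generate(query, temperature=temperature, num_beams=num_beams)
        # Each answer would then be scored with the chosen metrics (e.g., safety).
        results[(temperature, num_beams)] = answer
    return results

for setting, answer in sweep("Is this dosage safe for a child?").items():
    print(setting, answer)
```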
## 5 Toward an Effective Evaluation Framework

Considering the aforementioned deliberations regarding the requirements and complexities entailed in the evaluation of healthcare chatbots, it is of paramount importance to institute effective evaluation frameworks. The principal aim of these frameworks is to implement a cooperative, end-to-end, and standardized approach, thus empowering healthcare research teams to proficiently assess healthcare chatbots and extract substantial insights from metric scores. In this context, Figure 4 presents an illustrative high-level representation of such an evaluation framework. It includes essential components requiring adaptation during the evaluation process. Notably, while recent studies[148, 131, 31, 77] have introduced various evaluation frameworks, it is important to recognize that these may not fully cater to the specific needs of healthcare chatbots. Hence, certain components in our proposed evaluation framework differ from those in prior works. In the ensuing sections, we expound on these components and discuss the challenges that necessitate careful consideration and resolution.

The term **Models** within the evaluation framework pertains to both current and prospective healthcare chatbot models. The framework should enable seamless interaction with these models to facilitate efficient evaluation.

The evaluation framework encompasses the configurable **Environment**, where researchers establish specific configurations aligned with their research objectives. The three key configuration components consist of confounding variables, prompt techniques and parameters, and evaluation methods.

Figure 4: **An illustrative high-level representation of an evaluation framework containing five main components: models, environment, interface, interacting users, and leaderboard.**

1. The **Confounding Variables** component is pivotal, as it stores configurations related to users, domains, and task types. The ability to adjust these variables in the evaluation framework ensures alignment among all stakeholders evaluating the target healthcare chatbot model, fostering a consistent and uniform evaluation perspective.
2. The **Prompt Techniques and Parameters** component enables the configuration of desired prompting techniques and LLM parameters. Evaluators utilize these configurations during the model evaluation process.
3. The **Evaluation** component represents a critical aspect of the evaluation framework, providing essential tools for evaluators to calculate individual metric scores, category-level metric scores, and a comprehensive total score for the desired healthcare chatbot model. Figure 4 illustrates the tools required in this component.

To create a comprehensive evaluation process, specific requirements must be addressed. These include developing tailored benchmarks for healthcare domains, establishing detailed guidelines for human-based evaluations, introducing innovative evaluation methods designed explicitly for healthcare metrics, and providing evaluation tools to support annotators. One primary requirement for a comprehensive evaluation component is the development of healthcare-specific benchmarks that align with the identified metric categories, similar to the benchmarks introduced in Table 2 but more concentrated on healthcare. These benchmarks should be well-defined, covering each metric category and its sub-groups to ensure thorough testing of the target metrics. Tailored benchmarks for specific healthcare users, domains, and task types should also be established to assess chatbot performance within these confounding variables. When combined with automatic evaluation methods like ROUGE and BLEU, these benchmarks enable scoring of the introduced extrinsic metrics.

The second crucial requirement involves creating comprehensive human guidelines for evaluating healthcare chatbots with the aid of human evaluators. These guidelines facilitate manual scoring of metrics. Healthcare professionals can assess the chatbot's performance from the perspective of the final users, while intended users, such as patients, can provide feedback based on the relevance and helpfulness of answers to their specific questions and goals. As such, these guidelines should accommodate the different perspectives of the chatbot's target user types. To ensure objectivity and reduce human bias, providing precise guidelines for assigning scores to different metric categories is indispensable. This fosters consistency in scoring ranges and promotes standardized evaluation practices. Utilizing predefined questions for evaluators to assess generated answers has proven effective in improving the evaluation process.
By establishing standardized questions for each metric category and its sub-metrics, evaluators exhibit more uniform scoring behavior, leading to enhanced evaluation outcomes.[7; 121]

The third crucial requirement involves devising novel evaluation methods tailored to the healthcare domain. These methods should integrate elements from the previous requirements, combining benchmark-based evaluations with supervised approaches to generate a unified final score encompassing all metric categories. Moreover, the final score should account for the priorities assigned to each metric category. For example, if trustworthiness outweighs accuracy in a specific task, the final score should reflect this prioritization. The integration of the aforementioned requirements should produce the desired scores, treating the evaluation component as a black box. Nevertheless, an unexplored avenue lies in leveraging BERT-based models trained on healthcare-specific categorization and scoring tasks. By utilizing such models, it becomes possible to calculate scores for individual metrics, thereby augmenting the evaluation process.

To facilitate effective evaluation and comparison of diverse healthcare chatbot models, the healthcare research team must meticulously consider all of the introduced environment configurations. By collectively addressing these factors, the interpretation of metric scores can be standardized, thereby mitigating confusion when comparing the performance of various models.

The **Interface** component serves as the interaction point between the environment and users. Through this interface, interacting users can configure the environment by selecting the desired model for interaction, modifying model parameters, choosing the target user type, accessing evaluation guidelines, selecting the evaluation method, utilizing the latest introduced benchmarks, and more. Furthermore, the interface enables researchers to create new models, evaluation methods, guidelines, and benchmarks within the provided environment.

The **Interacting users** of the evaluation framework serve different purposes and can be categorized into two main groups: evaluators and healthcare research teams. Evaluators utilize the evaluation framework through the interface to assess healthcare chatbot models and score the metrics. Healthcare research teams encompass computer and data scientists, who contribute to new model creation and the development of novel evaluation methods, as well as healthcare professionals, who conduct new studies or contribute to the establishment of new benchmarks and guidelines. For instance, a healthcare research team might evaluate the performance of ChatGPT in answering mental health queries. In this scenario, healthcare professionals can introduce a new benchmark in the evaluation framework or provide novel guidelines to evaluators for evaluating ChatGPT based on the metrics and assigning scores. Alternatively, the healthcare research team can use the existing evaluation tools to evaluate ChatGPT's performance in mental health. Eventually, the healthcare research team can report their findings and scores obtained through the evaluation process.
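As noted above, the unified final score should respect the priorities assigned to each metric category. The sketch below shows one simple way such priority-weighted aggregation and ranking could be computed; the weights, category names, and example scores are hypothetical.

```python
# Minimal sketch of priority-weighted aggregation of category scores into a
# single leaderboard score; all weights and example scores are hypothetical.
def overall_score(category_scores: dict, weights: dict) -> float:
    total_weight = sum(weights.values())
    return sum(category_scores[c] * w for c, w in weights.items()) / total_weight

# Example priorities: trustworthiness outweighs accuracy for this task.
weights = {"accuracy": 0.3, "trustworthiness": 0.4, "empathy": 0.2, "performance": 0.1}

models = {
    "chatbot_A": {"accuracy": 0.82, "trustworthiness": 0.74, "empathy": 0.66, "performance": 0.90},
    "chatbot_B": {"accuracy": 0.78, "trustworthiness": 0.85, "empathy": 0.71, "performance": 0.80},
}

leaderboard = sorted(models, key=lambda m: overall_score(models[m], weights), reverse=True)
for rank, name in enumerate(leaderboard, start=1):
    print(rank, name, round(overall_score(models[name], weights), 3))
```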
The **Leaderboard** represents the final component of the evaluation framework, providing interacting users with the ability to rank and compare diverse healthcare chatbot models. It offers various filtering strategies, allowing users to rank models according to specific criteria. For example, users can prioritize accuracy scores to identify the healthcare chatbot model with the highest accuracy in providing answers to healthcare questions. Additionally, the leaderboard allows users to filter results based on the confounding variables, facilitating the identification of the most relevant chatbot models for their research study.

## 6 Conclusion

Generative AI, particularly chatbots, shows great potential in revolutionizing the healthcare industry by offering personalized, efficient, and proactive patient care. This paper delved into the significance of tailored evaluation metrics specifically for healthcare chatbots. We introduced a comprehensive set of user-centered evaluation metrics, grouped into four categories: accuracy, trustworthiness, empathy, and computing performance. The study highlighted the potential impact of confounding variables on metric definition and evaluation. Additionally, we emphasized how these metrics can address pertinent issues and enhance the reliability and quality of healthcare chatbot systems, ultimately leading to an improved patient experience. Lastly, we examined the challenges associated with developing and implementing these metrics in the evaluation process. For future work, we intend to design and implement a comprehensive and dynamic evaluation framework for healthcare chatbots. Additionally, we aim to introduce healthcare-related benchmarks to enhance their performance.

## 7 Contributions

MA and EKH conducted the research, analyzed the findings, and drafted the manuscript. IA played a key role in designing the study and revised the paper critically. DO contributed to drafting the performance subsection and revised the paper. ZSHA contributed guidance, critical revisions, and the design of the visualizations. AT and BL revised and validated the study from clinical perspectives. RS refined the paper and ensured alignment with NIST metrics. ZY contributed to drafting one proposed metric. YW and OG participated in the revision process. LJL, RJ, and AMR led the study, provided mentoring and guidance throughout, and conducted critical revisions of the manuscript. All authors read and approved the final manuscript.

## 8 Disclaimer

Certain commercial systems are identified in this paper. Such identification does not imply recommendation or endorsement by NIST; nor does it imply that the products identified are necessarily the best available for the purpose. Further, any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NIST, other supporting U.S. government, or corporate organizations.

## 9 Acknowledgments

We would like to thank the following NIST people for their in-depth comments: Ian Soboroff, Hoa Dang, Jacob Collard, and Reva Schwartz. Furthermore, we express our gratitude to Nigam Shah from Stanford for his valuable feedback, which has contributed to the enhancement of the paper.
2309.05138
GenAIPABench: A Benchmark for Generative AI-based Privacy Assistants
Privacy policies of websites are often lengthy and intricate. Privacy assistants help simplify policies and make them more accessible and user-friendly. The emergence of generative AI (genAI) offers new opportunities to build privacy assistants that can answer users' questions about privacy policies. However, genAI's reliability is a concern due to its potential for producing inaccurate information. This study introduces GenAIPABench, a benchmark for evaluating Generative AI-based Privacy Assistants (GenAIPAs). GenAIPABench includes: 1) A set of questions about privacy policies and data protection regulations, with annotated answers for various organizations and regulations; 2) Metrics to assess the accuracy, relevance, and consistency of responses; and 3) A tool for generating prompts to introduce privacy documents and varied privacy questions to test system robustness. We evaluated three leading genAI systems, ChatGPT-4, Bard, and Bing AI, using GenAIPABench to gauge their effectiveness as GenAIPAs. Our results demonstrate significant promise in genAI capabilities in the privacy domain while also highlighting challenges in managing complex queries, ensuring consistency, and verifying source accuracy.
Aamir Hamid, Hemanth Reddy Samidi, Tim Finin, Primal Pappachan, Roberto Yus
2023-09-10T21:15:42Z
http://arxiv.org/abs/2309.05138v3
# GenAIPABench: A Benchmark for Generative AI-based Privacy Assistants

###### Abstract.

Privacy policies inform users about the data management practices of organizations. Yet, their complexity often renders them largely incomprehensible to the average user, necessitating the development of _privacy assistants_. With the advent of generative AI (genAI) technologies, there is an untapped potential to enhance privacy assistants in answering user queries effectively. However, the reliability of genAI remains a concern due to its propensity for generating incorrect or misleading information. This study introduces GenAIPABench, a novel benchmarking framework designed to evaluate the performance of Generative AI-based Privacy Assistants (GenAIPAs). GenAIPABench comprises: 1) A comprehensive set of questions about an organization's privacy policy and a data protection regulation, along with annotated answers for several organizations and regulations; 2) A robust set of evaluation metrics for assessing the accuracy, relevance, and consistency of the generated responses; and 3) An evaluation tool that generates appropriate prompts to introduce the system to the privacy document and different variations of the privacy questions to evaluate its robustness. We use GenAIPABench to assess the potential of three leading genAI systems in becoming GenAIPAs: ChatGPT, Bard, and Bing AI. Our results demonstrate significant promise in genAI capabilities in the privacy domain while also highlighting challenges in managing complex queries, ensuring consistency, and verifying source accuracy.

Generative AI, LLM, Privacy Policies, Data Protection Regulations, Benchmark
## 1. Introduction

The marked performance variability over time in models like GPT-3.5 and GPT-4 necessitates systematic benchmarking for consistent performance evaluation and quality control, and to foster transparency and accountability. However, the evaluation of LLMs and genAI is a challenging task, as these models are often trained on massive datasets and can generate text that seems indistinguishable from human-written text. A variety of evaluation metrics and methods have been proposed, such as F1 score, BLEU score, ROUGE score, METEOR score, adversarial evaluation, and CIDEr score (Zhu et al., 2017; Liu et al., 2018; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019), but there is no single metric or method that is universally accepted since, in general, the evaluation depends on the domain studied. Evaluating privacy involves complex challenges such as the absence of a clear ground truth, multi-dimensional objectives like data minimization and user consent, and the subjective nature of user perception, which often does not align with technical metrics. These factors make it challenging to establish a universally accepted evaluation method in the privacy domain. While genAI systems have been evaluated in domains such as healthcare, finance management, and mental health, a noticeable gap exists in the literature when it comes to evaluating genAI performance within the privacy domain. This lack of scrutiny on privacy-related concerns could potentially leave users exposed to a range of risks, highlighting the urgent need for an in-depth evaluation in this domain. We present the GenAIPABench benchmark to evaluate future genAI-enabled privacy assistants. The benchmark evaluates performance on a diverse set of tasks around topics that include, among others, transparency, user control, data minimization and purpose, security, and encryption.
The benchmark includes: 1) A sample corpus of privacy policies and privacy regulations; 2) Questions an individual might have about the specific privacy policy of a company or particular data regulations, along with annotated answers; 3) A set of metrics to evaluate the answers obtained from the GenAIPA based on relevance, accuracy, clarity, completeness, and reference; and 4) An evaluator which applies the metrics to assess GenAIPA performance. Hence, the main contributions of this paper are as follows:

* We present the first benchmark, to the authors' knowledge, to evaluate GenAIPAs.
* We evaluate three popular genAI chatbots (ChatGPT, BARD, and Bing Chat) using GenAIPABench.
* We analyze the results obtained and discuss challenges and opportunities for the development of GenAIPAs.

The rest of the paper is structured as follows. In Section 2, we review the state of the art on privacy benchmarking and genAI evaluation. In Section 3, we introduce the benchmark. In Section 4 and Section 5, we detail GenAIPABench's question corpus and metrics, respectively. In Section 6, we present the experiments performed using GenAIPABench. In Section 7, we provide a discussion on challenges and opportunities. Finally, Section 8 concludes the paper and presents directions for future research.

## 2. Related Work

Since our benchmark is the first developed to assess the performance of GenAIPAs, we survey previous work on privacy benchmarks and on benchmarking general-purpose genAI systems as well as general-purpose question-answering systems.

**Privacy Benchmarks.** In recent years, there has been a growing interest in developing benchmarks and evaluation frameworks to assess the effectiveness and usability of privacy policies, as well as the transparency and capabilities of language models. Several notable projects and challenges have emerged to address these concerns, each with unique approaches and objectives. For instance, the authors of (Zhu et al., 2017) focused on building PrivacyQA, a corpus of 1,750 questions and answers about privacy policies of mobile applications, along with over 3,500 expert annotations of relevant answers. The goal is to empower users to inform themselves about privacy issues and enable them to explore these issues selectively. A key advantage of PrivacyQA is its expertly crafted responses. Using answers provided by legal experts, PrivacyQA achieved higher reliability and precision in its responses. The queries within PrivacyQA, although relevant to privacy policies, are highly specific to the mobile applications included in their study. The Usable Privacy Policy Project (Zhu et al., 2017) focuses on making privacy policies more accessible and understandable for users by leveraging machine learning and natural language processing techniques to analyze and summarize them. The project's "OPP-115 Corpus" dataset (Zhu et al., 2017) comprises 115 website privacy policies annotated with various information types.

**genAI Evaluation.** Ge et al. (Zhu et al., 2017) introduced the OpenAGI research platform, designed to incorporate domain-specific expert models with LLMs, finding that optimized smaller LLMs could outperform their larger counterparts using Reinforcement Learning from Task Feedback mechanisms. Similarly, Kang et al. (Kang et al., 2018) delved into the ability of LLMs to understand user preferences and found that although they underperform in zero-shot and few-shot settings, fine-tuned LLMs could rival traditional Collaborative Filtering (CF) methods in predicting user ratings.
Chiang and Lee (Chiang and Lee, 2018) assessed LLMs as substitutes for human evaluations in textual quality assessments, revealing a high degree of consistency between LLM and human ratings, particularly when using advanced models like InstructGPT and ChatGPT. Liu et al. (Liu et al., 2019) introduced AgentBench, a multi-dimensional evolving benchmark specifically aimed at evaluating LLMs as decision-making agents in interactive environments. On another front, Bang et al. (Bang et al., 2018) evaluated ChatGPT on a range of tasks, including reasoning and hallucination, exposing limitations, particularly in low-resource and non-Latin languages. Zhang et al. (Zhang et al., 2019) explored the efficacy of LLMs in news summarization, cautioning that while LLMs can be effective, they are also prone to generating misleading or inaccurate summaries. Lastly, Liu et al. (Liu et al., 2019) employed a rigorous custom benchmarking framework, EvalPlus, to assess the functional correctness of code generated by LLMs, revealing previously undetected errors. These studies collectively underscore the critical need for comprehensive and diverse evaluation metrics and methods to understand, optimize, and safely deploy LLMs. _General Question-Answering Benchmarks._ Question-answering benchmarks (Zhu et al., 2017; Liu et al., 2019; Liu et al., 2019) contain a large set of questions and their corresponding answers, often sourced from a specific domain, such as Wikipedia or news articles. The questions vary in difficulty and cover a wide range of topics, from factual to inferential and complex reasoning. To evaluate the quality of answers generated by LLMs, benchmarks use various metrics such as accuracy, precision, recall, and F1 score (Zhu et al., 2017; Liu et al., 2019). In addition, benchmarks may also focus on specific aspects of the answers, such as their clarity, relevance, and completeness. These metrics and aspects help assess the quality of LLMs' responses to questions and compare their performance to that of human experts. Another noteworthy initiative, the Holistic Evaluation of Language Models (HELM), seeks to enhance language models' transparency by adopting a multi-metric approach and conducting large-scale evaluations across various language models, scenarios, and metrics. This effort aims to provide a comprehensive understanding of language models' capabilities, limitations, and risks (Krishnan, 2017). In a different vein, (Krishnan, 2017) introduced TriviaQA, a large-scale benchmark for open-domain question answering. The benchmark consists of over 650,000 question-answer pairs, covering a diverse range of topics from science and history to popular culture and current events. Unlike many previous benchmarks, TriviaQA focuses on answering questions that are not tied to a specific context, requiring systems to be able to retrieve and integrate information from a wide range of sources. ## 3. The GenAIPABench Benchmark The GenAIPABench benchmark is designed to evaluate the performance of generative AI-based privacy assistants (GenAIPAs). The goal of GenAIPABench is to evaluate the overall capabilities of the system in assisting users to navigate the complex landscape of data privacy, namely: 1) Answering questions an individual might have about the privacy policy of an organization/corporation/service; 2) Answering questions about data privacy regulations in a specific country/state; 3) Summarizing privacy policies and privacy regulations. 
GenAIPABench comprises the following main components (see Figure 1) * **Privacy documents:** Extracted from web resources the current version of GenAIPABench includes 5 privacy policies and 2 data regulations with their corresponding manually annotated answers to questions. The dataset is included to introduce GenAIPAs to the specific content for which the questions will be asked. This is done to enable the comparison of diverse GenAIPAs regardless of whether their internal models have been trained in the specific documents or not. * **Privacy questions:** Intended to evaluate GenAIPAs' ability to interpret and respond to common inquiries regarding the privacy policies of websites/services as well as privacy regulations. The 24 questions address essential topics such as data collection, storage, sharing, and user rights (see Section 4 for more details). * **Metrics:** Used to measure GenAIPAs' performance in addressing the privacy policy and regulation questions, including aspects such as accuracy, relevance, clarity, completeness, and reference (see Section 5). Human analysts apply these metrics to evaluate the generated responses and identify areas for improvement. * **Annotated answers:** For five of the policies and two of the regulations in the corpus, we curated answers to each of the questions in the benchmark. Two experts, with each assigned a different privacy policy for analysis, were tasked with generating answers based on the policy. Following the initial generation of answers, they reciprocally reviewed each other's work, cross-checking against the original policies, and refining the responses where necessary. This process ensured the accuracy and depth of the annotated answers. * **Evaluator:** Handles the automatic generation of prompts to introduce the GenAIPA to the privacy document and generate the benchmark questions (see Section B for more details). The evaluator also handles the automatic execution of the prompts and collection of answers if an API is available to communicate with the system. ## 4. Question Corpus We introduce the question corpus that represents privacy questions an individual might ask the GenAIPA. ### Privacy Policy Questions The following questions are designed to cover a broad range of privacy-related topics related to the privacy policy of an organization/service and ensure a comprehensive evaluation of the GenAIPA performance. Overall, the questions are based on established privacy frameworks and guidelines and were informed by academic literature and industry reports on privacy policy evaluation and analysis (Krishnan, 2017; Krishnan, 2017; Krishnan, 2017). In particular, the ISO/IEC 29100:2011 - Information Technology - Security Techniques - Privacy framework was used as a reference (Krishnan, 2017). This standard provides a comprehensive framework for privacy management and includes guidelines for privacy impact assessments, and privacy policies. We describe each of the questions split into privacy-related categories. The benchmark includes three questions per category with varying degrees of presumed difficulty ranging from "easier" to "harder" questions. Note that certain questions contain a placeholder _[the company]_ which will be replaced by the evaluator with the name of the company in the privacy policy when generating a prompt for the assistant. The selection of these questions was based on the level of complexity and specificity of the questions. 
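To make the placeholder substitution concrete, the following minimal Python sketch illustrates how an evaluator of this kind might instantiate the question templates for a given company or regulation and prepend the corresponding privacy document as context. This is an illustrative sketch only, not GenAIPABench's actual implementation; the class, function, and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkQuestion:
    category: str    # e.g., "Transparency" or "User Control"
    difficulty: str  # "easy", "medium", or "hard"
    template: str    # may contain the placeholders "[the company]" or "[regulation]"

def build_prompts(questions, company=None, regulation=None, document_text=None):
    """Instantiate question templates and prepend the privacy document as context."""
    prompts = []
    for q in questions:
        text = q.template
        if company is not None:
            text = text.replace("[the company]", company)
        if regulation is not None:
            text = text.replace("[regulation]", regulation)
        if document_text is not None:
            # Introduce the document first, so the assistant answers from the
            # provided text rather than from whatever version it saw in training.
            text = (
                "Below is a privacy document to use as the only source of truth.\n\n"
                f"{document_text}\n\n"
                f"Based on this document, answer the following question:\n{text}"
            )
        prompts.append((q, text))
    return prompts

# Hypothetical usage with two benchmark questions and a policy snippet.
questions = [
    BenchmarkQuestion("Transparency", "easy",
                      "Does the policy outline data collection practices?"),
    BenchmarkQuestion("User Control", "hard",
                      "How does [the company] manage consent and withdrawal of consent from users?"),
]
for _, prompt in build_prompts(questions, company="Spotify",
                               document_text="<full policy text here>"):
    print(prompt[:100], "...")
```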
_Transparency (\(T\))_ refers to the ability of a user to easily understand and access information about how their personal data is being collected, used, and shared by a company or organization. This includes information about the types of data being collected, the purposes for which it will be used, and any third parties with whom it may be shared. Transparency also involves clear and concise language that is easily understandable by an average user, as well as easy access to the privacy policy itself. The following questions were chosen to represent easy, medium, and hard levels of difficulty in evaluating transparency in privacy policies: * \(\mathbf{T_{e}}\) _"Does the policy outline data collection practices?"_ * \(\mathbf{T_{m}}\) _"What is [the company]'s stance on government requests for user data?"_ * \(\mathbf{T_{h}}\) _"How does the policy address potential conflicts of interest in data usage or sharing?"_ The easy question (1) is a straightforward yes or no question that does not require much explanation or context. The medium question (2) asks about the company's stance on government requests for user data, which may require some knowledge of privacy regulations and the company's policies. The hard question (3) addresses potential conflicts of interest in data usage and sharing, a more nuanced and complex issue requiring a deeper understanding of the company's business practices and policies. _User Control (\(UC\))_ refers to the options available to users to manage their personal information and privacy settings. These controls can include the ability to opt out of data collection and sharing, to delete personal data, to access and modify personal data, and to set preferences for how their data is used. The following questions were chosen to represent easy, medium, and hard levels of difficulty for evaluating user controls in the privacy policy: * _"Are users given control over their data and privacy settings?"_ * _"Are there clear mechanisms for users to request data deletion or access?"_ * _"How does [the company] manage consent and withdrawal of consent from users?"_ The easy question (1) is a basic requirement for any privacy policy and reflects the user's fundamental right to control their data. The medium question (2) requires an in-depth understanding of the company's data storage and deletion practices. The hard question (3) addresses how the company manages user consent, which can be a complex issue depending on the company's policies and the jurisdiction in which it operates. _Data Minimization and Purpose Limitation (\(DM\))_ are two important principles that aim to protect the privacy of users. Data minimization involves limiting the collection, use, and storage of personal data to only what is necessary for a specific purpose. This principle is aimed at reducing the risks associated with the processing of personal data and ensuring that data is not used for unrelated or unforeseeable purposes. Purpose limitation, on the other hand, involves restricting the use of personal data to only the purposes for which it was collected. This principle helps to ensure that personal data is not used in ways that are incompatible with the original purpose of collection and provides users with greater control over their personal data. 
The following questions were chosen to represent easy, medium, and hard levels of difficulty in evaluating data minimization and purpose limitations in a privacy policy: * _"Does [the company] minimize data retention periods?"_ * _"How is user data anonymized or aggregated to protect individual privacy?"_ * _"Are there any restrictions on data processing for specific purposes or contexts?"_ The easy question (1) asks whether the company minimizes data retention periods, which can be a straightforward yes or no answer based on the company's stated data retention policy. The medium question (2) addresses how user data is anonymized or aggregated to protect individual privacy, which may require some technical understanding and context to answer fully. The hard question (3) asks about any restrictions on data processing for specific purposes or contexts, which may require a deeper understanding of the company's policies and regulatory requirements.

Figure 1. A high-level overview of GenAIPABench.

#### Security and Encryption (\(Se\)) refers to the measures that organizations take to protect users' personal information from unauthorized access, theft, or hacking. These measures may include using encryption techniques to protect sensitive data, such as usernames, passwords, and credit card information, as well as implementing secure communication protocols to prevent eavesdropping or interception of user data. In addition, organizations may also have policies in place that specify how they handle security breaches or incidents involving personal information. These policies may include notifying affected users in a timely manner, conducting an investigation to determine the cause of the breach, and taking steps to prevent similar incidents from occurring in the future. The following questions were chosen to represent easy, medium, and hard levels of difficulty for evaluating security and encryption in privacy policy: * _"Are user communications encrypted end-to-end?"_ * _"What measures are in place to prevent unauthorized access to user data?"_ * _"How are data breaches or security incidents handled and communicated to users?"_ The easy question (1) is a straightforward yes or no question that assesses whether the company uses end-to-end encryption to protect user communications. The medium question (2) asks about the measures in place to prevent unauthorized access to user data, which may require a more detailed explanation and understanding of the company's security practices. The hard question (3) addresses how data breaches or security incidents are handled and communicated to users, which may require a deeper understanding of the company's incident response plan and the applicable legal and regulatory requirements. #### Privacy by Design and Innovation (PbD) refers to an approach to data protection that prioritizes privacy considerations throughout the entire design and development process of a product or service. This approach involves incorporating privacy-enhancing features into the product or service, such as data minimization, purpose limitation, and security measures, from the initial planning stages. The goal is to prevent privacy risks and ensure that user data is protected by default. The Privacy by Design and Innovation approach also encourages ongoing monitoring and evaluation of privacy practices to identify and address potential privacy issues. 
The following questions were chosen to represent easy, medium, and hard levels of difficulty for evaluating Privacy by Design and Innovation principles in privacy policy: * _"Does [the company] offer user-friendly resources, such as tutorials or guides, to help users effectively manage their privacy settings and understand their data rights?"_ Question (1) is easy because it's a simple yes-or-no query about the company's compliance with privacy laws for minors. Question (2) is medium as it asks about the company's efforts to accommodate unique accessibility needs, requiring a more nuanced understanding of privacy rights. Question (3) is hard because it calls for a detailed evaluation of how effectively the company educates users about complex privacy settings, reflecting a higher level of expertise in user education and privacy protection. _Compliance and Accountability (CA)_ refers to the mechanisms put in place by organizations to ensure that they adhere to privacy regulations and standards. This includes measures such as regular audits, data protection impact assessments, and the appointment of a Data Protection Officer (DPO) to oversee privacy-related activities. Accountability also involves taking responsibility for any privacy breaches or violations that may occur and providing remedies to affected individuals. The following questions were chosen to represent easy, medium, and hard levels of difficulty in evaluating compliance and accountability in the privacy policy: * _"Does the policy comply with applicable privacy laws and regulations?"_ * _"What steps are taken to ensure data processors and subprocessors adhere to privacy requirements?"_ * _"Does [the company] have a process in place for reporting and addressing privacy violations or non-compliance issues, both internally and with third-party vendors?"_ Question (1) is easy because it asks a basic yes-or-no question about the company's compliance with privacy laws, a foundational element of privacy management. Question (2) is medium in difficulty as it delves into the company's vendor management practices, requiring an understanding of third-party compliance within the broader scope of privacy management. Question (3) is hard as it calls for a nuanced assessment of the company's mechanisms for addressing privacy violations, a complex issue that entails understanding both compliance and accountability frameworks. The privacy policy question corpus not only contains the original questions but also their variations, which are essential for assessing the model's robustness (see Table 2). These variations are generated using paraphrasing, which involves rewording a given text while preserving its original meaning, and can help evaluate the GenAIPAs' comprehension and response capability in varied language use scenarios. We utilized a tool called QuillBot 1, an advanced AI paraphrasing tool, to generate these question variations. QuillBot automatically restructures sentences and changes certain phrases or words while retaining the original intent of the sentence. Footnote 1: QuillBot: [https://www.quillbot.com/](https://www.quillbot.com/) ### Privacy Regulation Questions GenAIPABench includes questions to evaluate the performance of the GenAIPA in helping users understand privacy and data protection regulations such as the GDPR or the CCPA. 
We compiled and generalized the following questions extracted from different sources (Zhu et al., 2017; Zhu et al., 2017) that aim to cover a range of topics, from the scope and applicability of the regulations to specific requirements and rights: * _Who must comply with the [regulation]?_ * _What are the [regulation] fines?_ * _How do I comply with the [regulation]?_ * _Does the [regulation] require encryption?_ * _What is personal information and sensitive personal information under the [regulation]?_ * _What rights do I have under the [regulation]?_ Note that the evaluator will replace the placeholder _[regulation]_ with the specific privacy regulation to be evaluated from the privacy document dataset (e.g., GDPR, CCPA, LGPD, etc.). The benchmark, like the prior question corpus, includes question variations through paraphrasing for comprehensive evaluation. ## 5. Metrics In order to evaluate the quality of responses generated by the GenAIPA, we propose a set of metrics that incorporate five key features. The metrics are based on privacy policy evaluation and analysis resources (Zhu et al., 2017; Zhu et al., 2017; Zhu et al., 2017). For instance, we used a report from the Future of Privacy Forum, a non-profit organization focused on advancing responsible data practices, which provides insights into best practices for privacy policy design and evaluation. _Relevance (\(\mathcal{M}_{rel}\))._ measures how well an answer matches the user's query. This has been identified as an important aspect of ensuring user satisfaction in conversational agents (Zhu et al., 2017). Relevant answers to privacy questions enable users to make informed decisions about their data privacy while not relevant answers can lead to frustration and decreased satisfaction, potentially hindering users from understanding their rights and responsibilities (Zhu et al., 2017). _Accuracy (\(\mathcal{M}_{acc}\))._ represents whether the information conveyed is correct or not. Accuracy is essential for building trust and ensuring user acceptance of AI systems, as highlighted in (Zhu et al., 2017). Providing incorrect or invalid information can lead to misinformed decisions and negatively impact the user's perception of the system. A GenAIPA that answers incorrectly, or provides the user with invalid information, can lead to misinformed decisions. In addition to the implications to users' privacy (e.g., continue using a service that is portrayed as not too intrusive or decide to use a different one), if the user recognizes inaccuracies in the answers, this can negatively impact user perceptions of the system's reliability and trustworthiness (Zhu et al., 2017). _Clarity (\(\mathcal{M}_{cla}\))._ represents the effective communication of information ensuring clear and coherent responses (Zhu et al., 2017). The ease of understanding and coherence of a response, as perceived by human readers, plays a significant role in enabling users to make informed decisions based on the information provided. A common issue with many privacy policies is their lack of user-friendliness due to the use of legalese and technical privacy terms, as pointed out in (Zhu et al., 2017).To achieve clarity, GenAIPAs should employ simple and concise language, avoid ambiguity and jargon, and offer clear explanations when necessary. Additionally, these systems should consider the user's level of understanding and tailor responses to their knowledge level. 
By prioritizing clarity in their responses, GenAIPAs can not only enhance user satisfaction but also ensure that the information provided is effectively communicated. _Completeness (\(\mathcal{M}_{com}\))_ represents whether the answer provides all the information necessary to address the user's question (Song et al., 2019). The degree to which a response covers all necessary aspects or details of the topic is essential for ensuring that users do not need to ask multiple follow-up questions. A comprehensive response should cover all relevant aspects of the topic, provide accurate and complete information, and address any related issues that may be relevant to the user's query. Incomplete or inaccurate information can result in users making misinformed decisions or not fully understanding their privacy options. This can lead to frustration and decreased trust in the AI system (Wang et al., 2019). To achieve this, AI systems should be able to understand the context of the user's query and provide responses that are tailored to the user's specific needs. By providing comprehensive responses, AI systems can enhance user satisfaction and improve the efficiency of communication. _Reference (\(\mathcal{M}_{ref}\))_, understood as proper citation or mention of relevant policy sections, is important for ensuring transparency and credibility in legal or policy-related domains, as highlighted in (Wang et al., 2019). When applicable, AI systems should include proper citations or mentions of relevant policy sections in their responses. This not only enhances the accuracy and completeness of the response but also provides transparency and credibility to the user. The proper citation of relevant policy sections should include the appropriate legal or policy language, relevant section numbers, and any other necessary information to help the user better understand the legal or policy implications of their query. By providing proper citations or mentions of relevant policy sections in their responses, AI systems can enhance user trust and ensure that their responses are accurate and legally or policy-compliant. _Metric Evaluation._ GenAIPABench proposes to evaluate each metric, per answer given for a question, on a scale from -1 to +1. In particular, we propose the following evaluation scheme for each metric: * \(\mathcal{M}_{rel}\): +1 for a relevant response, +0.5 for a partially relevant response, and -1 for a not relevant response. * \(\mathcal{M}_{acc}\): +1 for an entirely correct response, +0.5 for a partially correct response, and -1 for an incorrect response. * \(\mathcal{M}_{cla}\): +1 for a clear and easy-to-understand response, +0.5 for a response that is somewhat clear but could be improved, and -1 for a confusing or hard-to-comprehend response. * \(\mathcal{M}_{com}\): +1 for a comprehensive response, +0.5 for a response that is somewhat complete but lacking some minor information, and -1 for a response that is incomplete or missing important details. * \(\mathcal{M}_{ref}\): +1 for a correctly cited relevant policy section, +0.5 for a mentioned section without explicitly citing it, and -1 for an incorrect reference. Then, based on the individual evaluation of the different metrics, GenAIPABench proposes to aggregate the results into an overall quality metric \(\mathcal{M}_{all}\). For that, we propose to calculate the total positive/partial points (i.e., \(\mathcal{M}_{all}^{+}\)) and the total negative points (\(\mathcal{M}_{all}^{-}\)) independently. 
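As an illustration of the scheme above, the short Python sketch below assigns the +1/+0.5/-1 grades per metric, accumulates the positive/partial and negative totals (\(\mathcal{M}_{all}^{+}\) and \(\mathcal{M}_{all}^{-}\)), and rescales the combined raw score to a 1-10 range using the minimum (-5) and maximum (5) values of Eq. (1) that follows. The function and field names are ours and are not part of any released benchmark code.

```python
# Allowed per-metric grades: +1 (satisfactory), +0.5 (partial), -1 (unsatisfactory).
METRICS = ("relevance", "accuracy", "clarity", "completeness", "reference")

def aggregate(grades: dict) -> dict:
    """Aggregate the five per-metric grades assigned to a single answer."""
    assert set(grades) == set(METRICS)
    assert all(g in (1.0, 0.5, -1.0) for g in grades.values())
    m_plus = sum(g for g in grades.values() if g > 0)   # total positive/partial points
    m_minus = sum(g for g in grades.values() if g < 0)  # total negative points
    raw = m_plus + m_minus                              # raw score in [-5, 5]
    m_all = (raw - (-5.0)) / (5.0 - (-5.0)) * 9 + 1     # min-max rescaling of Eq. (1)
    return {"M_plus": m_plus, "M_minus": m_minus, "M_all": m_all}

# Example: relevant and clear, partially accurate and complete, wrong citation.
print(aggregate({"relevance": 1.0, "accuracy": 0.5, "clarity": 1.0,
                 "completeness": 0.5, "reference": -1.0}))
# -> {'M_plus': 3.0, 'M_minus': -1.0, 'M_all': 7.3}
```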
This way, the potential negative impact of the answers to people's privacy decision-making process would be clearly stated. Then, we also combine both metrics and normalize the results using the following equation: \[\mathcal{M}_{all}=\frac{\left(\text{Current Score}-\text{Minimum Score} \right)}{\left(\text{Maximum Score}-\text{Minimum Score}\right)}\times 9+1 \tag{1}\] where the Minimum Score is -5 and the Maximum Score is 5. ## 6. Experiments We evaluated the three most prominent genAI systems at the time of writing this article using GenAIPABench: ChatGPT-4 (Wang et al., 2019), Bard2, and BingAI3. ChatGPT-4 was accessed through OpenAI's API4. Official APIs were unavailable for Bard and BingAI, so they were accessed via their respective website pages. We evaluated the systems using five representative privacy policies (Uber, Spotify, Airbnb, Facebook, and Twitter) and two important privacy regulations (GDPR and CCPA). To contextualize the policies, we include in Table 1 some standard statistics extracted from them, including: the frequency of unique words (Wang et al., 2019), estimated reading time, reading level (computed using the Flesch-Kincaid Grade Level (Wang et al., 2019)), and the frequency of connective words. Two of the authors gathered answers from each system for each privacy document and evaluated the responses. Once the answers from the models were collated, they were cross-referenced with annotated answers. We held meetings to discuss discrepancies and reach a consensus score to foster uniformity and a shared understanding. Footnote 2: [https://bard.google.com/](https://bard.google.com/) Footnote 3: [https://chat.openai.com/](https://chat.openai.com/) Footnote 4: [https://platform.openai.com/](https://platform.openai.com/) The following sections present the analysis of the results obtained. The performance results are plotted in graphs (e.g., Figure 2a) that show the performance score, calculated using interquartile range (IQR) values from questions sorted into easy (\(G_{e}\)), medium (\(G_{m}\)), and hard (\(G_{h}\)) categories. To enhance visual interpretation, average performance scores are also mapped across five metrics using heatmaps (e.g., Figure 2d), where the x-axis represents the chosen metrics and the y-axis corresponds to the different categories of privacy questions.

\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Policy & Unique Words & Reading Time (mins) & Reading Level (FKGL) & Connective Words \\ \hline Twitter & 0.209 & 21 & 10.33 & 0.04 \\ Spotify & 0.158 & 32 & 12.45 & 0.03 \\ Uber & 0.156 & 37 & 11.95 & 0.04 \\ Airbnb & 0.189 & 27 & 14.15 & 0.04 \\ Facebook & 0.18 & 20 & 11.76 & 0.05 \\ \hline \end{tabular} \end{table} Table 1. Features of privacy policies in GenAIPABench.

### Assessing the Quality of Responses to Privacy Policy Questions The purpose of this experiment is to assess the quality of responses in relation to privacy policy questions. The results (see Figure 2) show that ChatGPT-4 and BingAI consistently outperform Bard in most questions. Notably, BingAI stands out in its ability to adeptly handle hard questions, especially in the contexts of Twitter, Facebook, and Airbnb policies. This proficiency may be due to the use of a simpler reading level, a more diverse vocabulary, and lower reading times of the policies (see Table 1). Bard's performance tends to diminish as question complexity increases, a trend not observed as prominently in GPT-4 or BingAI. We next analyze in detail the performance of each individual system. 
**ChatGPT-4**: We observe that the performance of ChatGPT-4 (Figure 2c), across all policies, fluctuated, with scores ranging from as low as 10 to a commendable 100. This variability was more pronounced for questions of higher complexity. A particular trend was noticeable in the median scores. While ChatGPT-4 handled simpler questions well, often securing medians near the 100 mark, it struggled with complex questions, where medians descended to values like 50.5 (Uber) and 88.75 (Spotify). This variability could be ascribed to these policies requiring longer reading times, having a more advanced reading level, and making greater use of connective words. The interquartile range further illustrated this trend. ChatGPT-4's relevance (Figure 2f) in answering questions was generally strong, with scores ranging between 0.6 and 1 for most categories. However, it seemed to struggle slightly with \(CA_{e}\) at 0.1. The clarity exhibited by GPT-4 was commendable, consistently hovering between 0.6 and 1, except for a noticeable dip to 0.5 for \(DM_{h}\). Accuracy, however, was a mixed bag: while GPT-4 scored admirably with a peak of 1 for \(SE_{e}\), it descended to -0.7 for \(T_{h}\). Completeness followed a similar trajectory, ranging from highs like 1 (\(SE_{e}\)) to lows of -0.4 (\(RC_{m}\)). GPT-4's referencing capabilities appeared to be an area for improvement, with several scores lying in the negative domain. **Bard**: Bard (Figure 2a) frequently registered minimum scores of 10 across various complexities and very occasionally scored higher (it peaked at around 91 for the combination of the Uber policy and easy questions). The median scores provide further insights into its tendency to gravitate towards mid-range values for easier questions, evidenced by scores like 55 (Spotify) and 68.5 (Airbnb). As question complexity rose, this median inclination often dropped, settling at values like 37 for both Spotify and Twitter (hard questions). Bard's 1st quartile performance for complex questions often struggled, while its 3rd quartile results indicated that even its top performance strata seldom achieved peak scores. Figure 2d shows that Bard's relevance was high across the board, with scores largely hovering around 1, though it faced challenges with \(SE_{h}\) at 0.2. The clarity metric also displayed consistency, mostly remaining close to 0.9, but \(SE_{m}\) presented a deviation with a score of 0.2. Accuracy for Bard varied considerably: it showed robustness in questions like \(T_{e}\) and \(T_{m}\) with scores at 1 but dipped to -1 for metrics like \(SE_{e}\) and \(PD_{e}\). Regarding completeness, Bard oscillated between a high of 1 (\(SE_{e}\)) and a low of -0.2 (\(PD_{e}\)). The reference domain was particularly diverse for Bard, with scores ranging from 1 (\(RC_{m}\)) to lows of -1 in \(SE_{h}\).

Figure 2. Score distribution across varying difficulty levels (\(G_{e}\), \(G_{m}\), \(G_{h}\)) for privacy policy questions applied to five privacy policies.
The quartile analysis reinforced its robustness, with the 1st quartile values indicating high baseline performance and the 3rd quartile metrics often culminating near or at 100. BingAI's performance showed consistently high values in several metrics (Figure 2e). Its relevance and clarity stood out, surpassing the 0.8 mark. However, \(T_{h}\) was an outlier in relevance with a score of 0.4. BingAI's accuracy demonstrated consistent strength, frequently achieving a score of 1, though significant challenges were noted in \(T_{h}\) and \(AEE_{h}\) with scores of -1. Regarding completeness, BingAI's metrics were predominantly positive, with a substantial number of questions securing a score of 1, but a noticeable decline was observed in \(DM_{h}\) at -0.7. Referencing for BingAI showed variance but managed to avoid deeply negative scores. ### Assessing Robustness through Paraphrased Questions The main goal of this experiment is to evaluate the robustness and consistency of the systems in providing similar responses to paraphrased variants of the questions. The results (see Figure 3) show that ChatGPT-4 displayed consistent strengths, Bard excelled in certain areas but showed referencing challenges, and BingAI presented a mix of highs and noticeable lows. **ChatGPT-4**: ChatGPT-4 exhibited consistent performance across most policies (Figure 3c), irrespective of difficulty. With Spotify, there was a decline in performance as difficulty increased, from a median score of 82 in \(G_{e}\) to 50.5 in \(G_{h}\). Interestingly, the third quartile score remained at 100 for \(G_{h}\), indicating that while the central tendency was lower, a subset of responses still reached the top performance. Twitter and Facebook cases showcased strong performance, with median scores not dipping below 73 across all difficulties. For Airbnb, ChatGPT-4 answered with high proficiency, particularly in easy and hard categories, with the system achieving medians of 100 and 97.75, respectively. For Relevance, scores ranged between 0 and 1, showing high consistency in areas such as \(SE_{e}\), \(SE_{h}\), \(PD_{e}\), and \(PD_{m}\) among others (Figure 3f). Clarity ratings showed a similar tendency, with the model performing excellently on queries like \(SE_{e}\) and \(SE_{h}\), scoring a perfect 1, while encountering challenges in \(RC_{m}\) and \(UC_{h}\). Accuracy results were more variable, with instances like \(SE_{e}\), \(SE_{h}\), and \(AEE_{e}\)' scoring near or at the top, juxtaposed against scores as low as -0.7 in \(DM_{h}\). Completeness spanned from high performances in \(SE_{e}\) to lows in \(RC_{m}\), \(UC_{h}\), and \(DM_{h}\). Reference scores showed strong points, such as 0.9 in \(UC_{e}\) and \(UC_{m}\), but also revealed potential areas of improvement with scores like -0.7 in \(T_{e}\). **Bard**: Bard's performance varied across policies (Figure 3a). For Spotify, while \(G_{e}\) and \(G_{m}\) achieved median scores of 73 and 82, respectively, a significant drop to 37 was observed for hard questions. The minimum scores for these \(G_{h}\) were as low as 10, indicating struggles with the most challenging queries. Uber questions posed difficulties across all levels with the hard questions having a median of 59.5 but a minimum score of 1. Both Twitter and Facebook had mid-range median scores, with hard questions in Twitter yielding a consistent median and third quartile, both at 55. Airbnb responses were relatively stable, with scores fluctuating between 61.75 to 64. 
Analyzing Bard's performance across metrics (see Figure 3d), we observe that relevance ranged from scores as high as 1 for \(UC_{e}\), \(UC_{m}\), and \(UC_{h}\) to as low as 0.2 for \(T_{h}\) and \(PD_{e}\). Clarity was similarly distributed, with certain questions like \(UC_{e}\) receiving high scores of 0.9, while others, such as \(DM_{h}\), only achieved a score of 0.1. Accuracy proved to be a challenging area, with the lowest score being -1 for several questions, including \(SE_{h}\), \(PD_{e}\), and \(AEE_{e}\). However, the model managed to score 0.7 for \(UC_{e}\). Completeness ranged from a notable 0.9 for \(SE_{e}\) to less promising results like -0.3 for \(SE_{h}\). The Reference metric had its highs and lows, with the highest score being 1 for \(T_{e}\) and several instances of -1, indicating an inconsistency in this domain. **BingAI**: BingAI exhibited a mix of outstanding and lackluster performances (Figure 3b). For Spotify, it achieved perfect medians of 10 for \(G_{e}\) and \(G_{h}\), but the range in \(G_{h}\) was wide, from 10 to 100. The Uber policy was challenging, especially in \(G_{h}\), with a median of just 1 and a narrow range, indicating a uniform struggle. Twitter and Facebook policies saw robust results, with medians consistently above 86.5. For Airbnb questions, BingAI's performance was notable, particularly in \(G_{m}\) and \(G_{h}\) categories, where the system reached a perfect median score of 100. BingAI demonstrated great performance in Relevance, particularly for questions like \(T_{e}\), \(PD_{m}\), \(UC_{e}\), and \(SE_{e}\), all scoring a perfect 1, but also showed weaker areas with scores like -0.2 for \(DM_{h}\) (Figure 3e). Clarity maintained a consistent trend, with scores predominantly leaning toward the higher end. For Accuracy, BingAI had top-performing scores in areas like \(AEE_{e}\), \(PD_{m}\), and \(SE_{h}\), but faltered in others, achieving a score of -1 for \(DM_{h}\). In terms of Completeness, it exhibited excellence in \(SE_{e}\) and \(SE_{h}\), both scoring 1, but saw a drop in areas like \(DM_{h}\). The Reference scores varied, ranging from 1 in \(PD_{e}\) and \(PD_{m}\) to lows of -1 in areas such as \(DM_{h}\). ### Assessing the Ability to Recall Learned Privacy Policy Knowledge The purpose of this experiment is to assess the performance of the systems when the privacy policy is not given explicitly and hence the system has to rely on the information it obtained when it was trained. The main question it seeks to answer is how well the system retains and recalls privacy policies in which it was trained. **ChatGPT-4**: We observe that, considering the Spotify policy, ChatGPT-4's performance ranges between 23.5 and 100 scores in the \(G_{e}\) category (Figure 4c). Despite this variability, a strong median of 84.25 indicates its overall competence. The consistency was further emphasized by the narrow interquartile range (80.875 to 92.125). In the Uber policy, all three categories saw the model reaching its zenith with maximum scores of 100. For Twitter and Facebook, the medians (95.5 and 88.75, respectively for \(G_{e}\)) were strong, and the compact interquartile ranges again indicated reliable performances. Airbnb's policy mirrored a similar trend with a median above 88.75. ChatGPT-4 predominantly had scores close to 1 in Relevance across different question complexities, with only a slight dip to 0.5 for \(PD_{h}\) (Figure 4f). 
Clarity remained fairly consistent, with many of its scores ranging between 0.8 and 1, but there was a notable drop to 0.1 for \(DM_{h}\). In terms of Accuracy, while GPT-4 generally performed well in simpler and medium questions, there was a clear reduction in its performance in harder questions, dropping as low as -0.6 in the \(PD_{h}\) and \(RC_{h}\). Completeness scores demonstrated a similar trend with higher scores in easier categories and diminishing results in the harder questions, the lowest being -0.3 for \(PD_{h}\). The Reference, however, remained relatively low throughout, with a peak score of 1 for \(PD_{e}\) and a dip to -0.1 in several harder questions. **Bard**: Bard displayed wider variability than ChatGPT-4 (Figure 4a). In the Spotify policy, the \(G_{e}\) category witnessed a spread from 37 to 91, suggesting more variance in its responses. The broader interquartile range (37 to 82) compared to ChatGPT-4 underlined this. Uber's \(G_{h}\) difficulty indicated considerable inconsistency, with the lowest score being 10 and Q1 also at 10, suggesting that 25% of responses were at the floor of the scoring metric. Twitter's \(G_{h}\) category further echoed this inconsistency, with both minimum and Q1 at 10. However, Facebook and Airbnb policies in the \(G_{m}\) difficulty showed tighter interquartile ranges, hinting at better consistency. Bard maintained a high Relevance, predominantly fluctuating between 0.4 and 1 in \(G_{e}\) and \(G_{m}\) questions, but saw a drastic decline for \(SE_{h}\), scoring -0.2 (Figure 4d). Its Clarity mostly mirrored ChatGPT-4's pattern, though it had a steeper drop in \(G_{h}\) questions, reaching as low as -0.2 in the \(SE_{h}\) category. Accuracy exhibited significant variability, with scores ranging from a high of 0.9 in \(G_{e}\) like \(T_{e}\) and \(UC_{e}\), to a troubling -1 in harder ones like \(SE_{h}\), \(PD_{m}\), and \(PD_{e}\). Completeness varied considerably as well, with scores peaking at 1 for \(T_{e}\) and plummeting to -0.6 in \(SE_{e}\) and \(PD_{e}\). Reference scores were particularly notable for Bard due to their consistent negative values, dropping as low as -1 for multiple questions, suggesting possible issues with citation or source integrity.

Figure 3. Score distribution across varying difficulty levels (\(G_{e}\), \(G_{m}\), \(G_{h}\)) for paraphrased privacy policy questions applied to five privacy policies.

**BingAI**: BingAI showcased a peculiar trend (Figure 4b). For Spotify's \(G_{e}\) category, it ranged from 37 to a perfect 100, with a commendable median of 91. Yet, the \(G_{m}\) difficulty revealed stark contrasts, spanning from 1 to 86.5, with a median dropping to 37. This drastic disparity between \(G_{e}\) and \(G_{m}\) was further underscored by the interquartile range shift from 82-100 in \(G_{e}\) to a much broader 30.25-49.375 in \(G_{m}\). Similarly, Uber's \(G_{h}\) difficulty reflected a pronounced inconsistency with both the minimum and 25% of scores (Q1) languishing at 10, while the upper quartile (Q3) stretched to 23.5. Notably, in Airbnb's \(G_{h}\) category, BingAI achieved a 10 median indicating that over half of its responses received the maximum score, though its minimum at 1 demonstrates the presence of some extreme outliers. BingAI's performance in Relevance started strong, reaching 1 in categories like \(T_{e}\), \(UC_{e}\), and \(PD_{e}\), but faltered for \(G_{m}\) questions like \(T_{m}\), which scored -0.3 as shown in Figure 4e. 
Clarity remained relatively stable, with many scores hovering around the 0.6 to 1 range. However, its accuracy was inconsistent, dropping to -1 for \(DM_{e}\) but redeeming itself with scores like 1 in \(PD_{e}\). Completeness scores were highly variable, from a full score of 1 for \(PD_{e}\) to a concerning -1 for \(DM_{e}\). As for the Referencing, it scored negatively for most of the questions. In summary, the evaluation of the three Large Language Models, ChatGPT-4, Bard, and BingAI, revealed intricate patterns of strengths and challenges across the performance criteria. GPT-4 consistently showed high proficiency across policies, with its concentrated scores emphasizing reliability and consistency but also revealing potential weaknesses in referencing. While exhibiting proficiency in specific domains, Bard displayed broader variabilities and pronounced inconsistencies, especially in challenging contexts, with particular challenges in referencing and marked variability in accuracy and completeness for more complex questions. BingAI's performance was a blend of exemplary moments counterbalanced by stark inconsistencies across all criteria. ### Assessing the Quality of Responses to Privacy Regulation Questions This experiment aims to examine the quality of responses generated by the systems for questions concerning the CCPA and GDPR data protection regulations. Figure 5 shows the results obtained after executing the privacy regulation benchmark for both data protection regulations. Both ChatGPT-4 and BingAI excelled in answering privacy regulation queries, with ChatGPT-4 consistently achieving top scores across every metric. While Bard demonstrated good performance, it consistently struggled to provide accurate references, placing it behind the other two models. For all six questions (i.e., \(PR_{1}\) to \(PR_{6}\)), ChatGPT-4 and BingAI responses were accurate, relevant, comprehensive, and included correct references to regulation details. BingAI's scores took a hit due to its tendency to refer to online articles for its information rather than directly citing the articles from the GDPR and CCPA, a practice that ChatGPT-4 consistently followed. On the other hand, Bard's responses for questions \(PR_{1}\), \(PR_{2}\) and \(PR_{3}\) scored 0.5 for completeness as they lacked some details. Also, the responses scored -1 across all questions with respect to the reference metric for both CCPA and GDPR.

Figure 4. Score distribution across varying difficulty levels (\(G_{e}\), \(G_{m}\), \(G_{h}\)) for privacy policy questions applied to the five previously learned privacy policies.

## 7. Discussion While, to the authors' knowledge, no specific Generative AI-based Privacy Assistant (GenAIPA) has been proposed yet, our experiments indicate that current general-purpose genAI models can be a good starting point. Bard, BingAI, and ChatGPT-4 demonstrated commendable capabilities when confronted with GenAIPABench questions. Of course, there are also challenges the systems encountered which require further refinement and exploration. When addressing questions related to an organization's privacy policies, all systems obtained a fairly high score for low to medium-difficulty questions. In particular, BingAI emerged as the most consistent performer for those, demonstrating superior outcomes across most metrics. Interestingly, when paraphrased versions of the questions were used, BingAI performed worse than the other systems. 
This inconsistency highlights that some systems might expect users to express their questions in certain ways, which would be an issue given the difference in perception about privacy among the general public (Sang et al., 2018). The performance of all the systems declined when confronted with more complex questions, revealing limitations in handling situations requiring advanced reasoning or specialized knowledge. This is especially concerning given that these are the questions for which the general public might need more help. Of particular concern was a disconnect between the relevance and clarity of generated responses and their factual accuracy and completeness. Responses that were substantially incorrect were often presented coherently and relevantly, posing the risk of misleading users. Furthermore, we observed frequent issues with references, which often point to outdated or incorrect data from the model's training set, rather than the most recent privacy policy information (which was provided to the systems). The three systems showed a strong understanding of the two privacy regulations evaluated. This might be due to the fact that there has been more discussion about data privacy regulations online than about specific privacy policies. This means that the underlying models of the three systems have potentially been trained on more information relevant to the privacy regulations. However, while the performance on the privacy regulations benchmark was excellent, we observe again the challenge of proper reference (especially in the case of Bard). While this might not be concerning for the average user, it could be a problem for, for instance, small businesses using these systems to understand how to adapt their operations to the regulations in the countries where they operate. In summary, we identified several critical challenges that GenAIPAs must overcome to become reliable at assisting users with their privacy questions. First, current genAI systems often fail to recognize and correct errors in their responses, posing a risk of disseminating inaccurate information on vital privacy matters. Second, they lack transparency in their reasoning, leaving users uncertain about the reliability of the provided answers, particularly when questions involve nuanced or ambiguous scenarios. Lastly, they demonstrated inconsistencies when faced with repeated queries and showed issues in recalling accurate source material, undermining their overall reliability and stability. Addressing these challenges is essential for future GenAIPAs to improve not only their accuracy but also to gain users' trust through transparent and consistent performance. This highlights the need for models specialized in, and fine-tuned for, the privacy domain, particularly to handle complex questions and to maintain response consistency and accuracy. Such models should also pay special attention to keeping their knowledge up to date, which is essential given the continuously changing landscape of privacy policies and regulations. These findings align with previous analysis and existing literature in other domains, confirming that while large language models like ChatGPT, Bard, and BingAI excel in general language tasks, their performance can vary significantly when applied to specialized domains. Hence, our analysis is, to the authors' knowledge, the first to assess that this is the case in the domain of data privacy.
## 8. Conclusion and Future Work The emergence of generative AI systems and their ability to summarize text and answer questions with human-like text presents an opportunity to develop more sophisticated privacy assistants (GenAIPAs). Due to the implications for individuals of receiving wrong information that might impact their privacy, such systems must be evaluated properly. In this paper we have presented a benchmark, GenAIPABench, to evaluate future GenAIPAs, which includes questions about privacy policies and data privacy regulations, evaluation metrics, and annotated privacy documents. Our evaluation of popular genAI technology, including ChatGPT, Bard, and BingAI, shows promise for the technology but highlights that significant work remains to enhance their capabilities in handling complex queries, ensuring accuracy, maintaining response consistency, and citing proper sources. We plan to continue expanding GenAIPABench with more annotated answers for a larger number of privacy documents. We also aim to develop the infrastructure to perform a periodic evaluation of current and future versions of genAI and GenAIPA systems.
2309.13982
Using a probabilistic approach to derive a two-phase model of flow-induced cell migration
Interstitial fluid flow is a feature of many solid tumours. In vitro experiments have shown that such fluid flow can direct tumour cell movement upstream or downstream depending on the balance between the competing mechanisms of tensotaxis and autologous chemotaxis. In this work we develop a probabilistic-continuum, two-phase model for cell migration in response to interstitial flow. We use a kinetic description for the cell-velocity probability density function, and model the flow-dependent stimuli as forcing terms which bias cell migration upstream and downstream. Using velocity-space averaging, we reformulate the model as a system of continuum equations for the spatio-temporal evolution of the cell volume fraction and flux, in response to forcing terms which depend on the local direction and magnitude of the mechanochemical cues. We specialise our model to describe a one-dimensional cell layer subject to fluid flow. Using a combination of numerical simulations and asymptotic analysis, we delineate the parameter regime where transitions from downstream to upstream cell migration occur. As has been observed experimentally, the model predicts downstream-oriented, chemotactic migration at low cell volume fractions, and upstream-oriented, tensotactic migration at larger volume fractions. We show that the locus of the critical volume fraction, at which the system transitions from downstream to upstream migration, is dominated by the ratio of the rate of chemokine secretion and advection. Our model also predicts that, because the tensotactic stimulus depends strongly on the cell volume fraction, upstream, tensotaxis-dominated migration occurs only transiently when the cells are initially seeded, and transitions to downstream, chemotaxis-dominated migration occur at later times due to the dispersive effect of cell diffusion.
Yaron Ben-Ami, Joe M. Pitt-Francis, Philip K. Maini, Helen M. Byrne
2023-09-25T09:35:22Z
http://arxiv.org/abs/2309.13982v2
# Using a probabilistic approach to derive a two-phase model of flow-induced cell migration ###### Abstract Interstitial fluid flow is a feature of many solid tumours. _In vitro_ experiments have shown that such fluid flow can direct tumour cell movement upstream or downstream depending on the balance between the competing mechanisms of tensotaxis (cell migration up stress gradients) and autologous chemotaxis (downstream cell movement in response to flow-induced gradients of self-secreted chemoattractants). In this work we develop a probabilistic-continuum, two-phase model for cell migration in response to interstitial flow. We use a Fokker-Planck type equation for the cell-velocity probability density function, and model the flow-dependent mechanical and chemical stimuli as forcing terms which bias cell migration upstream and downstream. Using velocity-space averaging, we reformulate the model as a system of continuum equations for the spatio-temporal evolution of the cell volume fraction and flux, in response to forcing terms which depend on the local direction and magnitude of the mechanochemical cues. We specialise our model to describe a one-dimensional cell layer subject to fluid flow. Using a combination of numerical simulations and asymptotic analysis, we delineate the parameter regime where transitions from downstream to upstream cell migration occur. As has been observed experimentally, the model predicts downstream-oriented, chemotactic migration at low cell volume fractions, and upstream-oriented, tensotactic migration at larger volume fractions. We show that the locus of the critical volume fraction, at which the system transitions from downstream to upstream migration, is dominated by the ratio of the rate of chemokine secretion and advection. Our model also predicts that, because the tensotactic stimulus depends strongly on the cell volume fraction, upstream, tensotaxis-dominated migration occurs only transiently when the cells are initially seeded, and transitions to downstream, chemotaxis-dominated migration occur at later times due to the dispersive effect of cell diffusion. ## 1 Introduction Cells can sense a variety of chemical and mechanical cues which may bias their movement. In healthy tissues, cells migrate in response to multiple environmental cues; examples include morphogenesis, wound healing and the stimulation of an immune response to infection (SenGupta et al., 2021). At the same time, many diseases are characterised by excessive (or insufficient) directed cell migration; examples include tumour invasion and metastasis to adjacent tissues (Roussos et al., 2011; Shields et al., 2007) and impaired wound healing caused by diabetes (Falanga, 2005). Fluid flow has been found to promote tumour cell migration in several different ways (Shields et al., 2007; Polacheck et al., 2011, 2014). Interstitial fluid flow in solid tumours is known to be higher than in healthy tissues due to growth-induced increases in interstitial pressure and leaky blood vessels. Consequently, interstitial flow has been suggested as a contributor to cell migration and metastasis (Heldin et al., 2004; Follain et al., 2020). _In vitro_ experiments by Polacheck et al. (2011) have shown that fluid flow may impact the directed movement of cells in several different ways. On the one hand, extracellular fluid flow increases the pressure on the upstream part of the cell and, consequently, the cell increases the adhesion forces it exerts on the extracellular matrix (ECM) in this region. 
In turn, the localized tension at the front of the cell leads to actin localization and protrusion in this region, contributing to migration against the direction of flow (Polacheck et al., 2014). This mechanism, which is dominant in 3D cell cultures, is similar to the mechanism underlying _durotaxis_, where cells on a 2D substrate migrate in response to gradients in the mechanical stiffness of the substrate (Lo et al., 2000; DuChez et al., 2019). Cell movement in response to gradients in cell-ECM adhesion forces has been termed _rheotaxis_ by Polacheck et al. (2014), but here we refer to it as _tensotaxis_ (Rosalem et al., 2020) in order to emphasize the role of fluid-induced stress (rather than velocity gradients) on this type of movement. In addition to upstream directed movement induced by tensotaxis, autologous chemotaxis drives cell movement downstream. Here, the flow advects cell-secreted ligands, creating transcellular gradients of chemokines. The ligands bind to specific receptors on the cell surface, inducing cell polarization in the direction of higher chemokine concentrations and driving downstream, chemotactic migration. This autologous signaling mechanism has been observed by Shields et al. (2007), where tumour cells have been shown to migrate downstream by binding self-secreted CCL21 ligands to the CCR7 receptors. In experiments by Polacheck et al. (2011), cancer cells were seeded in a microfluidic channel and subject to fluid flow. The distribution of cell velocities was measured and the average migration direction (with respect to the flow direction) was evaluated. The results, reproduced in Fig. 1, show that the dominant mode of migration changed between downstream and upstream as the cell density increased. However, when the CCR7 receptor signaling pathway was blocked, upstream migration was found to prevail regardless of the cell density, supporting the observations by Shields et al. (2007) regarding CCR7-dependent, downstream-oriented, autologous chemotaxis. Additionally, for all of the experimental curves shown in Fig. 1, an increase in the interstitial flow led to a higher tendency of the cells to migrate upstream. These results motivate the question of how different properties of cells, and the mechanochemical landscape they sense, affect their migration directions. In this paper, we show how mathematical modelling can shed light on the mechanisms regulating the direction of collective cell migration in a flow as system parameters vary. Models of chemotactic migration go back to the highly influential work of Keller and Segel (1971). More recently, chemotaxis has been considered in the context of two-phase cellular tissue models (Byrne and Owen, 2004; Green et al., 2018), by formulating mass and momentum balance of the cell and fluid phases, coupled to the transport equation for chemoattractant propagation in the fluid phase. The ability of multiphase models to incorporate coupled interactions between cells, fluid, and chemoattractant makes them a natural framework for describing the mechanisms involved in mechanochemical transduction of cells subject to interstitial fluid flow.

Figure 1: Experimental results, reproduced with permission from Polacheck et al. (2011), showing the directional migration score [positive (negative) – most cells travel downstream (upstream), see Polacheck et al. (2011) for details] as a function of the strength of interstitial flow and for different cell seeding densities ("high" and "low" refer to seeding densities of \(25\times 10^{4}\) and \(5\times 10^{4}\) cells/mL, respectively). The dashed lines show that upstream migration prevails when the CCR7 receptor signaling pathway is blocked, interrupting the downstream-oriented autologous chemotaxis.
mechanochemical transduction of cells subject to interstitial fluid flow. While models for chemotaxis are prevalent [see the extensive review by Arumugam and Tyagi (2021)], models for tensotaxis are less common. In a recent related work, Painter (2021) has applied a generalised Keller-Segel model to study the combined effect of rheotaxis (directed movement in response to the flow velocity field) and chemotaxis on the aggregation of swimming organisms. However, he considered a single-phase model, and thus did not solve the coupled interactions of the organisms and the fluid. Evje and coworkers (Waldeland and Evje, 2018; Evje and Winkler, 2020) have formulated a multiphase model which combined the competing mechanisms of downstream-oriented autologous chemotaxis and an upstream force, introduced _ad hoc_ by inverting the direction of the fluid drag force acting on the cells. Their models have been successful in reproducing the transition between downstream and upstream migration as the cell volume fraction increases, as was observed by Polacheck et al. (2011). However, the heuristic assumption of inverting the direction of the drag force does not explain the mechanisms underlying this type of migration and does not allow any generalisation of the model to more complicated scenarios where different sources of mechanical stimulus exist. More recently, Rosalem et al. (2020) derived a single-phase model for the tensotactic migration of cells, where the cell flux was assumed to be proportional to the transcellular pressure gradient. They verified that, in the presence of flow, this mechanism leads to upstream migration of cells; however, they did not consider the opposing effect of chemotactic migration. Consequently, it remains to be established what parameters (other than cell volume fraction) affect the direction of cell migration and what parameter regimes support downstream, rather than upstream, cell migration. An additional drawback of existing multiphase models for cell migration (e.g., Byrne and Owen (2004); Waldeland and Evje (2018)) is that directed migration is modelled as an internal force exerted by the cells on themselves (source terms in the cell momentum balance). Therefore, the cell speed increases with the strength of the stimulus. This contradicts some experimental findings, which show that individual cells bias their directionality in response to the external cues, but their speed of migration is not correlated with their directionality (Polacheck et al., 2011; Nam et al., 2020; DuChez et al., 2019). The goal of the present work is to derive a two-phase model for cell migration subject to flow-induced mechanochemical stimuli. The chemotactic and tensotactic cues are viewed as external signals that bias the probability that a cell moves in a certain direction, while the magnitude of its speed remains constant. We propose a Fokker-Planck-type equation to describe the probability density function of the cell-velocity orientation. We then apply velocity-space averaging to derive continuum equations for the spatio-temporal evolution of the cell volume fraction and flux, in response to forcing terms depending on the local direction and magnitude of the mechanochemical stimulus.
Using a combination of numerical simulations and asymptotic analysis, we delineate the parameter regimes for which cell migration transitions from downstream to upstream. The remainder of the manuscript is structured as follows. In Sec. 2 we introduce our two-phase model for cell migration in response to flow-induced mechanochemical stimuli; we then introduce the one-dimensional model problem that is the focus of this paper, and use asymptotic methods to derive the critical conditions for transition between downstream and upstream migration in the limit of small stimulus. In Sec. 3 we present numerical results describing the spatio-temporal dynamics of the cell layer for different parameter regimes supporting different modes of migration; then, we compare predictions from the asymptotic analysis with numerical results derived from the full model regarding the conditions under which migration switches between downstream and upstream regimes. The paper concludes in Sec. 4 where we summarise our findings and outline possible directions for future research. ## 2 Analysis ### Model formulation In this section we introduce a model for cell migration in the presence of interstitial fluid flow, motivated by _in vitro_ experiments which show that interstitial flow can induce mechanochemical stimuli which bias the direction of cell migration (Polacheck et al., 2011; Shields et al., 2007). In the present model we view the mechanochemical cues as external signals that regulate the probability that the cells move in a certain direction, and assume that the magnitude of the cell speed remains constant. In more detail, we seek to formulate a model for the probability density function, \(f(\mathbf{x},t,\mathbf{\xi})\), that the velocity of a cell in a neighbourhood of spatial position \(\mathbf{x}\), at time \(t\), has orientation vector \(\mathbf{\xi}\). In Sec. 2.1.1 we formulate a dimensional model in an arbitrary number of dimensions; in Sec. 2.2 we consider a simplified one-dimensional version of the model and then non-dimensionalise the governing equations using the characteristic scales of the system which we introduce therein. #### 2.1.1 Probabilistic model for cell migration We assume that the cells travel at a velocity \(U_{c}\mathbf{\xi}\), where the cell speed, \(U_{c}\), is constant and \(\mathbf{\xi}\) is a unit vector representing the cell-velocity orientation, which evolves in response to the external stimuli sensed by the cells. We assume further that the cells perform an unbiased random motion superimposed on their directed movement. Accordingly, we model the change in the cell's probability density function, \(f(\mathbf{x},t,\mathbf{\xi})\), using a Fokker-Planck-type model, \[\frac{\partial f}{\partial t}+U_{c}\mathbf{\xi}\cdot\mathbf{\nabla}f=\frac{F-f}{\tau} +D_{c}\nabla^{2}f, \tag{1}\] where \(D_{c}\) represents diffusivity due to the unbiased random motion; the forcing term, \(F=F(\mathbf{x},t,\mathbf{\xi})\), represents the rate at which the orientation vector changes, which biases the probability density function in the direction of the stimulus; the constant, \(\tau\), represents relaxation time over which the cell responds to the external signal. We note that in Eq. (1) cell-cell interactions and cell-volume exclusion are neglected. As such, the model is suitable to describe situations in which the cell volume fraction takes low to moderate values. While the diffusion term in Eq. 
(1) may mimic the effect of intercellular repulsion, other phenomena related to collective cell migration (Tambe et al., 2011; Angelini et al., 2011) are neglected in our model formulation, in order to focus attention on the way in which flow-induced stimuli direct cell migration. In Eq. (1) we also neglect cell proliferation and death since we aim to model migration dynamics at much shorter time scales. We define a stimulus vector, \(\mathbf{s}=s\mathbf{\eta}\), where \(\mathbf{\eta}\) is a unit vector and \(s\) represents its magnitude, such that \(F\) is maximized when the cell velocity and stimulus vector are aligned, and \(F\) decreases monotonically as the angle between \(\mathbf{\xi}\) and \(\mathbf{\eta}\) increases. A simple functional form that captures this behaviour is given by \[F\left(\mathbf{s}(\mathbf{x},t),\mathbf{\xi}\right)=A\exp\left(s\mathbf{\eta}\cdot(\mathbf{\xi}-\mathbf{\eta})\right), \tag{2}\] where \(A\) is a function of the macroscopic variables (i.e., not a function of \(\mathbf{\xi}\)) which is defined below. Equation (2) states that the rate at which the cell orientation vector changes depends solely on the local stimulus vector. We assume that the volume of a single cell, \(V_{c}\), is identical for all cells. We multiply Eq. (1) by \(V_{c}\) and integrate over all possible cell-velocity orientations, \(\mathbf{\xi}\in\mathcal{V}\), where \(\mathcal{V}\) is the set of vectors pointing from the origin to the surface of a unit sphere. Then we have \[\frac{\partial\phi}{\partial t}+\mathbf{\nabla}\cdot\mathbf{\psi}=\frac{1}{\tau}\left(V_{c}\int_{\mathbf{\xi}\in\mathcal{V}}Fd\mathbf{\xi}-\phi\right)+D_{c}\nabla^{2}\phi, \tag{3}\] where the cell volume fraction and flux, \(\phi\) and \(\mathbf{\psi}\), respectively, are given by \[\phi=V_{c}\int_{\mathbf{\xi}\in\mathcal{V}}fd\mathbf{\xi}\ \ \text{and}\ \ \ \mathbf{\psi}=V_{c}U_{c}\int_{\mathbf{\xi}\in\mathcal{V}}\mathbf{\xi}fd\mathbf{\xi}. \tag{4}\] In order to ensure that the velocity biasing term, \(F\), does not generate or eliminate cells, we require that \[V_{c}\int_{\mathbf{\xi}\in\mathcal{V}}Fd\mathbf{\xi}=\phi. \tag{5}\] We define \(\theta\in[0,\pi]\) as the angle between \(\mathbf{\xi}\) and \(\mathbf{\eta}\), such that \[V_{c}\int_{\mathbf{\xi}\in\mathcal{V}}Fd\mathbf{\xi}=2\pi AV_{c}\int_{0}^{\pi}\exp\left(s(\cos\theta-1)\right)\sin\theta d\theta=2\pi AV_{c}\frac{(1-e^{-2s})}{s}. \tag{6}\] Combining Eqs. (2), (5), and (6) we have \[A=\frac{\phi s}{2\pi V_{c}(1-e^{-2s})}. \tag{7}\] Accordingly, \(F\) is given by \[F=\frac{\phi s}{2\pi V_{c}(1-e^{-2s})}\exp\left(s\mathbf{\eta}\cdot(\mathbf{\xi}-\mathbf{\eta})\right). \tag{8}\] Equation (3), together with Eq. (5), yields the cell conservation equation \[\frac{\partial\phi}{\partial t}+\mathbf{\nabla}\cdot\mathbf{\psi}=D_{c}\nabla^{2}\phi. \tag{9}\] In order to close the model we require an additional equation for \(\mathbf{\psi}\), where a natural choice would be a momentum balance equation. In order to derive the momentum balance, we multiply Eq.
(1) by \(V_{c}U_{c}\mathbf{\xi}\) and integrate over all possible velocity orientations to obtain \[\frac{\partial\mathbf{\psi}}{\partial t}+\frac{1}{V_{c}}\mathbf{\nabla}\cdot(\mathbf{\psi}\mathbf{\psi}+\mathbf{\sigma})=\frac{1}{\tau}\left(V_{c}U_{c}\int_{\mathbf{\xi}\in\mathcal{V}}F\mathbf{\xi}d\mathbf{\xi}-\mathbf{\psi}\right)+D_{c}\nabla^{2}\mathbf{\psi}, \tag{10}\] where the tensor \(\mathbf{\sigma}\) satisfies \[\mathbf{\sigma}=\int_{\mathbf{\xi}\in\mathcal{V}}(V_{c}U_{c}\mathbf{\xi}-\mathbf{\psi})(V_{c}U_{c}\mathbf{\xi}-\mathbf{\psi})fd\mathbf{\xi}, \tag{11}\] and describes cell stress due to microscopic variation in velocity, while the term proportional to \(\mathbf{\nabla}\cdot(\mathbf{\psi}\mathbf{\psi})\) represents inertial effects at the macroscale. If the characteristic length scale of the system is \(L\), then we can use dimensional analysis to estimate that cell inertia and stresses are only important at length scales \[L\lesssim U_{c}\tau\lesssim 10\,\mu\text{m},\] where we have estimated the cell speed and relaxation time as \(U_{c}\sim 10\,\mu\)m/h (Polacheck et al., 2011; Nam et al., 2020) and \(\tau\lesssim 1\,\)h, respectively (Wyatt et al., 2016). We conclude that, within the scope of our continuum model, the cell stress and inertia terms can generally be neglected because they are only relevant at spatial scales smaller than the characteristic scale of a single cell. Neglecting the cell stress and inertia terms in Eq. (10) and substituting from Eq. (8) we have \[\frac{\partial\mathbf{\psi}}{\partial t}=\frac{1}{\tau}\left(\phi U_{c}\frac{s}{2\pi(1-e^{-2s})}\int_{\mathbf{\xi}\in\mathcal{V}}\exp\left(s\mathbf{\eta}\cdot(\mathbf{\xi}-\mathbf{\eta})\right)\mathbf{\xi}d\mathbf{\xi}-\mathbf{\psi}\right)+D_{c}\nabla^{2}\mathbf{\psi}. \tag{12}\] In Eq. (12) the cell flux source term is proportional to the local value of the average momentum and is a function that depends nonlinearly on the direction and magnitude of the stimulus. In what follows, we propose a constitutive model for the stimulus, \(\mathbf{s}\), which depends on the local mechanochemical cues sensed by the cells. #### 2.1.2 Constitutive model for the mechanochemical stimulus Cancer cells react to a variety of chemical and mechanical stimuli. We introduce a stimulus potential, \(\Phi\), such that \[\mathbf{s}=-l_{c}\mathbf{\nabla}\Phi, \tag{13}\] where \(l_{c}\) is a constant length which should be at the scale of the cell length. We consider two stimulus potentials: (I) Chemotaxis: Binding of ligands to specific receptors on the membrane of cancer cells can polarise their movement in the direction of larger concentrations of these ligands, leading to effective chemotactic migration (Roussos et al., 2011). We model this process by assuming that the potential of the chemotactic stimulus is proportional to a chemokine concentration, \(c\), \[\Phi_{C}=-\chi c, \tag{14}\] where the constant \(\chi\) represents the chemotactic potential per unit concentration. (II) Tensotaxis: Cells respond to local stress by biasing their movement in the direction of larger tension in their cell-ECM connections (Polacheck et al., 2011, 2014). When cells embedded in a three-dimensional matrix are subject to interstitial flow, the cell response is usually stimulated by increased fluid pressure at the upstream part (the part facing the flow) of the cell, which causes the cell to generate tensile ECM-adhesion forces in this region (and compressive forces in the downstream region) resisting the flow-induced drag force.
In turn, the localized tension at the upstream part of the cell polarises its movement in the direction of larger pressure (Polacheck et al., 2014). For simplicity, we model this process by viewing the tensotactic stimulus experienced by the cells as a potential which is proportional to the stresses acting on the cell in the direction normal to the cell's outer surface: \[\Phi_{T}=-\varpi\sigma_{c}^{n}. \tag{15}\] In Eq. (15), \(\varpi\) represents the strength of the tensotactic potential per unit stress, and \(\sigma_{c}^{n}\) is the extracellular stress acting on the cell in the direction normal to the cell's outer surface. We note here that, in more general cases, the cells may be subject to other external stresses, such as shear stresses (Ostrowski et al., 2014), that stimulate tensotaxis. Typically, when cells are embedded in a 3D matrix and subject to fluid flow, the dominant stress they experience is due to fluid pressure (Polacheck et al., 2011). Therefore, in this work we assume that Eq. (15) can be simplified to read \[\Phi_{T}=-\varpi p, \tag{16}\] where \(p\) is the interstitial fluid pressure. Finally, we write the total potential, \(\Phi\), as the sum of the chemotactic and tensotactic potentials, so that \(\Phi=\Phi_{C}+\Phi_{T}\). Then, Eq. (13) becomes \[\mathbf{s}=l_{c}\varpi\boldsymbol{\nabla}p+l_{c}\chi\boldsymbol{\nabla}c. \tag{17}\] In order to close the model we introduce equations for \(p\) and \(c\). In what follows we formulate the governing equations of the flow dynamics in the two-phase cell-fluid mixture, such that the fluid pressure and the concentration of the flow-advected chemokine can be evaluated. #### 2.1.3 Interstitial flow dynamics We make a no-voids assumption for the cell-fluid mixture such that the volume fraction of the fluid phase is given by \(1-\phi\). Then, the mass conservation equation of the fluid phase can be written as \[\frac{\partial(1-\phi)}{\partial t}+\boldsymbol{\nabla}\cdot[(1-\phi)\mathbf{u}_{\mathbf{f}}]=0, \tag{18}\] where \(\mathbf{u}_{\mathbf{f}}\) is the fluid velocity. Combining Eq. (18) with Eq. (9) we have \[\boldsymbol{\nabla}\cdot[\boldsymbol{\psi}+(1-\phi)\mathbf{u}_{\mathbf{f}}]=D_{c}\nabla^{2}\phi. \tag{19}\] Assuming the fluid flux is much larger than the cell flux, as is usual for biological tissues, \[\boldsymbol{\psi}-D_{c}\boldsymbol{\nabla}\phi\ll(1-\phi)\mathbf{u}_{\mathbf{f}}, \tag{20}\] we can simplify Eq. (19) to \[\boldsymbol{\nabla}\cdot[(1-\phi)\mathbf{u}_{\mathbf{f}}]=0. \tag{21}\] We proceed by assuming that the momentum equation of the fluid phase involves a balance between the drag force exerted by the cells and the pressure gradient (for simplicity, we neglect intra-phase viscous stresses). This balance can be written as a Darcy-type equation \[\mathbf{u_{f}}-\mathbf{u}_{\mathbf{c}}^{\mathrm{avg}}=-k_{H}g(\phi)\mathbf{\nabla}p, \tag{22}\] where \(\mathbf{u}_{\mathbf{c}}^{\mathrm{avg}}=\mathbf{\psi}/\phi\) is the average cell velocity, and \(k_{H}\) represents the hydrodynamic conductivity (permeability divided by viscosity). In Eq. (22), \(g(\phi)\) describes how the drag depends on the cell volume fraction, \(\phi\). For simplicity, we use the popular Carman-Kozeny relation (Swartz and Fleury, 2007) so that \[g(\phi)=\frac{(1-\phi)^{3}}{\phi^{2}}. \tag{23}\] Again, we can neglect the cell velocity with respect to the fluid velocity in Eq. (22) to obtain \[\mathbf{u_{f}}=-k_{H}g(\phi)\mathbf{\nabla}p. \tag{24}\] Substituting from Eqs. (23) and (24) into Eq.
(17), we can write the equation for the stimulus vector as \[\mathbf{s}=-\frac{l_{c}\varpi}{k_{H}}\frac{\phi^{2}}{(1-\phi)^{3}}\mathbf{u_{f}}+l_{c}\chi\mathbf{\nabla}c. \tag{25}\] Finally, we model the evolution of the chemokine concentration, \(c\), using a reaction-advection-diffusion equation. We assume that the chemokine is secreted by the cells at a constant rate, \(\beta_{p}\), and that \(\beta_{d}\) is the rate (per unit concentration) at which it binds to receptors on the surface of the cells. Under these assumptions we obtain the following equation for the chemokine concentration \[\frac{\partial c}{\partial t}+\mathbf{\nabla}\cdot[c(1-\phi)\mathbf{u_{f}}]=\frac{\beta_{p}}{V_{c}}\phi-\frac{\beta_{d}}{V_{c}}\phi c+D\nabla^{2}c, \tag{26}\] where \(D\) is the diffusion coefficient of the chemokine in the interstitial fluid. Taken together, equations (9), (12), (21), (25), and (26) form a closed system for the cell volume fraction, \(\phi\), and flux, \(\mathbf{\psi}\), the cell stimulus, \(\mathbf{s}\), fluid velocity, \(\mathbf{u_{f}}\), and chemokine concentration, \(c\). ### Model problem: cell layer subject to one-dimensional flow #### 2.2.1 Formulation of a nondimensional 1D model In this section we reduce the model developed in Sec. 2.1 to a one-dimensional model that describes the migration of a population of cells (initially localized around an initial position) in a long microfluidic channel (i.e., we neglect fluxes of cells into, or out of, the channel edges). This model will be used to offer a simple explanation of the changes in the migration patterns of tumour cells in response to changes in flow velocity and cell volume fraction observed by Polacheck et al. (2011). We consider a long channel which is aligned with the \(x\)-axis, in which an initial cell layer is distributed normally around \(x=0\) such that \[\phi(x^{*},t^{*}=0)=\overline{\phi}\exp\left[-\left(\frac{x^{*}}{L^{*}}\right)^{2}\right]. \tag{27}\] In Eq. (27) and henceforth, we use asterisks to denote dimensional parameters, and the constants \(\overline{\phi}\) and \(L^{*}\) represent typical values of the cells' initial volume fraction and layer size, respectively. The cells are subject to fluid flow, where in the far-field as \(|x^{*}|\gg L^{*}\) (i.e., in regions sufficiently far from the cell layer), the fluid velocity magnitude is \(U_{f}^{*}\). A schematic of the one-dimensional model problem is illustrated in Fig. 2.

Figure 2: Schematic illustration of the one-dimensional model problem.

We now simplify the equations derived in Sec. 2.1 to 1D Cartesian geometry form and non-dimensionalise them using the following scaling: we normalize length by the characteristic length of the initial distribution of cells, \(L^{*}\); we scale the velocity by the far-field fluid velocity, \(U_{f}^{*}\); accordingly, time is normalized by \(L^{*}/U_{f}^{*}\). The chemokine concentration is scaled by its maximal equilibrium concentration, \(c_{\rm eq}^{*}=\beta_{p}^{*}/\beta_{d}^{*}\). The full set of independent and dependent nondimensional variables are given by \[x=\frac{x^{*}}{L^{*}},\ t=\frac{t^{*}U_{f}^{*}}{L^{*}},\ \phi,\ \psi=\frac{\psi^{*}}{U_{f}^{*}},\ u_{f}=\frac{u_{f}^{*}}{U_{f}^{*}},\ \mbox{and}\ c=\frac{c^{*}}{c_{\rm eq}^{*}}. \tag{28}\]
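For concreteness, the nondimensional initial state in Eq. (27) and the uniform grid on which the governing equations are later discretised (Sec. 2.2.1) can be set up as in the following MATLAB sketch. The domain half-width `X`, the grid resolution, the value of \(\overline{\phi}\), and the zero initial flux are illustrative assumptions, not values prescribed in the text.

```matlab
% Minimal sketch (illustrative values): uniform grid and Gaussian initial cell layer, Eq. (27)
X    = 50;                        % half-width of the truncated domain [-X, X], with X >> 1 (assumed)
Nx   = 2001;                      % number of grid points (assumed)
x    = linspace(-X, X, Nx);       % uniform grid
dx   = x(2) - x(1);

phi_bar = 0.2;                    % example initial volume fraction at x = 0
phi0    = phi_bar * exp(-x.^2);   % nondimensional initial condition, phi(x,0) = phi_bar*exp(-x^2)
psi0    = zeros(size(x));         % assumed: cells start unpolarised, so the initial flux is zero
```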
With this scaling, Eq. (9) in a 1D nondimensional form reads \[\frac{\partial\phi}{\partial t}+\frac{\partial\psi}{\partial x}=\frac{1}{{\rm Pe}_{c}}\frac{\partial^{2}\phi}{\partial x^{2}}, \tag{29}\] where \[{\rm Pe}_{c}=\frac{U_{f}^{*}L^{*}}{D_{c}^{*}}.\] Using physiologically relevant parameters we have \(U_{f}^{*}\sim 1\,\mu\)m/s (Polacheck et al., 2011), \(L^{*}\sim 100\,\mu\)m, and \(D_{c}^{*}\sim 1000\,\mu\)m\({}^{2}\)/h (Marel et al., 2014), such that the interstitial fluid velocity is much larger than the diffusive velocity of cells, i.e., \({\rm Pe}_{c}\gg 1\). We expect, however, that the cell flux will also be small, \(\psi\ll 1\), such that we cannot neglect diffusive effects. In one dimension it is straightforward to assume that the direction of the stimulus is constant, \(\boldsymbol{\eta}=\mathbf{\hat{x}}\), while \(s\) can change its sign such that \(s\in(-\infty,\infty)\). Then, the integral in Eq. (12) reads \[\begin{split}\frac{s}{2\pi(1-e^{-2s})}\int_{\boldsymbol{\xi}\in\mathcal{V}}\xi_{x}\exp\left(s\mathbf{\hat{x}}\cdot\left(\boldsymbol{\xi}-\mathbf{\hat{x}}\right)\right)d\boldsymbol{\xi}&=\frac{s}{(1-e^{-2s})}\int_{0}^{\pi}\exp\left(s(\cos\theta-1)\right)\cos\theta\sin\theta d\theta\\ &=\coth(s)-s^{-1}.\end{split} \tag{30}\] We note that the resulting expression is antisymmetric with respect to \(s\), so that the momentum source term acts in the same direction as the stimulus. Substituting Eq. (30) into Eq. (12), reducing to 1D, and non-dimensionalising, we have \[\frac{\partial\psi}{\partial t}=\frac{\mathcal{U}}{\mathcal{T}}\left[\coth(s)-s^{-1}\right]\phi-\frac{\psi}{\mathcal{T}}+\frac{1}{{\rm Pe}_{c}}\frac{\partial^{2}\psi}{\partial x^{2}}, \tag{31}\] where \[\mathcal{T}=\frac{\tau^{*}U_{f}^{*}}{L^{*}}\ \ \mbox{and}\ \ \mathcal{U}=\frac{U_{c}^{*}}{U_{f}^{*}}.\] Since the cell velocity is much smaller than the fluid velocity we will assume \(\mathcal{U}\ll 1\). The characteristic response time of cells, \(\tau^{*}\), is in the range of minutes to hours (Wyatt et al., 2016) such that, based on the characteristic scales of length and fluid velocity introduced above, we can estimate that \(\mathcal{T}\sim 10-100\). In order to evaluate the stimulus, \(s\), we must solve for the fluid velocity and chemokine gradient. Starting from the fluid velocity, we consider the nondimensional 1D form of the cell-fluid mixture mass conservation equation (21) \[\frac{\partial}{\partial x}\left[(1-\phi)u_{f}\right]=0. \tag{32}\] Integrating Eq. (32) with respect to \(x\) we have \[u_{f}=\frac{1}{1-\phi}, \tag{33}\] where we assumed that \(u_{f}|_{x\rightarrow-\infty}=1\). The chemokine transport equation in a 1D nondimensional form then reads \[\frac{\partial c}{\partial t}+\frac{\partial c}{\partial x}=\mathrm{Da}\,\phi\left(1-c\right)+\frac{1}{\mathrm{Pe}}\frac{\partial^{2}c}{\partial x^{2}}, \tag{34}\] where \[\mathrm{Da}=\frac{L^{*}\beta_{d}^{*}}{U_{f}^{*}V_{c}^{*}}\ \ \mathrm{and}\ \ \mathrm{Pe}=\frac{U_{f}^{*}L^{*}}{D^{*}}. \tag{35}\] Using the physiologically relevant parameters of \(U_{f}^{*}\) and \(L^{*}\) introduced above, together with the characteristic diffusivity of chemokines, \(D^{*}\sim 100\,\mu\mathrm{m}^{2}/\mathrm{s}\) (Fleury et al., 2006; Bonneuil et al., 2022), we can estimate that \(\mathrm{Pe}\sim 1\), meaning that diffusive effects are likely to be important.
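The order-of-magnitude estimates quoted in this section can be reproduced directly from the dimensional scales; the short MATLAB sketch below does this, with the numerical values being the representative ones quoted above converted to consistent units by hand.

```matlab
% Order-of-magnitude estimates of the nondimensional groups from the quoted dimensional scales
Uf_star = 1;          % far-field fluid speed, um/s (Polacheck et al., 2011)
L_star  = 100;        % initial layer width, um
Dc_star = 1000/3600;  % cell diffusivity, um^2/s (~1000 um^2/h, Marel et al., 2014)
D_star  = 100;        % chemokine diffusivity, um^2/s (Fleury et al., 2006)
Uc_star = 10/3600;    % cell speed, um/s (~10 um/h)

Pe_c = Uf_star*L_star/Dc_star           % ~360, consistent with Pe_c >> 1
Pe   = Uf_star*L_star/D_star            % ~1, so chemokine diffusion is not negligible a priori
U    = Uc_star/Uf_star                  % ~0.003, the value used in the simulations below
T    = [15*60, 3*3600]*Uf_star/L_star   % ~9 and ~108 for tau* ~ 15 min and ~3 h, i.e. T ~ 10-100
```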
In the context of the present one-dimensional model, the diffusive terms act to smooth the chemokine gradient and, thereby, to reduce the magnitude of the chemotactic cue in the centre of the cell layer (with subdominant contributions at the edges of the domain). However, since \(\mathrm{Pe}\ll\mathrm{Pe}_{c}\), we would need a very large domain in order to simulate, on the one hand, sufficiently large times to allow for cell migration while, on the other hand, avoiding boundary interactions of the chemokine at the channel edges. Therefore, to simplify the numerical calculations, we choose to neglect the diffusive term in Eq. (34) and view the results as the purely-advective limit, bearing in mind that including diffusion would result in somewhat weaker chemotactic migration. Additionally, the time-derivative of the chemokine is associated with changes in the cell volume fraction such that \[\frac{\partial c}{\partial t}\sim\mathrm{Da}\frac{\partial\phi}{\partial t} \sim O\left(\mathrm{Da}\,\mathcal{U},\frac{\mathrm{Da}}{\mathrm{Pe}_{c}} \right)\ll 1. \tag{36}\] Therefore, we can assume that the chemokine distribution is quasi-steady, i.e., changes in the cell volume fraction lead to instantaneous adaption of the chemokine distribution. Under these assumptions and together with the vanishing of the chemokine at the channel inlet, \(c|_{x\rightarrow-\infty}=0\), we can simplify Eq. (34) to \[c(x,t)=1-\exp\left(-\mathrm{Da}\int_{-\infty}^{x}\phi(z,t)dz\right). \tag{37}\] Substituting from Eqs. (33) and (37) in the 1D form of Eq. (25) we have \[s=-\mathcal{K}\frac{\phi^{2}}{(1-\phi)^{4}}+\mathcal{M}\mathrm{Da}\,\phi\exp \left(-\mathrm{Da}\int_{-\infty}^{x}\phi(z,t)dz\right), \tag{38}\] where the nondimensional parameters \[\mathcal{K}=\frac{l_{c}^{*}\varpi^{*}U_{f}^{*}}{k_{H}^{*}}\ \ \mathrm{and}\ \ \mathcal{M}=\frac{l_{c}^{*}\chi^{*}c_{\mathrm{eq}}^{*}}{L^{*}},\] represent characteristic magnitudes of the tensotactic and chemotactic stimuli, respectively. Then, the one-dimensional spatio-temporal evolution of the cell volume fraction, \(\phi\), and flux, \(\psi\), in response to interstitial fluid flow, can be solved using Eqs. (29) and (31), together with the constitutive model for the cell stimulus in Eq. (38). We impose no flux boundary conditions at the far field, i.e., \[\frac{\partial\phi}{\partial x}=0\ \ \mathrm{and}\ \ \psi=0\ \ \mathrm{as}\ \ x\rightarrow\pm\infty. \tag{39}\] In order to solve the system of equations given by Eqs. (29), (31), and (38), we use a semi-implicit finite difference scheme; the \(x\)-derivatives are discretised using a second-order central difference method on a uniform grid spanning the interval \([-X,X]\),where \(X\gg 1\) is sufficiently large that the far-field boundary conditions given in Eq. (39) have negligible effect on the results. Advancing the system in time is achieved using Euler's forward method. The above scheme is implemented in MATLAB. The code is available at the following GitHub repository: github.com/yaronbenami/cell_migration. #### 2.2.2 Downstream and upstream migrating populations An important goal of the present model is to identify parameter regimes in which transitions between upstream and downstream migration occur. While the sign of \(\psi\) provides an indication of the average direction of cell migration, changes in the proportion of cells traveling upstream and downstream is more accurately given by \[\phi^{\rm diff}=V_{c}\left(\int_{\xi_{x}>0}fd\mathbf{\xi}-\int_{\xi_{x}<0}fd\mathbf{\xi} \right). 
\tag{40}\] It is important to note here that, because we used a probabilistic approach to develop our model, we can derive \(\phi^{\rm diff}\) from Eq. (40). This would not have been possible using a conventional multiphase model in which the average macroscopic variables are not explicitly related to microscopic velocity distributions. Integrating Eq. (1) with respect to \(\mathbf{\xi}\) for \(\xi_{x}>0\) and subtracting the integral of Eq. (1) for \(\xi_{x}<0\) we have \[\frac{\partial\phi^{\rm diff}}{\partial t}+\frac{\partial\psi^{\rm diff}}{ \partial x}=\frac{1}{\mathcal{T}}\left(\phi{\rm tanh}\left(\frac{s}{2}\right)- \phi^{\rm diff}\right)+\frac{1}{{\rm Pe}_{c}}\frac{\partial^{2}\phi^{\rm diff }}{\partial x^{2}}, \tag{41}\] where \[\psi^{\rm diff}=V_{c}U_{c}\left(\int_{\xi_{x}>0}\xi_{x}fd\mathbf{\xi}-\int_{\xi_{ x}<0}\xi_{x}fd\mathbf{\xi}\right), \tag{42}\] and we note the source (sink) term due to cells changing their migration direction from upstream to downstream (and vice versa). In order to obtain an equation for \(\psi^{\rm diff}\), we multiply Eq. (1) by \(\xi_{x}\), integrate with respect to \(\mathbf{\xi}\) for \(\xi_{x}>0\) and subtract the integral for \(\xi_{x}<0\) to obtain \[\frac{\partial\psi^{\rm diff}}{\partial t}=\frac{\mathcal{U}}{\mathcal{T}} \left(1-\frac{\tanh(s/2)}{s}\right)\phi-\frac{\psi^{\rm diff}}{\mathcal{T}}+ \frac{1}{{\rm Pe}_{c}}\frac{\partial^{2}\psi^{\rm diff}}{\partial x^{2}}. \tag{43}\] With \(\phi\) and \(s\) determined by Eqs. (29), (31) and (38), we can solve Eqs. (41) and (43) to determine \(\phi^{\rm diff}\) and \(\psi^{\rm diff}\). We define the total difference between downstream- and upstream-migrating cells, \[N^{\rm diff}(t)=\int_{-\infty}^{\infty}\phi^{\rm diff}(x,t)dx, \tag{44}\] and use this quantity as a metric to determine whether, at time \(t\), there is a dominant tendency for the cells to migrate downstream (\(N^{\rm diff}(t)>0\)) or upstream (\(N^{\rm diff}(t)<0\)). We note that by integrating Eq. (41) with respect to \(x\) we can derive an ODE for \(N^{\rm diff}\) \[\frac{dN^{\rm diff}}{dt}=\frac{R(t)-N^{\rm diff}}{\mathcal{T}}, \tag{45}\] where we have assumed no cell flux as \(x\to\pm\infty\) and \(R(t)\) is the net rate at which cells change orientation given by \[R(t)=\int_{-\infty}^{\infty}\phi{\rm tanh}\left(\frac{s}{2}\right)dx.\] Finally, we define \(\overline{\phi}_{c}(t)\) as the critical value of \(\overline{\phi}\) (the initial volume fraction at \(x=0\), see Eq. (27)) for which \(N^{\rm diff}=0\) at time \(t\), such that a transition in the overall migration tendency occurs at time \(t\) when \(\phi(0,0)=\overline{\phi}_{c}\). #### 2.2.3 Asymptotic analysis of \(\overline{\phi}_{c}\) in the limit of small stimulus The strengths of the tensotactic and chemotactic stimuli are governed by the nondimensional parameters \(\overline{\phi}\) and Da. While the parameters \(\mathcal{K}\) and \(\mathcal{M}\) also affect the value of \(s\), we will show below that the transition between downstream and upstream migration is dominated by the parameter combinations that yield \(s|_{x=0}=0\), such that only the ratio, \(\mathcal{K}/\mathcal{M}\), affects the value of \(\overline{\phi}_{c}\). In this section we estimate \(\overline{\phi}_{c}\) in the limiting case for which \(\overline{\phi},\text{Da}\ll 1\). In this limit, the stimulus in Eq. (38) scales as \(s\sim-\mathcal{K}\overline{\phi}^{2}+\mathcal{M}\text{Da}\overline{\phi}\ll 1\). Taking the \(s\ll 1\) limit of Eq. 
(31) we have that the cell advective flux, \(\psi\), is much smaller than the diffusive flux, \(\psi\sim\mathcal{U}\overline{\phi}s\ll\text{Pe}_{c}^{-1}\overline{\phi}\). Therefore, we may assume that, at leading order, the cell volume fraction is given by the unsteady diffusion equation \[\frac{\partial\phi}{\partial t}\approx\frac{1}{\text{Pe}_{c}}\frac{\partial^{ 2}\phi}{\partial x^{2}}. \tag{46}\] The solution for Eq. (46), together with the initial condition \[\phi(x,0)=\overline{\phi}\exp\left(-x^{2}\right),\] is given by \[\phi(x,t)\approx\frac{\overline{\phi}}{\sqrt{1+4t/\text{Pe}_{c}}}\exp\left(- \frac{x^{2}}{1+4t/\text{Pe}_{c}}\right). \tag{47}\] With \(s\ll 1\), it is straightforward to show that, at leading order, Eqs. (41) and (43) yield the following expressions for \(\phi^{\text{diff}}\) and \(\psi^{\text{diff}}\), \[\phi^{\text{diff}}=-\mathcal{T}\frac{\partial\psi^{\text{diff}}}{\partial x}+ O(s) \tag{48}\] and \[\psi^{\text{diff}}=\frac{1}{2}\mathcal{U}\phi+O(s^{2}). \tag{49}\] Combining Eqs. (48) and (49) we have \[\phi^{\text{diff}}=-\frac{1}{2}\mathcal{T}\mathcal{U}\frac{\partial\phi}{ \partial x}+O(s). \tag{50}\] It can be readily verified from Eq. (50) that \(\phi^{\text{diff}}\) is antisymmetric with respect to \(x=0\) (since \(\phi\) maintains its symmetry at this limit, see Eq. (47)). Thus, the leading order term for \(\phi^{\text{diff}}\) does not contribute to the integral in Eq. (44). We conclude that the contribution of the tensotactic and chemotactic stimuli to the integral arises from the \(O(s)\) terms localized around \(x=0\), where \(\partial\phi/\partial x\) vanishes. Consequently, we can assume that the tendency towards upstream or downstream migration is dominated by the stimulus value at \(x=0\). Assigning Eq. (47) to Eq. (38) we have \[s|_{x=0}\approx-\frac{\mathcal{K}}{(1+4t/\text{Pe}_{c})}\frac{\overline{\phi} ^{2}}{(1-\overline{\phi}/\sqrt{1+4t/\text{Pe}_{c}})^{4}}+\frac{\mathcal{M} \text{Da}\overline{\phi}}{\sqrt{1+4t/\text{Pe}_{c}}}\exp\left(-\frac{\sqrt{ \pi}\text{Da}\overline{\phi}}{2}\right). \tag{51}\] In Eq. (51), we retain terms that are subdominant as \(\text{Da},\overline{\phi}\ll 1\). While the higher order terms are not asymptotically valid (since we did not formally derive the next order correction terms), retaining them in our analysis was useful in order to capture the qualitative behaviour of \(\overline{\phi}_{c}\) for non-small \(\text{Da}\) and \(\overline{\phi}\) (see, for example, the local maximum in \(\overline{\phi}_{c}\) in Fig. 6 which is captured by the asymptotic expression). In the limit when \(\overline{\phi}\ll 1\) it is straightforward to show that for sufficiently small cell volume fraction [depending on the parameter \(\mathcal{K}/(\mathcal{M}\text{Da})\)] there is a dominant tendency towards downstream migration (scaled with \(\overline{\phi}\) contrary to the \(\overline{\phi}^{2}\) scaling of the upstream-directed tensotactic stimulus). This result is consistent with the experimental findings of Polacheck et al. (2011) who observed that downstream migration becomes more dominant as the cell volume fraction decreases (see Fig. 1). Equating Eq. 
(51) to zero yields the following transcendental equation for \(\overline{\phi}_{c}\): \[\frac{\mathcal{K}}{\mathcal{M}\text{Da}\sqrt{1+4t/\text{Pe}_{c}}}\frac{ \overline{\phi}_{c}\exp\left(\frac{\sqrt{\pi}\text{Da}\overline{\phi}_{c}}{ 2}\right)}{(1-\overline{\phi}_{c}/\sqrt{1+4t/\text{Pe}_{c}})^{4}}=1, \tag{52}\] which depends on the nondimensional parameter groupings, \(\mathcal{K}/\mathcal{M}\), \(\text{Da}\) and \(t/\text{Pe}_{c}\), representing the relative strength of the tensotactic to chemotactic stimulus, the ratio of chemokine reaction and advection rates, and cell-diffusive time, respectively. The dependence of \(\overline{\phi}_{c}\) on \(t\) means that the critical value of the initial volume fraction for the transition from downstream to upstream migration depends on the time that has elapsed since the cells were seeded. This is because cell diffusion reduces the value of \(\phi|_{x=0}\) as \(t\) increases. Consequently, a transition to downstream migration at sufficiently large \(t\) will always occur, regardless of the value of the initial volume fraction \(\overline{\phi}\). Alternatively, by replacing \(\overline{\phi}_{c}\rightarrow\overline{\phi}\) and \(t\to t_{c}\) in Eq. (52), the equation could be interchanged to describe the critical time in which the transition takes place, \(t_{c}\), as a function of the initial volume fraction \(\overline{\phi}\). ## 3 Results Table 1 summarises the nondimensional parameter values used to generate model simulations. We will study the behaviour of cells for a range of values of the initial volume fraction, \(\overline{\phi}\), and the Damkohler number, Da. These parameters, together with the values of the tensotactic and chemotactic signal strengths, \(\mathcal{K}\), and \(\mathcal{M}\), respectively, govern the magnitude and direction of the stimuli. In this work we keep \(\mathcal{K}\) and \(\mathcal{M}\) constant and equal, and vary the values of \(\overline{\phi}\) and Da. The cells' nondimensional velocity, \(\mathcal{U}\) and their Peclet number, \(\mathrm{Pe}_{c}\), are chosen to have physiologically relevant values and are fixed at these default values throughout the paper. We will examine two physiologically relevant values of the cells' relaxation time, \(\mathcal{T}\), corresponding to dimensional times of minutes and hours. Figure 3 illustrates the spatio-temporal evolution of the cell layer in response to the flow-induced chemotactic and tensotactic stimuli. We define the macroscopic average cell velocity as \[u_{c}^{\mathrm{avg}}=\frac{\psi}{\phi}, \tag{53}\] and plot the \(x\)-distributions of the scaled average velocity, \(u_{c}^{\mathrm{avg}}/\mathcal{U}\) (macroscopic cell velocity normalized by the microscopic cell velocity), and the cell volume fraction, \(\phi\), at times \(t=0,100,500,1000\). For the specific case of Da = 0.5 and \(\mathcal{T}=10\), we consider two initial cell volume fractions, \(\overline{\phi}\), which induce different migration behaviours: in Fig. 3(a,b), \(\overline{\phi}=0.2\), leading to dominant downstream migration at all times; by contrast, in Fig. 3(c,d), \(\overline{\phi}=0.4\), leading to upstream migration at early times, and a transition to dominant downstream migration at later times (\(t\gtrsim 300\)). To complete the picture of the different velocities in which different regions of the cell layer migrate, Fig. 4 shows how the spatial position, \(x\), at which \(\phi_{\mathrm{max}}\), the maximal volume fraction, is obtained changes over time \(t\) (solid lines). 
Also shown are the trajectories of the two spatial locations at which the volume fraction attains its half-maximal value (dashed lines). The purely diffusive (PD) trajectory, \(x_{\mathrm{PD}}\), for which \(\phi=\phi_{\mathrm{max}}/2\), is given by \[x_{\mathrm{PD}}(t;\phi=\phi_{\mathrm{max}}/2)=\pm\ln(2)\sqrt{1+4t/\mathrm{Pe}_{c}} \tag{54}\] and is included in Fig. 4 for reference. Naturally, in the purely diffusive case the location of the maximal value does not change over time, \(x_{\rm PD}(t;\phi=\phi_{\rm max})=0\).

\begin{table} \begin{tabular}{|c|c|} \hline Parameter & Value \\ \hline \(\overline{\phi}\) & 0.01-0.5 \\ Da & 0.01-10 \\ \(\mathcal{U}\) & 0.003 (a) \\ \(\mathrm{Pe}_{c}\) & 300 (b) \\ \(\mathcal{T}\) & 10, 100 (c) \\ \(\mathcal{K}\) & 10 \\ \(\mathcal{M}\) & 10 \\ \hline \end{tabular} \end{table} Table 1: Nondimensional parameter values.

Figure 3: The spatio-temporal evolution of the cell volume fraction (a,c) and scaled average velocity (b,d) starting from a Gaussian \(x\)-distribution of cells at \(t=0\), with \(\overline{\phi}=0.2\) (a,b) and \(\overline{\phi}=0.4\) (c,d). The solutions are plotted at times \(t=0\) (dashed-black line), \(t=100\) (blue), \(t=500\) (magenta), and \(t=1000\) (red). Parameter values: \(\mathrm{Da}=0.5\), \(\mathcal{T}=10\); other parameters are fixed at the values listed in Table 1.

In Fig. 3(a) we notice that cell diffusion dominates, which is also shown by the red lines in Fig. 4, where the locations of \(\phi=\phi_{\rm max}/2\) predominantly travel in the direction opposite to the volume fraction gradient. This diffusive movement is superimposed on a small shift of the cell layer to the right (downstream) due to the chemotaxis-induced positive velocity seen in Fig. 3(b), and is evidenced by the rightward bias of all the red curves in Fig. 4 (compare the slope of the dashed-red curves and the purely diffusive curves (dashed-dotted black lines)). The average cell velocity is small (up to 10% of the individual cell speed) because, although chemotaxis dominates tensotaxis, the chemotactic signal is rather weak at low volume fractions. While the cell layer in Fig. 3(a) seems to maintain its symmetry with respect to the location of the maximum volume fraction, it is possible to detect a small amount of symmetry breaking in the velocity field due to the nonlinearity of the tensotactic and chemotactic stimuli, which attain their maximum values at slightly different \(x\)-locations. At a larger initial cell volume fraction, in Fig. 3(c), we can notice that the cell layer is skewed to the left at early times due to the large negative cell velocity at the centre of the cell layer (solid blue line in Fig. 4). This is because the tensotactic signal is dominant in the region where the volume fraction is large. At early times we also notice that the stimulus-directed velocity and diffusive velocity are equal and opposite at the downstream part of the cell layer, while they reinforce each other, to generate a large upstream velocity at the upstream part of the cell layer (compare the dashed-blue curves and purely-diffusive curves in Fig. 4). At later times, we observe a reduction in the cell volume fraction due to the action of cell diffusion, which consequently leads to a transition to dominant downstream migration (notice the change to positive velocity in Fig. 3(d) and in the solid-blue line in Fig. 4 when \(t\gtrsim 100\)).
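For reference, trajectories of the kind shown in Fig. 4 can be extracted from the solver output by simple post-processing. The sketch below assumes the stored snapshots are held in \(N_t\times N_x\) arrays `phi_t` and `psi_t` on the grid `x` (illustrative variable names, not taken from the released code), and that each profile is unimodal so that the half-maximum is crossed exactly once on either side of the peak.

```matlab
% Sketch: track the peak and half-maximum positions of phi(x,t) over the stored time levels
[Nt, ~] = size(phi_t);                 % phi_t: Nt-by-Nx array of volume-fraction snapshots (assumed)
x_peak  = zeros(Nt,1);
x_half  = zeros(Nt,2);
for n = 1:Nt
    phi_n = phi_t(n,:);
    [phi_max, imax] = max(phi_n);
    x_peak(n) = x(imax);               % solid lines in Fig. 4
    % half-maximum crossings (dashed lines in Fig. 4); restrict to the flanks near the peak,
    % where the profile is assumed to vary monotonically, so interp1 receives distinct values
    left  = find(phi_n(1:imax) > phi_max/4);
    right = imax - 1 + find(phi_n(imax:end) > phi_max/4);
    x_half(n,1) = interp1(phi_n(left),  x(left),  phi_max/2);
    x_half(n,2) = interp1(phi_n(right), x(right), phi_max/2);
end
u_avg = psi_t ./ max(phi_t, eps);      % average cell velocity, Eq. (53), guarded against phi -> 0
```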
Figure 4: Different regions of the cell layer travel at different velocities. The change in the spatial position, \(x\), of \(\phi_{\mathrm{max}}\) (solid lines) and \(\phi_{\mathrm{max}}/2\) (dashed lines) as a function of time, for the two cases presented in Fig. 3, \(\overline{\phi}=0.2\) (red lines) and \(\overline{\phi}=0.4\) (blue lines). For comparison, the dashed-dotted black lines show the purely diffusive evolution of the spatial location of \(\phi_{\mathrm{max}}/2\).

To better illustrate the change in the dominant mode of migration as the cell volume fraction changes, the results presented in Fig. 5 show the spatial variation in the proportion of cells traveling downstream and upstream at time \(t=100\) when \(\mathrm{Da}=0.5\) and the initial volume fraction varies. For small \(\overline{\phi}\), most cells travel downstream. As \(\overline{\phi}\) increases, more cells travel downstream (compare the change between \(\overline{\phi}=0.1\) and \(0.2\) in Fig. 5) due to the increased production of chemokine; however, the maximum number of cells traveling downstream is no longer in the centre of the cell layer due to the increased tensotactic stimulus in the region where the cell volume fraction is maximal. As \(\overline{\phi}\) increases further, different migration directions dominate in different regions of the cell layer. On the one hand, the strong tensotactic stimulus in the centre of the cell layer, where the cell volume fraction is maximal, leads to upstream migration in this region; on the other hand, cells at the edges, where the volume fraction is smaller, continue to migrate downstream. At the critical value, \(\overline{\phi}_{c}\) (dashed-black line in Fig. 5), there is a balance between the proportion of upstream-migrating cells in the bulk of the cell layer and the proportion of downstream-migrating cells at the edges of the cell cluster. As \(\overline{\phi}\) increases beyond \(\overline{\phi}_{c}\) more cells migrate upstream. Due to the strong nonlinearity of the tensotactic stimulus, small deviations of \(\overline{\phi}\) above \(\overline{\phi}_{c}\) amplify the tendency to upstream migration. Consequently, the \(x\)-position where \(\phi^{\mathrm{diff}}\) attains its minimal value (in the region of dominant tensotaxis) moves to the left as \(\overline{\phi}\) increases, because of the large cell flux in the negative \(x\) direction which shifts the location of the maximum value of \(\phi\). Sufficiently far from the bulk of the cell layer, where the stimuli are very weak and cell diffusion dominates, all curves of \(\phi^{\mathrm{diff}}\) in Fig. 5 collapse onto a single curve. In these regions cells tend to diffuse in the direction of decreasing cell density. Having observed transitions in the favourable migration direction as the system parameters vary, it is useful to delineate the parameter regimes in which these transitions occur. For that purpose, the results presented in Fig. 6(a) show how \(\overline{\phi}_{c}\) changes as the Damkohler number, \(\mathrm{Da}\), is varied for two fixed values of the cell relaxation time, \(\mathcal{T}\), and two values of the time that has elapsed since the initial condition; all other parameters are held fixed at their default values (see Table 1).
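In the simulations, the overall migration tendency at a given time is summarised by the metric \(N^{\rm diff}\) of Eq. (44). A minimal sketch of how this diagnostic, and the reorientation rate \(R(t)\) entering Eq. (45), can be evaluated from the solver output is given below; `phi`, `s` and `phi_diff` are assumed to be the profiles on the grid `x` at the time level of interest.

```matlab
% Sketch: net downstream-vs-upstream diagnostics, Eqs. (44)-(45), evaluated by the trapezoidal rule
N_diff = trapz(x, phi_diff);           % Eq. (44): N_diff > 0 => downstream migration dominates
R      = trapz(x, phi .* tanh(s/2));   % net reorientation rate R(t) appearing in Eq. (45)
if N_diff > 0
    fprintf('Downstream migration dominates (N_diff = %.3g)\n', N_diff);
else
    fprintf('Upstream migration dominates (N_diff = %.3g)\n', N_diff);
end
```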
The characteristic time scale is given by \(L^{*}/U_{f}^{*}\sim 100\) s; we used values of \(\mathcal{T}\) corresponding to dimensional relaxation times of several minutes (\(\mathcal{T}=10\), \(\tau^{*}\sim 15\) min, red symbols) and a few hours (\(\mathcal{T}=100\), \(\tau^{*}\sim 3\) h, blue symbols), corresponding to physiologically relevant relaxation times (Wyatt et al., 2016). For the time at which data were collected (i.e., the elapsed time since the start of the experiment), we used dimensional times of several hours (\(t_{\text{short}}=100\), square symbols) and approximately one day (\(t_{\text{long}}=1000\), star symbols) to study cell behaviour on time scales which are either much smaller than or similar in magnitude to the timescale for cell migration, respectively. For each parameter combination we used the MATLAB function fzero, and our numerical scheme, to determine the value of \(\overline{\phi}\) for which \(N^{\text{diff}}=0\) at the simulated time (either \(t_{\text{short}}\) or \(t_{\text{long}}\)). Figure 6 shows that \(\overline{\phi}_{c}\) increases as the time at which data are collected increases. The grey region in Fig. 6 indicates the parameter region in which downstream migration prevails for all \(t\). This region is delineated by the critical curve, \(\overline{\phi}_{c}(\text{Da};t=0)\), on which a "transition" between upstream and downstream migration occurs at time \(t=0\). Consequently, for each value of Da, if \(\overline{\phi}\) is outside this grey region, upstream migration will dominate at early times, and a transition to downstream migration will occur at some later time. In more detail, a parameter combination in the area delineated by the two curves in Fig. 6, \(\overline{\phi}_{c}(\text{Da};t=t_{1})\) and \(\overline{\phi}_{c}(\text{Da};t=t_{2})\), will undergo a transition between upstream and downstream migration at some time in the interval \(t\in(t_{1},t_{2})\). Due to the action of cell diffusion, the cell volume fraction reduces over time and favours downstream migration at later times. Due to this mechanism, we expect that at sufficiently large times downstream migration will always prevail. However, depending on the values of the parameters, these times could be extremely long and may not be physiologically relevant. For example, the dimensional long time we used is equivalent to \(t_{\text{long}}^{*}\sim 1\,\text{day}\). This is about the maximum time scale on which the current model is applicable, since at longer time scales processes including cell proliferation and death may no longer be negligible. To further illustrate the effect of the time that has elapsed since the beginning of the experiment on the migration behaviour, Fig. 6(b) shows the transition time, \(t_{c}\), as a function of Da for a range of values of \(\overline{\phi}\). In accordance with the grey region in Fig. 6(a), for sufficiently small values of \(\overline{\phi}\), there is a range of values of Da for which physically realistic values of \(t_{c}\) do not exist (i.e., \(t_{c}<0\) in this region), and, thus, downstream migration prevails at all times. As expected, \(t_{c}\) increases as \(\overline{\phi}\) increases, reflecting the increase in the tensotactic stimulus as the cell volume fraction increases.
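For the asymptotic counterpart of this calculation, the transcendental condition (52) can itself be handed to a scalar root finder. The sketch below only illustrates that step (it evaluates the asymptotic relation, not the full numerical determination of \(N^{\rm diff}=0\) described above), with Da and \(t\) chosen as example values and the remaining parameters taken from Table 1.

```matlab
% Sketch: critical initial volume fraction from the asymptotic condition, Eq. (52)
K  = 10;  M = 10;  Pec = 300;   % tensotactic/chemotactic strengths and cell Peclet number (Table 1)
Da = 0.5; t = 100;              % example values of the Damkohler number and elapsed time
S  = sqrt(1 + 4*t/Pec);         % spreading factor of the diffusing layer, cf. Eq. (47)

res   = @(p) (K./(M*Da*S)) .* p .* exp(sqrt(pi)*Da*p/2) ./ (1 - p/S).^4 - 1;  % LHS of Eq. (52) minus 1
phi_c = fzero(res, [1e-3, 0.99*S])   % critical phi_bar; res changes sign on this bracket
```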
Figure 5: Series of plots showing how, at a fixed time point \(t=100\), the proportion of cells moving upstream and downstream changes with \(x\) for different values of the initial volume fraction, \(\overline{\phi}\). The black dashed line represents the critical volume fraction, \(\overline{\phi}_{c}\), at which the dominant mode of migration switches between downstream and upstream. Parameter values: \(\text{Da}=0.5\), \(\mathcal{T}=10\); other parameters use the values listed in Table 1.

While the time that has elapsed since the initial state may affect the value of \(\overline{\phi}_{c}\) dramatically, Fig. 6(a) shows that varying the cell relaxation time by a factor of 10 (while remaining in the physiologically relevant regime) does not significantly alter the critical value of \(\overline{\phi}_{c}\) (compare the red and blue symbols in Fig. 6(a)). The modest increase in \(\overline{\phi}_{c}\) for smaller relaxation times can be attributed to a more rapid reaction of the cells to changes in the dominant external stimulus, from tensotaxis-dominated at early times to chemotaxis-dominated at later times. This, in turn, causes the transition to occur at earlier times. The black lines in Fig. 6(a) correspond to the asymptotic behaviour of \(\overline{\phi}_{c}\) as \(\mathrm{Da}\ll 1\) and \(\overline{\phi}\ll 1\), given by Eq. (52), for \(t=100\) (solid line) and \(t=1000\) (dashed line). We note that the asymptotic model is in excellent agreement with the numerical results when \(\overline{\phi},\mathrm{Da}\ll 1\). We note that it also replicates the general trend for larger values of \(\overline{\phi}\) and \(\mathrm{Da}\). As expected, the agreement improves when the cell relaxation time increases or the elapsed time decreases. This is because larger relaxation times and smaller elapsed times mean less skewness of the cell distribution with respect to their \(x\)-symmetric initial distribution, such that the assumptions on which the asymptotic model is based (see Eq. (46) _et seq._) are better fulfilled. In the limit of \(\overline{\phi}\ll 1\) it is straightforward to show, using the asymptotic expression in Eq. (52), that when \(\mathcal{K}/\mathcal{M}\) decreases and \(\mathrm{Da}\) increases (for example, by reducing the fluid velocity), the critical volume fraction for transition from downstream to upstream migration increases. Indeed, in Polacheck et al. (2011) smaller fluid velocities were found to reduce the tendency for upstream migration (see Fig. 1). Based on our model, we attribute this behaviour to the impact that a reduction in the fluid velocity has on the mechanisms that inhibit upstream migration and promote downstream migration: (i) the fluid-cell drag force decreases, which leads to a smaller transcellular pressure gradient and a smaller tensotactic cue; (ii) the ratio of reaction to advection increases, which leads to larger chemokine gradients and a larger downstream-oriented chemotactic signal. For large values of the Damkohler number (\(\mathrm{Da}\gtrsim 3\)), we note a qualitative change in behaviour where further increases in \(\mathrm{Da}\) lead to a reduction in \(\overline{\phi}_{c}\) (i.e., a reduced tendency to migrate downstream). This is due to increased chemokine consumption by the cells as the chemokine concentration increases, which diminishes the chemotactic gradients in the downstream region of the cell layer. This non-monotonic behaviour of the critical conditions with respect to the Damkohler number is also shown in Fig. 6(b), where the transition time initially decreases with Da (corresponding to the aforementioned increase in the chemotactic stimulus), while starting to increase at \(O(1)\) values of Da.

Figure 6: The critical conditions for transition between dominant downstream and upstream migration in the parameter space of \(\mathrm{Da}\), \(\overline{\phi}\), and \(t\). (a) The critical cell volume fraction, \(\overline{\phi}_{c}\), for transition as a function of \(\mathrm{Da}\). Comparison between the asymptotic expression (Eq. (52)) at times \(t_{\mathrm{short}}=100\) (solid line) and \(t_{\mathrm{long}}=1000\) (dashed line) and numerical simulation results at the respective times (square and star symbols, respectively). Two values of the cell relaxation time, \(\mathcal{T}=10\) (red symbols) and \(\mathcal{T}=100\) (blue symbols), were simulated. The shaded grey area indicates the region in parameter space in which upstream migration does not take place for any \(t\). (b) The transition time, \(t_{c}\), as a function of \(\mathrm{Da}\) for different values of \(\overline{\phi}\), as predicted by the asymptotic model.

## 4 Conclusions The goal of the present study was to use mathematical modelling to study cell migration in response to flow-induced mechanical and chemical stimuli. We developed a hybrid probabilistic-continuum model for a two-phase mixture of fluid and cells. We started from a microscopic cell description given by a Fokker-Planck-type equation for the cell probability density function, forced by a stimulus-dependent term biasing the cell-velocity probability. Then, we used velocity-space averaging to formulate a system of continuum equations that describe how the cells' spatial distribution evolves over time at the macroscopic level, in response to the mechanochemical signal. While the use of a microscopic probabilistic description as a starting point to derive continuum models for cell migration has been previously applied [e.g., Othmer and Hillen (2002); Turner et al. (2004); Johnston et al. (2015)], to the best of our knowledge, the current approach of framing a probabilistic description of cell movement within a continuum multiphase framework describing the probability-biasing mechanochemical stimuli has not previously been used. Motivated by the experimental results of Polacheck et al. (2011), we focused on studying the migration of a one-dimensional cell layer in an infinite channel subject to a fluid flow. Contrary to purely continuum-based models, our probabilistic approach enabled us to determine how the proportion of cells travelling upstream and downstream at a given spatial location evolves over time, and to determine the critical conditions at which transitions in the dominant mode of migration occur. Through a combination of numerical simulation of the one-dimensional model and asymptotic analysis, we delineated the locus of transitions in the two-parameter plane defined by the initial cell volume fraction, \(\overline{\phi}\), and the Damkohler number, Da, the latter parameter representing the ratio of chemokine secretion to advection rates. In agreement with the experimental observations by Polacheck et al. (2011) (see Fig. 1), the current model predicts downstream-oriented chemotactic migration at low cell volume fractions, and upstream-oriented tensotactic migration at larger volume fractions. This effect can be understood by the increase in the transcellular pressure gradient and consequent tensotactic stimulus when the cell volume fraction increases. In the experiments by Polacheck et al. (2011), the distribution of cell velocity was only measured at a single time point.
By contrast, our model predicts that the time at which experimental measurements are made has an important effect on the dominant mode of migration. We identified a region of the parameter space in which the chemotactic stimulus dominates the tensotactic stimulus for all \(t\) and, thus, downstream migration prevails for the duration of the experiment. By contrast, outside this region of parameter space, upstream migration prevails at the beginning of the experiment when the cells are localised, and a transition to downstream migration occurs at later times, due to the effect of cell diffusion, which causes the distribution of cells to become more dispersed over time. This phenomenon may indicate the need to measure the cell velocities at different time points when conducting cell migration experiments. We additionally showed that an increase in Da tends to increase the importance of chemotactic migration, due to enhanced chemokine secretion by the cells. However, our model predicts that there is an optimal value of Da \(\sim O(1)\) which maximizes the chemotactic signal; as Da increases above this local maximum, chemokine degradation increases, leading to smaller chemotactic gradients in the downstream region of the cell layer. Here we mention that the current results were obtained in the purely advective limit of the chemokine propagation, i.e., neglecting diffusive effects. It is expected that including chemokine diffusion and boundary interactions (e.g., no flux) will result in a more complicated behaviour. Applying asymptotic analysis in the limit of \(\phi,\mathrm{Da}\ll 1\) we obtained an explicit formula for the critical conditions in terms of the system parameters. The asymptotic expression showed excellent agreement with the numerical results in the limit of \(\phi,\mathrm{Da}\ll 1\), while it was also able to capture the general trend at larger values of \(\phi\) and Da, including the local maximum observed in the numerical results. While not considered in this paper, the present model also allows one to calculate the microscopic cell-velocity probability distribution, \(f(\mathbf{x},t,\boldsymbol{\xi})\). In future work, this distribution could be compared to experimental measurements of the distribution of cellular velocities. In this way, it should be possible to refine the functional form of the biasing term, \(F(\mathbf{x},t,\boldsymbol{\xi})\), to achieve better agreement with the experimental results. Finally, the model developed in this paper constitutes a novel framework to study cell migration in a dynamic fluid environment. One example of such cell migration is the movement of tumour cells towards plasma-depleting blood vessels, which can lead to either vessel collapse (Padera et al., 2004) or intravasation (Roussos et al., 2011). Here, the interaction of the cells with the vessel walls may affect the flux of interstitial fluid depleted by the vessel, thus coupling the extravascular cell migration to intravascular blood flow. This phenomenon has significant implications for tumour blood flow, progression, and therapy (Stylianopoulos et al., 2012; Jain et al., 2014), and thus represents a natural topic for future work. ## Acknowledgments This work was supported by the Engineering and Physical Sciences Research Council [grant number EP/X023869/1]. ## Data Availability The datasets generated during the current study are available from the corresponding author on reasonable request.
The MATLAB code used to generate the data is available at the following GitHub repository: github.com/yaronbenami/cell_migration.
2310.20218
A Systematic Review for Transformer-based Long-term Series Forecasting
The emergence of deep learning has yielded noteworthy advancements in time series forecasting (TSF). Transformer architectures, in particular, have witnessed broad utilization and adoption in TSF tasks. Transformers have proven to be the most successful solution to extract the semantic correlations among the elements within a long sequence. Various variants have enabled transformer architecture to effectively handle long-term time series forecasting (LTSF) tasks. In this article, we first present a comprehensive overview of transformer architectures and their subsequent enhancements developed to address various LTSF tasks. Then, we summarize the publicly available LTSF datasets and relevant evaluation metrics. Furthermore, we provide valuable insights into the best practices and techniques for effectively training transformers in the context of time-series analysis. Lastly, we propose potential research directions in this rapidly evolving field.
Liyilei Su, Xumin Zuo, Rui Li, Xin Wang, Heng Zhao, Bingding Huang
2023-10-31T06:37:51Z
http://arxiv.org/abs/2310.20218v1
# A Systematic Review for Transformer-based Long-term Series Forecasting ###### Abstract The emergence of deep learning has yielded noteworthy advancements in time series forecasting (TSF). Transformer architectures, in particular, have witnessed broad utilization and adoption in TSF tasks. Transformers have proven to be the most successful solution to extract the semantic correlations among the elements within a long sequence. Various variants have enabled transformer architecture to effectively handle long-term time series forecasting (LTSF) tasks. In this article, we first present a comprehensive overview of transformer architectures and their subsequent enhancements developed to address various LTSF tasks. Then, we summarize the publicly available LTSF datasets and relevant evaluation metrics. Furthermore, we provide valuable insights into the best practices and techniques for effectively training transformers in the context of time-series analysis. Lastly, we propose potential research directions in this rapidly evolving field. Keywords:Long-term time series forecasting, Deep learning, Transformer, Self-attention, Multi-head attention + Footnote †: journal: Computer Vision and Pattern Recognition ## 1 Introduction The time series is usually a set of random variables observed and recorded sequentially over time. Key research directions for time-series data are classification [1, 2], anomaly detection [3-5], event prediction [6-8], and time series forecasting [9-11]. Time series forecasting (TSF) predicts the future trend changes of time series from a large amount of data in various fields. With the development of data collection technology, the task gradually evolves into using more historical data to predict the longer-term future, which is long-term time series forecasting (LTSF) [12, 13]. Precise LTSF can offer support to decision makers to better plan for the future by forecasting outcomes further in advance, including meteorology prediction [14], noise cancellation [15], financial long-term strategic guidance [16], power load forecasting [17, 18], and traffic road condition prediction [19]. Formerly, traditional statistical approaches were applied to time series forecasting, such as autoregressive (AR) [20], moving average (MA) [21] models, auto-regressive moving average (ARMA) [22], AR Integrated MA (ARIMA) [23], and spectral analysis techniques [24]. However, these traditional statistical methods require many a priori assumptions on the time-series prediction, such as stability, normal distribution, linear correlation, and independence. For example, AR, MA, and ARMA models are based on the assumption that time series are stationary, but in many real cases, time-series data exhibit non-stationarity. These assumptions limit the effectiveness of these traditional methods in real-world applications. As it is difficult to effectively capture the nonlinear relationships between time series with traditional statistical approaches, many researchers have studied LTSF from the perspective of machine learning (ML) [25, 26, 27, 28, 29]. Support vector machines (SVMs) [30] and adaptive boosting (AdaBoost) [31] were employed in the field of TSF. They calculate data metrics, such as minimum, maximum, mean, and variance, within a sliding window as new features for prediction. These models have somewhat solved the problem of predicting multivariate, heteroskedastic time series with nonlinear relationships. However, they suffer from poor generalization, which leads to limited prediction accuracy. 
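To make the sliding-window feature construction used by these classical ML forecasters concrete, the following minimal sketch (the function name and the toy series are illustrative, not taken from the cited works) builds min/max/mean/variance features over a rolling window together with one-step-ahead targets:

```python
import numpy as np

def sliding_window_features(series, window):
    """Summary statistics per window (min, max, mean, variance) and the
    next observation as the prediction target, as done for SVM/AdaBoost
    style forecasters described above."""
    X, y = [], []
    for t in range(window, len(series)):
        w = series[t - window:t]
        X.append([w.min(), w.max(), w.mean(), w.var()])
        y.append(series[t])            # one-step-ahead target
    return np.array(X), np.array(y)

# toy usage on a noisy sine wave
series = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.randn(500)
X, y = sliding_window_features(series, window=24)
print(X.shape, y.shape)                # (476, 4) (476,)
```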
Deep learning (DL) models (Fig.1) have greatly improved the nonlinear modeling capabilities of TSF in recent years. These models are constructed with neural network structures with powerful nonlinear modeling capabilities to learn complex patterns and feature representations in time series automatically. Therefore, DL is an effective solution for TSF and many other problems related to TSF, such as hierarchical time series forecasting [32], intermittent time series forecasting [33], sparse multivariate time series forecasting [34], and asynchronous time series forecasting [35, 36]. It has even been extended to multi-objective, multi-granular forecasting scenarios [37] and multi-modal time series forecasting scenarios [38, 39]. The advantage of deep learning models can be attributed to their profound flexibility and ability to capture long-term dependencies and handle large-scale data. It is noteworthy that recurrent neural networks (RNNs) [40] and their variants, such as long short-term memory networks (LSTMs) [41] and gated recurrent units (GRUs) [42, 43, 44], are widely employed among deep learning models to process sequence data. These models process batches of data sequentially using a gradient descent algorithm to optimize the unknown model parameters. The gradient information of the model parameters is updated by back-propagation through time [45]. However, due to the sequential processing of input data and back-propagation through time, they suffer from some limitations, especially when dealing with datasets with long dependencies. The training process of LSTM and GRU models also suffers from gradient vanishing and explosion. Though some architectural modifications and training techniques can help LSTM and GRU to alleviate the gradient-related problems to some extent, the effectiveness and efficiency of RNN-based models may still be compromised [46]. Furthermore, it is possible to apply models like Convolutional Neural Networks (CNNs) to conduct time-series analysis. On the other hand, the transformer [47] is a model that combines various mechanisms, such as attention, embedding, and encoder-decoder structures, originally developed for natural language processing. Later studies improved the transformer and gradually applied it to TSF, imaging, and other areas, progressively establishing transformers as a model family in their own right. Recent advancements in transformer-based models have shown substantial progress [12, 48, 49]. The self-attentive mechanism of the transformer allows for adaptive learning of short-term and long-term dependencies through pairwise query-key interactions. This feature grants the transformer a significant advantage in learning long-term dependencies on sequential data, enabling the creation of more robust and expansive models [50]. The performance of transformers on LTSF is impressive, and they have gradually become the current mainstream approach. The two main tasks of time-series data are forecasting and classification. Forecasting aims to predict real values from given time-series data, while the classification task categorizes given time series data into one or more target categories. Many advances have been made in time-series transformers for forecasting [12, 49, 51-59] and classification tasks [1, 60-62]. However, real-world time-series data tend to be noisy and non-stationary, and models may learn spurious, uninterpretable dependencies if time-series-related knowledge is not incorporated.
Thus, challenges remain despite the notable achievements in accurate long-term forecasting using transformer-based models. In this review, we commence with a comprehensive overview of transformer architecture in Section 2. Section 3 presents transformer-based architectures for LTSF in recent research. In Section 4, we analyze transformer effectiveness for LTSF. Subsequently, Section 5 summarizes the public datasets and evaluation metrics in LTSF tasks. Section 6 introduces several training strategies in existing transformer-based LTSF solutions. Finally, we conclude this review in Section 7. ## 2 Transformer In this section, we begin by analyzing the inherent mechanics of the transformer proposed by Vaswani et al. [63] in 2017, with the objective of presenting solutions to the challenge of neural machine translation. Fig.2 shows the transformer architecture. Subsequently, we delve into the operations within each constituent of the transformer and the underlying principles that inform these operations. Several variants of the transformer architecture have been proposed for time-series analysis; however, our discussion in this section is limited to the original architecture [64, 65]. Fig.2 Schematic diagram of transformer ### Self-attention The self-attention mechanism is a process that involves mapping a query and a sequence of key-value pairs to generate a corresponding output vector. The resulting vector is determined by the summation of weights acting on values computed from the query and key. A schematic representation of the self-attention mechanism is depicted in Fig.2. As can be seen in Fig.3, the core of the self-attention mechanism is to obtain the attention weights by comparing Q and K and then applying them to V to produce the output. Q, K, and V are the Query, Key, and Value matrices of the input sequence after linear transformation. With respect to the input sequence denoted as X, the parameters Q, K, and V are given by \[\mathrm{Q}=W_{q}X,\quad\mathrm{K}=W_{k}X,\quad\text{and}\quad\mathrm{V}=W_{v}X. \tag{1}\] Q, K, and V are computed by multiplying the input X by three different matrices (this applies only to the self-attention over the encoder's and decoder's own inputs; the Q, K, and V used in the encoder-decoder interaction are defined differently). Here, the computed Q, K, and V can be interpreted as three different linear transformations of the same input that represent three different states of it. After Q, K, and V are computed, the weight vectors can be further computed. Specifically, for the inputs Q, K, and V, the weight vectors are calculated as: \[\text{Attention}(Q,K,V)=\text{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V. \tag{2}\] The dimension of the query and key is denoted by \(d_{k}\). The attention for each position is normalized using the softmax function. The formula illustrates that the attention score matrix can be derived by executing a dot product operation between the query and key, followed by division with a scaling factor of \(\sqrt{d_{k}}\). Subsequently, the attention weights for each position are obtained by performing a softmax operation on the attention score matrix. The ultimate self-attention representation is achieved by multiplying the attention weights with the value matrix. The compatibility function employed in this process is a scaled dot product, thus rendering the computation process efficient.
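As a minimal, framework-free sketch of Eqs. (1)-(2) (using a row-vector convention, i.e., multiplying the input on the right by the weight matrices, and random toy weights rather than learned ones), single-head self-attention can be written as:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)       # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """Single-head self-attention following Eqs. (1)-(2):
    Q, K, V are linear projections of X; output = softmax(QK^T / sqrt(d_k)) V."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (L, L) attention scores
    weights = softmax(scores, axis=-1)            # row-wise normalization
    return weights @ V, weights

# toy example: sequence of length L = 6 with model dimension 8
rng = np.random.default_rng(0)
L, d_model, d_k = 6, 8, 4
X = rng.standard_normal((L, d_model))
W_q, W_k, W_v = (rng.standard_normal((d_model, d_k)) for _ in range(3))
out, attn = self_attention(X, W_q, W_k, W_v)
print(out.shape, attn.shape)                      # (6, 4) (6, 6)
```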
Additionally, the linear transformation of the inputs introduces ample expressive power. As illustrated in Fig. 2, the scaling step corresponds to the division by \(\sqrt{d_{k}}\) in Eq. 2. It is imperative to note that scaling is essential because, for larger \(d_{k}\), the value obtained after \(QK^{T}\) is excessively large, consequently causing a diminutive gradient after the softmax operation. The diminutive gradient hinders the training of the network and, thus, is not conducive to the overall outcome. ### Multi-Head Attention The self-attention mechanism serves as a solution to the sequential encoding challenge encountered in conventional sequence models. It enables the generation of a final encoded vector that incorporates attention information from multiple positions, achieved through a finite number of matrix transformations on the initial inputs. However, it is worth noting that the model's encoding of positional information may lead to an overemphasis on its own position, potentially neglecting the importance of other positions. To address this issue, the Multi-Head Attention mechanism has been introduced. As shown in Fig.4, the multi-head attention mechanism applies self-attention to multiple groups of projections of the original input sequence; the self-attention results of each group are then concatenated and passed through a linear transformation to obtain the final output. Specifically, its calculation formula is: \[\text{MultiHead}(Q,K,V)=\text{Concat}(\text{head}_{1},\ldots,\text{head}_{h})W_{o},\qquad\text{head}_{i}=\text{Attention}\big{(}QW_{Qi},KW_{Ki},VW_{Vi}\big{)}. \tag{3}\] In this context, the matrices Q, K, and V refer to the query, key, and value matrices of the input sequences, respectively, subsequent to linear transformation. The variable h denotes the number of attention heads. Additionally, the weight matrices \(W_{Qi}\), \(W_{Ki}\), and \(W_{Vi}\) are utilized to carry out the linear transformation on Q, K, and V. The output weight matrix of the multi-head attention is denoted by the symbol \(W_{o}\). The computation of a single attention head, denoted Attention in Eq. 3, is equivalent to the previously mentioned self-attention mechanism of Eq. 2. Each attention head maps the inputs through independent linear transformations and subsequently applies the attention mechanism to obtain the representation. The final output of the multi-head attention is obtained by combining the representations of all attention heads and applying a linear transformation with the output weight matrix \(W_{o}\). ### Encoder and decoder The encoder is depicted on the left-hand side of Figure 2. The encoder component encompasses two primary networks: the multi-head attention mechanism and the two-layer feed-forward neural network. Notably, residual connections are added to both network parts, and layer normalization is implemented after the residual connections. Consequently, each part's output is represented as \(\mathrm{LayerNorm}\big{(}x+\mathrm{Sublayer}(x)\big{)}\), with the Dropout operation added to both. For the two-layer, fully connected network in the second part, the specific computational procedure is outlined as follows. \[\text{FFN}(x)=\max(0,\;xW_{1}+b_{1})\,W_{2}+b_{2} \tag{4}\] The variable x symbolizes the feature representation of the input. \(W_{1}\) and \(W_{2}\) denote the weight matrices, while \(b_{1}\) and \(b_{2}\) represent the bias vectors. The \(\max(0,\cdot)\) signifies the utilization of the Rectified Linear Unit (ReLU) as the activation function.
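The following sketch illustrates Eqs. (3)-(4) under the same row-vector convention as above; it omits residual connections, layer normalization, and dropout, and uses random toy weights, so it should be read as an illustration of the computation rather than a faithful encoder implementation:

```python
import numpy as np

def multi_head_attention(X, W_q, W_k, W_v, W_o, h):
    """Multi-head attention in the spirit of Eq. (3): the projections are split
    into h heads, scaled dot-product attention is applied per head, and the
    concatenated heads are mixed by the output matrix W_o (no masking)."""
    L, d_model = X.shape
    d_head = d_model // h
    Q, K, V = X @ W_q, X @ W_k, X @ W_v                       # (L, d_model) each
    split = lambda M: M.reshape(L, h, d_head).transpose(1, 0, 2)   # (h, L, d_head)
    Qh, Kh, Vh = split(Q), split(K), split(V)
    scores = Qh @ Kh.transpose(0, 2, 1) / np.sqrt(d_head)     # (h, L, L)
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)            # softmax per head
    heads = weights @ Vh                                      # (h, L, d_head)
    concat = heads.transpose(1, 0, 2).reshape(L, d_model)     # Concat(head_1..head_h)
    return concat @ W_o

def feed_forward(X, W1, b1, W2, b2):
    """Position-wise feed-forward network of Eq. (4): max(0, xW1 + b1) W2 + b2."""
    return np.maximum(0.0, X @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(1)
L, d_model, h, d_ff = 6, 8, 2, 16
X = rng.standard_normal((L, d_model))
W_q, W_k, W_v, W_o = (rng.standard_normal((d_model, d_model)) for _ in range(4))
W1, b1 = rng.standard_normal((d_model, d_ff)), np.zeros(d_ff)
W2, b2 = rng.standard_normal((d_ff, d_model)), np.zeros(d_model)
Y = feed_forward(multi_head_attention(X, W_q, W_k, W_v, W_o, h), W1, b1, W2, b2)
print(Y.shape)   # (6, 8)
```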
The decoder is similar to the encoder, albeit with the inclusion of an extra multi-head attention mechanism that interacts with the encoder output. Unlike the encoder, the decoder comprises three parts of the network structure. The top and bottom segments resemble the encoder, save for a middle section that engages with the encoder's output (Memory), referred to as "encoder-decoder attention." In this component, the input Q is derived from the output of the preceding multi-head attention sub-layer in the decoder, while K and V are linearly transformed outputs (Memory) of the encoder component. ### Positional encoding In the context of modeling text-related data, the initial step involves its vectorization. In machine learning, one-hot coding, bag-of-words models, and TF-IDF are frequently employed techniques to represent text. However, in deep learning, it is more common to associate individual words (or tokens) with a low-dimensional, dense vector space using an embedding layer. Consequently, in the transformer model, the first task is also to vectorize the text in this manner, termed token embedding, which is commonly referred to as word embedding in deep learning. Adopting a prior network model, such as a CNN or RNN, would signify the conclusion of text vectorization, as these network architectures already possess the capacity to capture temporal features, whether in the n-gram form in CNNs or the recurrent form in RNNs. However, this does not apply to the transformer network architecture, which eliminates recurrence and convolution. As introduced above, the core operation of the self-attention mechanism is a sequence of matrix transformations that treat all positions identically. Therefore, even when the word order is disrupted, the result remains unaltered; in other words, the ordering of the original text sequence will be lost if only the self-attention mechanism is utilized. As illustrated in Fig.5, the sequence "I am writing review" underwent a linear transformation after the word embedding representation. Subsequently, we modified the sequence to "review am writing I" and executed a linear transformation employing the same weight matrix. Based on the calculation outcomes depicted in the figure, no fundamental disparity exists between the results prior to and after the exchange of sequence positions, with only the corresponding positions being interchanged. Considering these issues, the transformer adds an additional positional embedding to the token embedding of the original input text to delineate the temporal sequence of the data. In the original transformer, the authors employ sinusoidal functions to generate positional information for each dimension. \[\text{PosEmb}(pos,2i)=\sin\left(\frac{pos}{10000^{2i/d_{\text{model}}}}\right) \tag{5}\] \[\text{PosEmb}(pos,2i+1)=\cos\left(\frac{pos}{10000^{2i/d_{\text{model}}}}\right) \tag{6}\] Here, \(\text{PosEmb}(pos,2i)\) denotes the position encoding for position _pos_ and even index 2i. Similarly, \(\text{PosEmb}(pos,2i+1)\) signifies the position encoding of position _pos_ and odd index 2i+1. Notably, \(d_{\text{model}}\) refers to the model dimension of a sequence in the transformer. Incorporating this non-linear positional embedding information contributes to the comparison results in Fig.6.
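A direct implementation of Eqs. (5)-(6), reproducing the sequence length 100 and dimension 512 assumed for Fig. 7 (the function name is ours), might look as follows:

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional embedding of Eqs. (5)-(6); assumes an even d_model.
    PE[pos, 2i]   = sin(pos / 10000^(2i/d_model))
    PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model))"""
    pos = np.arange(seq_len)[:, None]                    # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]                 # (1, d_model/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)    # (seq_len, d_model/2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                         # even indices
    pe[:, 1::2] = np.cos(angles)                         # odd indices
    return pe                                            # all values lie in [-1, 1]

pe = positional_encoding(seq_len=100, d_model=512)       # the Fig. 7 setting
print(pe.shape)                                          # (100, 512)
```

In practice the positional embedding is simply added element-wise to the token embedding of the same shape before the first encoder layer.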
Fig.6 reveals that the outcomes after a linear conversion utilizing an identical weight matrix display notable disparities pre- and post-rearrangement. Thus, it can be inferred from Fig.6 that the insufficiency of the self-attentive mechanism in capturing temporal sequencing information can be compensated for by applying positional embedding. Fig.6 Positional embedding matrix for the sequence "I am writing review" Fig.7 depicts a graphical representation of the position matrix. The vertical axis denotes each Token in the input sequence, and each row corresponds to the embedding vector \(\vec{P}\). Based on the assumptions outlined in the original transformer, the sequence length is 100, and the dimension is 512. Each row contains 512 values--each with a value between 1 and -1. Consequently, one can infer that the vectors' positions on any two tokens can be interchanged, and an information model that incorporates positional cues, as illustrated in the accompanying figure, can indeed detect this distinction. ## 3 Transformer-based architectures in LTSF The design of a network needs to consider the characteristics and nature of problems. In this section, we first analyze the key problems in the LTSF tasks, followed by a discussion of some popular recent transformer-based architectures in LTSF tasks. ### LTSF's key problems LTSF is usually defined as forecasting a more distant future [12, 66]. Given the current status of the existing work, there are two main problems in the field of LTSF: complexity and dependency. LTSF requires processing a large amount of data [67], which may lead to longer training times and require more computational resources [68], as the computational complexity grows exponentially with the length of the sequence. Additionally, storing the entire sequence in memory may be challenging due to the computer's limited memory [69]. This may limit the length of the time series available for prediction [70]. Meanwhile, LTSF models need to have the ability to accurately capture the temporal relationship between past and future observations in a time series [71, 72, 73]. Long-sequence time series exhibit long-term dependence [74, 75], challenging the models' ability to capture dependencies [12]. Moreover, LTSF is characterized by inherent periodicity and non-stationarity [76], and thus, LTSF models need to learn a mixture of short-term and long-term repeated patterns in a given time series [67]. A practical model should capture both repeated ways to make accurate predictions, which imposes more stringent requirements on the prediction model regarding learning dependence. ### Transformer variants A transformer [47] mainly captures correlations among sequence data through a self-attention mechanism. Compared with the traditional deep learning architecture, the self-attention mechanism in a transformer is more interpretable. Wu et al. [53] introduced the vanilla transformer to the field of temporal prediction for influenza disease prediction. However, as mentioned above, transformers have a large computational complexity, leading to high computational costs. Moreover, the utilization of location information is not apparent, the position embedding in the model embedding process is ineffective, and long-distance information cannot be captured. A brief conclusion of recent transformer-based architectures is given in Table 1. \begin{table} \begin{tabular}{|p{56.9pt}|p{113.8pt}|p{113.8pt}|} \hline Reference & Technique & Brief information \\ \hline Wu et al. 
[53] & Transformer & Transformer for LTSF on influenza \\ \hline LogSparse transformer [49] & LogSparse self-attention + Transformer & Reduce time complexity by convolutional self-attention \\ \hline AST [56] & GAN + Transformer & Reduce error accumulation by sparse attention with GAN \\ \hline SpringNet [77] & Spring attention + Transformer & Spring attention to capture repeatable long-term dependency fluctuating patterns \\ \hline Lee et al. [78] & Partial correlation-based attention + series-wise multi-resolution & Improve pair-wise comparison-based attention disadvantages with partial correlation-based attention \\ \hline Informer [12] & ProbSparse self-attention + self-attention distilling + generative style decoder & Sparse and computationally effective \\ \hline Autoformer [67] & Sequence decomposition + auto-correlation + Transformer & Auto-correlation and sequence decomposition architecture \\ \hline Pyraformer [70] & Pyramidal attention module + Transformer & Multi-resolution representation with pyramidal attention module \\ \hline FEDformer [79] & Fourier enhanced + wavelet enhanced + Transformer & Reduce time complexity with frequency domain decomposition based on Autoformer architecture \\ \hline TCCT [51] & CNN + CSPAttention + Transformer & Reduce computational cost with CSPAttention \\ \hline Chu et al. [80] & Autoformer + Informer + Reformer + MLP & Incorporate multiple transformer variants and meta-learner \\ \hline Quadformer [81] & Learning-to-rotate attention & Quaternion architecture with learning-to-rotate attention \\ \hline \end{tabular} \end{table} Table 1: Transformer-based architectures. The time complexity of self-attention computation in a transformer is initially established at \(\text{O}(\text{L}^{2})\), leading to high computational cost. Some subsequent works have been developed to optimize this time complexity and the long-term dependency of transformer-based models. The LogSparse transformer [49] model first introduced the transformer to the field of TSF, making the transformer more feasible for time series with long-term dependencies. The LogSparse transformer allows each time step to attend only to previous time steps selected with an exponential step size. It proposed convolutional self-attention by employing causal convolutions to produce queries and keys, reducing the memory utilization from \(\text{O}(\text{L}^{2})\) to \(\text{O}(L\log L)\) in each self-attention layer. The prediction accuracy achieved for fine-grained, long-term dependent time series can be improved in cases with limited memory. Informer [12] uses the ProbSparse self-attention mechanism, further reducing the computational complexity of the traditional transformer model to \(\text{O}(L\log L)\). At the same time, inspired by dilated convolution in [83] and [84], it also introduced the self-attention distilling operation to remove redundant combinations of value vectors to reduce the total space complexity of the model. In addition, it designed a generative style decoder to produce long sequence outputs with only one forward step to avoid accumulation error. The Informer architecture was tested on various datasets and performed better than models such as Autoregressive Integrated Moving Average (ARIMA) [85], Prophet [86], LSTMa [87], LSTNet [88], and DeepAR [89]. The Autoformer [67] is a simple seasonal-trend decomposition architecture with an auto-correlation mechanism working as an attention module. It achieves \(\text{O}(L\log L)\) computational time complexity.
This deep decomposition architecture embeds the sequence decomposition strategy into the encoder-decoder structure as an internal unit of Autoformer. In contrast, TCCT [51] designs a CSP attention module that merges CSPNet with a self-attentive mechanism and replaces the typical convolutional layer with an expanded causal convolutional layer, thereby modifying the distillation operation employed by Informer to achieve exponential receptive field growth. In addition, the model develops a penetration mechanism for stacking self-attentive blocks to obtain finer information at negligible additional computational costs. Pyraformer [70] is a novel model based on hierarchical pyramidal attention by letting the maximum length of the signal traversal path be a constant concerning the sequence length L and can achieve theoretical O(L) complexity. Pyraformer conducts both intra-scale and inter-scale attentions, which capture temporal dependencies in an individual resolution and build a multi-resolution representation of the original series, respectively. Similarly, Triformer [13] proposed a triangular, variable-specific attention architecture, which achieves linear complexity through patch attention while proposing a lightweight approach to enable variable-specific model parameters. FEDformer [79] achieves O(L) linear computational complexity by designing two attention modules that process the attention operation in the frequency domain with the Fourier transform [90] and wavelet transform [91], respectively. Instead of applying transformer to the time domain, it applies it to the frequency domain, which helps it better expose potential periodic information in the input data. The Conformer [82] model uses the fast Fourier transform to extract correlation features of multivariate variables. It employs a sliding window approach to improve the operational efficiency of long-period forecasting, sacrificing global information extraction and complex sequence modeling capabilities, thereby reducing the time complexity to O(L). To address the problems of long-term dependency, Lin et al. [77] established SpringNet for solar prediction. They proposed a DTW attention layer to capture the local correlations of time-series data, which helps capture repeatable fluctuation patterns and provide accurate predictions. For the same purpose, Chu et al. [80] combined Autoformer, Informer, and Reformer to propose a prediction model based on stacking ensemble learning. Chen et al. [81] proposed a Quadformer framework in which learning-to-rotate attention introduces learnable period and phase information to describe complex periodic patterns, trend normalization to model normalization of the sequence representation in the hidden layer, and decoupling of the LRA by using the global memory, to efficiently fit multi-periodic complex patterns in the LTSF while achieving linear complexity without loss of prediction accuracy. To alleviate the problem of redundant information input in LTSF, the Mueformer proposed by Zeng et al. [68] enhances the features by inputting the multi-perceptual domain processing mechanism, while the multi-cornered attention head mechanism and the attention head pruning mechanism enhance the expression of multi-head attention. Each of these efforts takes a different perspective on optimizing the parametric part of the model, but a general architecture and component that can reduce the number of required model parameters has not yet emerged. 
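To illustrate the sequence decomposition idea that Autoformer-style models embed as an internal block, the sketch below separates a series into a moving-average trend and a seasonal residual; the kernel size, the replication padding at the edges, and the toy series are our own illustrative choices rather than the settings of any particular paper:

```python
import numpy as np

def series_decomposition(x, kernel_size=25):
    """Moving-average decomposition: the trend-cyclical component is a moving
    average of the input, and the seasonal component is the residual x - trend.
    Replication padding keeps the output the same length as the input."""
    pad_left = (kernel_size - 1) // 2
    pad_right = kernel_size - 1 - pad_left
    padded = np.concatenate([np.repeat(x[:1], pad_left), x, np.repeat(x[-1:], pad_right)])
    kernel = np.ones(kernel_size) / kernel_size
    trend = np.convolve(padded, kernel, mode="valid")   # same length as x
    seasonal = x - trend
    return seasonal, trend

t = np.arange(400)
x = 0.05 * t + np.sin(2 * np.pi * t / 24) + 0.2 * np.random.randn(400)
seasonal, trend = series_decomposition(x, kernel_size=25)
print(seasonal.shape, trend.shape)                      # (400,) (400,)
```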
In addition to the previously mentioned transformer-based architectures, other architectural modifications have emerged in recent years. For example, the Bidirectional Encoder Representations from Transformers (BERT) [92] model is built by stacking transformer encoder modules and introducing a new training scheme. Pre-training the encoder modules is task-independent, and decoder modules can be added later and fine-tuned to the task. This scheme allows BERT models to be trained on large amounts of unlabeled data. BERT architecture has inspired many new transformer models for time-series data [1, 57, 60]. However, compared to NLP tasks, time-series data include various data types [1, 12, 93]. Thus, the pre-training process will have to be different for each task. This task-dependent pre-training contrasts with the NLP tasks, which can start with the same pre-trained models, assuming all tasks are based on the same language semantics and structure. Generative adversarial networks (GANs) consist of the generator and the discriminator, learning adversarially from each other. The generator-discriminator learning principle has been applied to the time-series forecasting task [56]. The authors use a generative adversarial encoder-decoder framework to train a sparse transformer model for time-series forecasting, solving the problem of being unable to predict long series due to error accumulation. The adversarial training process improves the model's robustness and generalization ability by directly shaping the output distribution of the network to avoid error accumulation through one-step-ahead inference. TranAD [94] applied GAN-style adversarial training with two transformer encoders and two transformer decoders to gain stability. As a simple transformer-based network tends to miss slight deviations of anomaly, an adversarial training procedure can amplify reconstruction errors. TFT [54] designs a multi-horizon model with static covariate encoders, a gating feature selection module, and a temporal self-attention decoder. It encodes and selects valuable information from various covariates information to perform forecasting. It also preserves interpretability by incorporating global and temporal dependencies and events. SSDNet [95] combines the transformer with state space models (SSM), which use the transformer part to learn the temporal pattern and estimate the SSM parameters; the SSM parts perform the seasonal-trend decomposition to maintain the interpretable ability. While MT-RVAE [96] combines the transformer with Variational AutoEncoder (VAE), it focuses on data with few dimensions or sparse relationships. A multi-scale transformer is designed to extract different levels of global time-series information. AnomalyTrans [60] combines transformer and Gaussian prior association to make rare anomalies more distinguishable. Prior association and series association are modeled simultaneously. The minimax strategy optimizes the anomaly model to constrain the prior and series associations for more distinguishable association discrepancies. GTA [3] contains a graph convolution structure to model the influence propagation process. Replacing vanilla multi-head attention with a multi-branch attention mechanism combines global-learned attention, multi-head attention, and neighborhood convolution. GTN [62] applies a two-tower transformer, with each tower working on time-step-wise attention and channel-wise attention, respectively. 
A learnable weighted concatenation is used to merge the features of the two towers. Aliformer [57] makes time-series sales forecasts using knowledge-guided attention with a branch to revise and denoise the attention map. In addition, some researchers have made corresponding network improvements for specific applications. First, in transportation applications, the spatiotemporal graph transformer [97] proposes an attention-based graph convolution mechanism for learning a more complex temporal-spatial attention pattern, applied to pedestrian trajectory prediction. Traffic transformer [55] designs an encoder-decoder structure using a self-attention module to capture the temporal dependencies and a graph neural network (GNN) module to capture the spatial dependencies. Spatial-temporal transformer networks introduced a temporal transformer block to capture the temporal dependencies and a spatial transformer block to assist a graph convolution network in capturing more spatial dependencies [98]. There are also applications for event prediction. Event forecasting or prediction aims to predict the times and marks of future events given the history of past events, which is often modeled by temporal point processes (TPP) [6]. The self-attentive Hawkes process (SAHP) [7] and transformer Hawkes process (THP) [8] adopt the transformer encoder architecture to summarize the influence of historical events and compute the intensity function for event prediction. They modify the positional encoding by translating time intervals into sinusoidal functions to utilize the intervals between events. Later, a more flexible model named attentive neural datalog through time (ANDTT) [99] was proposed to extend the SAHP/THP schemes by embedding all possible events and times with attention. ## 4 Transformer effectiveness for LTSF Is the transformer effective in the time series forecasting domain? The response we provide is affirmative. Since the publication of Zeng's article, "Are transformers effective for time series forecasting?" [100], the feasibility of utilizing transformer models for time series forecasting has emerged as a significant subject of scholarly discourse. This is particularly noteworthy as a straightforward model emerged victorious over considerably more intricate transformer models, prompting substantial debate. Zeng claimed that transformer-based models are not effective in time series forecasting. They compare the transformer-based models with a simple linear model, DLinear, which uses the decomposition layer structure of Autoformer and which, they claim, outperforms the transformer-based models. A transformer with different positional and temporal embeddings retains very limited temporal relationships and is prone to overfitting on noisy data, whereas a linear model preserves the natural ordering of the series and, with fewer parameters, can avoid overfitting. However, Nie et al. [101] present a novel solution to tackle the loss of temporal information induced by the self-attention mechanism. This approach is rooted in transformer time-series prediction and involves transforming the time-series data into a patch format akin to that of the Vision Transformer. This conversion preserves the localization of the time series, with each patch serving as the smallest unit for attention computation.
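The patching idea of [101] can be sketched as follows; the patch length and stride values are illustrative placeholders rather than the settings used in the original work, and each resulting patch would then be embedded and treated as a single attention token:

```python
import numpy as np

def patch_series(x, patch_len=16, stride=8):
    """Split a univariate look-back window into overlapping patches; each patch
    becomes one token for the attention computation."""
    num_patches = 1 + (len(x) - patch_len) // stride
    return np.stack([x[i * stride:i * stride + patch_len] for i in range(num_patches)])

x = np.sin(np.linspace(0, 30, 336))            # a look-back window of length 336
patches = patch_series(x, patch_len=16, stride=8)
print(patches.shape)                           # (41, 16): 41 tokens instead of 336
```

Because the attention cost grows quadratically with the number of tokens, working on 41 patch tokens rather than 336 point tokens is one way such models shorten the effective sequence seen by self-attention while keeping local temporal structure inside each token.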
\begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|l|}{Models} & \multicolumn{2}{c|}{PatchTST/64} & \multicolumn{2}{c|}{PatchTST/42} & \multicolumn{2}{c|}{DLinear} & \multicolumn{2}{c|}{FEDformer} & \multicolumn{2}{c|}{Autoformer} & \multicolumn{2}{c|}{Informer} \\ \hline \multicolumn{2}{|l|}{Metric} & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE \\ \hline \multirow{4}{*}{Electricity} & 96 & 0.129 & 0.222 & 0.130 & 0.222 & 0.140 & 0.237 & 0.186 & 0.302 & 0.196 & 0.313 & 0.304 & 0.393 \\ \cline{2-13} & 192 & 0.147 & 0.240 & 0.148 & 0.240 & 0.153 & 0.249 & 0.197 & 0.311 & 0.211 & 0.324 & 0.327 & 0.417 \\ \cline{2-13} & 336 & 0.163 & 0.259 & 0.167 & 0.261 & 0.169 & 0.267 & 0.213 & 0.328 & 0.214 & 0.327 & 0.333 & 0.422 \\ \cline{2-13} & 720 & 0.197 & 0.290 & 0.202 & 0.291 & 0.203 & 0.301 & 0.233 & 0.344 & 0.236 & 0.342 & 0.351 & 0.427 \\ \hline \end{tabular} \end{table} Table 2: Multivariate long-term forecasting results on the electricity dataset

The findings in Table 2 demonstrate that research focused on transformer-based time-series prediction underscores the significance of integrating temporal information to improve the model's prediction performance. On the other hand, the transformer's effectiveness is reflected in Large Language Models (LLMs). LLMs are powerful transformer-based models, and numerous previous studies have shown that transformer-based models are capable of learning potentially complex relationships among textual sequences [102, 103]. It is reasonable to expect LLMs to have the potential to understand complex dependencies among numeric time series augmented by temporal textual sequences. The current endeavor for time series LLMs encompasses two primary strategies. One approach involves the creation and preliminary training of a fundamental, comprehensive model specifically tailored for time series. This model can be subsequently fine-tuned to cater to various downstream tasks. This path represents the most rudimentary solution, drawing upon a substantial volume of data and imbuing the model with time-series-related knowledge through pre-training. The second strategy involves fine-tuning based on the LLM framework, wherein corresponding mechanisms are devised to adapt the time series, enabling its application to existing language models. Consequently, this facilitates processing diverse time-series tasks using the pre-existing language models. This path poses challenges and necessitates the ability to transcend the original language model. A straightforward linear model may have its advantages in specific circumstances; however, it may not be capable of effectively handling extensive time series information on the same level as a more intricate model, such as the transformer. In summary, it is evident that the transformer model remains far from obsolete in time series forecasting. Nonetheless, having abundant training data to fully unlock its immense potential is crucial. Unfortunately, there is currently a scarcity of publicly available datasets that are sufficiently large for time series forecasting. The vast majority of existing pre-trained time-series models are trained using public datasets like Traffic and Electricity. Despite these benchmark datasets serving as the foundation for developing time series forecasting, their limited size and lack of generalizability pose significant challenges for large-scale pre-training.
Thus, in the context of time-series prediction, the most pressing matter is the development of expansive and highly generalized datasets (similar to ImageNet in computer vision). This crucial step will undoubtedly propel the advancement of time-series analysis and training models while enhancing the capacity of training models in time-series prediction. Additionally, this development underscores the transformer model's effectiveness in successfully capturing long-term dependencies within a sequence while maintaining superior computational efficiency and a more comprehensive feature representation capability. ## 5 Public datasets and evaluation metrics In this section, we summarize some common applications and relevant public LTSF datasets. Also, we discuss prediction performance evaluation metrics in LTSF. ### Common applications and public datasets #### 5.1.1 Finance LTSF is commonly used in finance to predict economic cycles [104], fiscal cycles, and long-term stock trends [105]. In the stock market, LTSF can predict future trends and fluctuations in stock prices [106], helping investors to develop more accurate investment strategies. In financial planning, LTSF can predict future economic conditions, such as income, expenses, and profitability, to help individuals or businesses better plan their financial goals and capital operations [107]. In addition, LTSF can predict a borrower's repayment ability and credit risk [108] or predict future interest rate trends to help financial institutions conduct loan risk assessments for better monetary and interest rate policies. We summarized the open-source LTSF datasets in the finance field in recent years in Table 3. \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Dataset & Referen ce & Data information & Min-granularity \\ \hline Gold prices & [109] & Daily gold prices from 2014.1 to 2018.4, including & 1 day \\ \hline \end{tabular} \end{table} Table 3: Finance LTSF dataset #### 5.1.2 Energy In the energy field, LTSF is often used to assist in developing long-term resource planning strategies [119]. It can help companies and governments forecast future energy demand to better plan energy production and supply. It can also help power companies predict future power generation to ensure a sufficient and stable power supply [120]. In addition, LTSF can help governments and enterprises to develop energy policy planning or manage the energy supply chain [121]. These applications can help enterprises and governments better plan, manage, reduce risks, improve efficiency, and realize sustainable development. We summarized the energy field's open-source datasets in recent years in Table 4. #### 5.1.3 Transportation In urban transportation, LTSF can help urban traffic management predict future traffic flow [124] for better traffic planning and management. 
\begin{table} \begin{tabular}{|p{56.9pt}|p{113.8pt}|p{113.8pt}|} \hline Dataset & Reference & Data information & Min-granularity \\ \hline Power & [118] & The electricity consumption of a household, including voltage, electricity consumption, and other characteristics from 2006.12 to 2010.11 [https://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption](https://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption) & 1 minute \\ \hline Solar energy & [49, 56, 112] & The highest solar power production from 137 photovoltaic plants in Alabama in 2006 [https://www.nrel.gov/grid/solar-power-data.html](https://www.nrel.gov/grid/solar-power-data.html) & 5 minutes \\ \hline Electricity & [56, 67, 111, 122] & The electricity consumption of 321 customers between 2011 and 2014 [https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014](https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014) & 15 minutes \\ \hline Wind & [49, 56] & Hourly estimates of energy potential as a percentage of the maximum output of power plants for a European region between 1986 and 2015 [https://www.kaggle.com/sohier/30-years-ofeuropean-wind-generation](https://www.kaggle.com/sohier/30-years-ofeuropean-wind-generation) & 1 hour \\ \hline ETT & [12, 67] & The load and oil temperature of power transformers from 2016.7 to 2018.7 [https://github.com/zhouhaoyi/ETDataset](https://github.com/zhouhaoyi/ETDataset) & 15 minutes \\ \hline Sanyo & [77] & Daily solar power generation data from two photovoltaic plants in Alice Springs, Northern Territory, Australia from 2011.1 to 2017.1 [http://dkasolarcentre.com.au/source/alicesprings/dka-m4-b-phase](http://dkasolarcentre.com.au/source/alicesprings/dka-m4-b-phase) & 1 day \\ \hline Hanergy & [77] & Daily solar power generation data from two photovoltaic plants in Alice Springs, Northern Territory, Australia from 2011.1 to 2016.12 [https://dkasolarcentre.com.au/source/alicesprings/dka-m16-b-phase](https://dkasolarcentre.com.au/source/alicesprings/dka-m16-b-phase) & 1 day \\ \hline Power grid data & [123] & Grid data of State Grid Shanghai Municipal Electric Power Company from 2014.1 to 2015.2 & 1 day \\ \hline \end{tabular} \end{table} Table 4: Energy LTSF dataset

It can also be used to predict future traffic congestion [125], future traffic accident risks, and traffic safety issues [126] for better traffic safety management and accident prevention. We summarized the open-source datasets in the transportation field in recent years in Table 5.
\begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Dataset & Reference & Data information & Min-granularity \\ \hline Paris metro line & [25] & The passenger flow on Paris metro lines 3 and 13 between 2009 and 2010 [http://www.neural-forecasting-competition.com/](http://www.neural-forecasting-competition.com/) & 1 hour \\ \hline PeMS03, PeMS04, PeMS07, PeMS08, & [56, 67, 111, 112, 127, 129] & traffic flow data & 30s \\ \hline Birmingham Parking & [115] & The parking lot ID, parking lot capacity, parking lot occupancy, and update time for 30 parking lots operated by Birmingham National Car Park from 2016.10 to 2016.12 [http://archive.ics.uci.edu/ml/datasets/parking+birmi](http://archive.ics.uci.edu/ml/datasets/parking+birmi) ngham & 30 min \\ \hline METR-LA & [111, 127] & Traffic information collected by loop detectors on Los Angeles County freeways from 2012.3 to 2012.6 [https://drive.google.com/drive/folders/10FOTa6HX](https://drive.google.com/drive/folders/10FOTa6HX) PqX8Pf5WRoRwcFnW9BrNZEIX & 5 min \\ \hline PEMS-BAY & [111, 127] & Traffic speed readings from 325 sensors collected by PeMS, the California Transit Agency Performance Measurement System from 2017.1 to 2017.5 [https://drive.google.com/drive/folders/10FOTa6HX](https://drive.google.com/drive/folders/10FOTa6HX) PqX8Pf5WRoRwcFnW9BrNZEIX & 30s \\ \hline SPMD & [130] & The driving records of approximately 3,000 drivers in Ann Arbor, Michigan, from 2015.5 to 2015.10 [https://github.com/ElmiSay/DeepFEC](https://github.com/ElmiSay/DeepFEC) & 1 hour \\ \hline VED & [130] & The fuel and energy consumption of various personal vehicles operating under different realistic driving conditions in Michigan, US, from 2017.11 to 2018.11 [https://github.com/ElmiSay/DeepFEC](https://github.com/ElmiSay/DeepFEC) & 1 hour \\ \hline England & [128] & National average speeds and traffic volumes derived from UK freeway traffic data from 2014.1 to 2014.6 [http://tris.highwaysengland.co.uk/detail/trafficflow](http://tris.highwaysengland.co.uk/detail/trafficflow) data & \\ \hline TaxiBJ+ & [131] & The distribution and trajectory of more than 3,000 cabs in Beijing & 30 min \\ \hline \end{tabular} \end{table} Table 5: Transportation LTSF dataset #### 5.1.4 Meteorology and medicine The application of LTSF in meteorology mainly focuses on predicting long-term climate trends. For example, LTSF can be used to predict long-term climate change [134], providing a scientific basis for national decision-making in response to climate change. It can also issue early warnings for natural climate disasters [135] to mitigate potential hazards to human lives and properties. In addition, LTSF can predict information such as sea surface temperature and marine meteorology for the future [136], providing decision support for industries such as fisheries and marine transportation. We summarized the open-source datasets in the meteorology and medicine fields in recent years in Table 6 and Table 7, respectively. 
\begin{table} \begin{tabular}{|p{56.9pt}|p{113.8pt}|p{113.8pt}|} \hline Dataset & Reference & Data information & Min-granularity \\ \hline Beijing PM2.5 & [137] & Hourly PM2.5 data and associated meteorological data for Beijing from 2010.1 to 2014.12 & 1 hour \\ \hline Hangzhou temperature & [114] & Daily average temperature of Hangzhou from 2011.1 to 2017.1 & 1 day \\ \hline WTH & [67] & Weather conditions throughout 2020 & 10 min \\ \hline USHCN & [34] & Continuous daily meteorological records from 1887 to 2009 & 1 day \\ \hline KDD-CUP & [34] & PM2.5 measurements from 35 monitoring stations in Beijing from 2017.1 to 2017.12 & 1 hour \\ \hline US & [129] & Weather datasets from 2012 to 2017 from 36 & 1 hour \\ \hline \end{tabular} \end{table} Table 6: Meteorology LTSF dataset

In the medical field, LTSF can be applied to various stages of drug development. For example, predicting a drug's toxicity, pharmacokinetics, pharmacodynamics, and other parameters helps researchers optimize the drug design and screening process [138]. In addition, LTSF can predict medical needs over a certain period [139]. These predictions can be used to allocate and plan medical resources rationally.

\begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Dataset & Reference & Data information & Min-granularity \\ \hline ILI & [53, 67] & Data on patients with influenza-like illness recorded weekly by the Centers for Disease Control and Prevention from 2002 to 2021 [https://gis.cdc.gov/grasp/fluview/fluportaldashboard.html](https://gis.cdc.gov/grasp/fluview/fluportaldashboard.html) & 1 week \\ \hline COVID-19 & [140] & Daily data on confirmed and recovered cases collected from 2020.1 to 2020.6 in Italy, Spain, France, China, US, and Australia [https://github.com/CSSEGISandData/COVID-19](https://github.com/CSSEGISandData/COVID-19) & 1 day \\ \hline 2020 OhioT1DM & [141] & Eight weeks of continuous glucose monitoring, insulin, physiological sensor, and self-reported life event data for each of 12 patients with type 1 diabetes in 2020 [http://smarthealth.cs.ohio.edu/OhioT1DM-dataset.html](http://smarthealth.cs.ohio.edu/OhioT1DM-dataset.html) & 5 min \\ \hline MIMIC-III & [34] & A public clinical dataset with over 58,000 admission records from 2001 to 2012 [http://mimic.physionet.org](http://mimic.physionet.org) & 1 hour \\ \hline \end{tabular} \end{table} Table 7: Medicine LTSF dataset

### Evaluation metrics In this section, we discuss prediction performance evaluation metrics in the field of TSF. According to [142], the prediction accuracy metrics can be divided into three groups: scale-dependent, scale-independent, and scaled error metrics, based on whether the evaluation metrics are affected by the data scale and how the data scale effects are eliminated. Let \(Y_{t}\) denote the observation at time \(t\) (\(t=1\),..., \(n\)) and \(F_{t}\) denote the forecast of \(Y_{t}\). Then define the forecast error \(e_{t}=Y_{t}-F_{t}\). #### 5.2.1 Scale-dependent measures Scale-dependent measures are evaluation metrics whose scale depends on the scale of the original data; they are the most widely used evaluation metrics in forecasting. These are useful when comparing different methods applied to the same datasets but should not be used, for example, when comparing across datasets with different scales.
The most commonly used scale-dependent measures are based on the absolute error or squared errors: Mean Square Error (MSE) \(=\) mean(\(e_{t}^{2}\)) (7) Root Mean Square Error (RMSE) \(=\sqrt{\text{MSE}}\) (8) Mean Absolute Error (MAE) \(=\) mean(\(|e_{t}|\)) (9) Median Absolute Error (MdAE) \(=\) median(\(|e_{t}|\)) (10) Historically, the RMSE and MSE have been popular, largely because of their theoretical relevance in statistical modeling. However, they are more sensitive to outliers than MAE or MdAE [143]. #### 5.2.2 Scale-independent measures Scale-independent measures are evaluation metrics not affected by the scale of the original data. They can be divided more specifically into three subcategories: measures based on percentage errors, measures based on relative errors, and relative measures. The percentage error is \(p_{t}=100\,e_{t}/Y_{t}\). The most commonly used measures are: Mean Absolute Percentage Error (MAPE) \(=\) mean(\(|p_{t}|\)) (11) Median Absolute Percentage Error (MdAPE) \(=\) median(\(|p_{t}|\)) (12) Root Mean Square Percentage Error (RMSPE) \(=\sqrt{\text{mean}(p_{t}^{2})}\) (13) Root Median Square Percentage Error (RMdSPE) \(=\sqrt{\text{median}(p_{t}^{2})}\) (14) Percentage errors have the advantage of being scale-independent and so are frequently used to compare forecast performance across different datasets. However, these measures have the disadvantage of being infinite or undefined if \(Y_{t}=0\) for any \(t\) in the period of interest, and they have an extremely skewed distribution when any value of \(Y_{t}\) is close to zero. The MAPE and MdAPE also have the disadvantage of putting a heavier penalty on positive errors than negative errors. Measures based on percentage errors are often highly skewed, and, therefore, transformations (such as logarithms) can make them more stable [144]. An alternative way of scaling is to divide each error by the error obtained using another standard forecasting method. Let \(r_{t}=e_{t}/e_{t}^{*}\) denote the relative error, where \(e_{t}^{*}\) is the forecast error obtained from the benchmark method. Usually, the benchmark method is the random walk, where \(F_{t}\) is equal to the last observation. Mean Relative Absolute Error (MRAE) \(=\) mean(\(|r_{t}|\)) (15) Median Relative Absolute Error (MdRAE) \(=\) median(\(|r_{t}|\)) (16) Geometric Mean Relative Absolute Error (GMRAE) \(=\) gmean(\(|r_{t}|\)) (17) A serious deficiency of relative error measures is that \(e_{t}^{*}\) can be small. In fact, \(r_{t}\) has infinite variance because \(e_{t}^{*}\) has a positive probability density at 0. The use of "winsorizing" can trim extreme values, which will avoid the difficulties associated with small values of \(e_{t}^{*}\) [145], but it adds some complexity to the calculation and a level of arbitrariness, as the amount of trimming must be specified. Rather than use relative errors, one can use relative measures. For example, let \(\mathrm{MAE}_{b}\) denote the MAE from the benchmark method. Then, a relative MAE is given by \[\mathrm{relMAE}=\mathrm{MAE}/\mathrm{MAE}_{b}. \tag{18}\] An advantage of these methods is their interpretability. However, they require several forecasts on the same series to compute the MAE (or MSE).
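The measures above are straightforward to compute; the following sketch (function name ours) evaluates the main scale-dependent and percentage-error metrics from Eqs. (7)-(12), together with the relative MAE of Eq. (18) against a last-observation benchmark:

```python
import numpy as np

def forecast_metrics(y_true, y_pred, y_bench=None):
    """MSE, RMSE, MAE, MdAE, MAPE, MdAPE and, if a benchmark forecast is given,
    the relative MAE (relMAE = MAE / MAE_b)."""
    e = y_true - y_pred
    out = {
        "MSE":   np.mean(e ** 2),
        "RMSE":  np.sqrt(np.mean(e ** 2)),
        "MAE":   np.mean(np.abs(e)),
        "MdAE":  np.median(np.abs(e)),
        "MAPE":  np.mean(np.abs(100.0 * e / y_true)),    # undefined if any y_true == 0
        "MdAPE": np.median(np.abs(100.0 * e / y_true)),
    }
    if y_bench is not None:
        out["relMAE"] = out["MAE"] / np.mean(np.abs(y_true - y_bench))
    return out

rng = np.random.default_rng(2)
y_true = 10 + np.sin(np.linspace(0, 12, 96)) + 0.1 * rng.standard_normal(96)
y_pred = y_true + 0.2 * rng.standard_normal(96)
y_bench = np.concatenate([y_true[:1], y_true[:-1]])      # last-observation benchmark
print(forecast_metrics(y_true, y_pred, y_bench))
```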
#### 5.2.3 Scaled errors Scaled errors were first proposed in [142] and can be used to eliminate the effect of data scale by comparing the prediction results with those obtained by a baseline method (usually the naive method). The following scaled error is commonly used: \[q_{t}=\frac{e_{t}}{\frac{1}{n-1}\sum_{t=2}^{n}|Y_{t}-Y_{t-1}|}. \tag{19}\] Therefore, the Mean Absolute Scaled Error is simply \(\mathrm{MASE}=\mathrm{mean}(|q_{t}|)\). There is a simple way to understand this evaluation metric: the denominator can be understood as the average in-sample error of the naive one-step-ahead predictions. If \(\mathrm{MASE}>1\), then the experimental method under evaluation is worse than the naive prediction, and vice versa. While MdASE is similar to MASE, the way MASE is calculated using the mean makes it more susceptible to outliers, whereas MdASE, calculated using the median, has stronger robustness and validity. However, such metrics can only reflect the results of comparison with the baseline method and cannot visualize the error of the prediction results. ## 6 Training strategies Recent transformer variants introduce various time-series features into the models for improvements [67, 70]. In this section, we summarize several training strategies of existing transformer-based models for LTSF. ### Preprocessing and embedding In the preprocessing stage, normalization with zero mean is often applied in time-series tasks. Moreover, seasonal-trend decomposition is a standard method to make raw data more predictable [146, 147]; its use as an internal block of transformer-based LTSF models was first proposed by Autoformer [67], which uses a moving average kernel on the input sequence to extract the trend-cyclical component of the time series. The seasonal component is the difference between the original sequence and the trend component. FEDformer [79] further proposes a mixture-of-experts strategy to mix the trend components extracted by moving average kernels with various kernel sizes. The self-attentive layer in the transformer architecture cannot preserve the positional information of the time series. However, local positional information, i.e., the ordering of the time series, is essential. Furthermore, global time information, such as hierarchical timestamps (weeks, months, years) and agnostic timestamps (holidays and events), is also informative [12]. To enhance the temporal context of the time-series input, a practical design is to inject multiple embeddings into the input sequence, such as fixed positional encoding and learnable temporal embeddings. Additionally, the introduction of temporal embeddings accompanied by temporal convolutional layers [49] or learnable timestamps [67] has been proposed as an effective means to further enhance the temporal context of the input data. ### Iterated multi-step and direct multi-step The time series forecasting task is to predict the values at the \(T\) future time steps. When \(T>1\), iterated multi-step (IMS) forecasting [148] learns a single-step forecaster and iteratively applies it to obtain multi-step predictions. Alternatively, direct multi-step (DMS) forecasting [149] optimizes the multi-step forecasting objective simultaneously. The variance of the IMS predictions is smaller than that of DMS forecasting due to the autoregressive estimation procedure, but IMS is inevitably subject to error accumulation effects. Therefore, IMS forecasting is more desirable when highly accurate single-step forecasters exist and \(T\) is relatively small.
In contrast, DMS forecasting produces more accurate forecasts when unbiased single-step forecast models are difficult to obtain or when \(T\) is large. Applying the vanilla transformer model to the LTSF problem has some limitations, including the quadratic time/memory complexity of the original self-attention scheme and the error accumulation caused by the autoregressive decoder design. Alternative transformer variants have been developed to overcome these challenges, each employing distinct strategies. For instance, LogTrans [49] introduces a dedicated decoder for IMS forecasting, while Informer [12] leverages a generative-style decoder. Additionally, Pyraformer [70] incorporates a fully connected layer that concatenates spatiotemporal axes as its decoder. Autoformer [67] combines the refined trend-cyclical component with a stacked auto-correlation mechanism applied to the seasonal component to obtain the final predictions. Similarly, FEDformer [79] applies a decomposition scheme and employs its proposed frequency attention block to produce the final results.

## 7 Conclusion

The transformer architecture has found application in a wide variety of time-series tasks. Built on self-attention and positional encoding, it offers performance similar to or better than RNNs and LSTM/GRU variants, while being more efficient in computation time and overcoming several of their other shortcomings. In this paper, we summarized the application of the transformer to LTSF. First, we provided a thorough examination of the fundamental structure of the transformer. Subsequently, we analyzed and summarized the advantages of transformers on LTSF tasks. Because LTSF tasks confront the transformer with long and intricately interdependent sequences, numerous adaptations have been introduced to the original architectural framework, equipping transformers with the capacity to handle LTSF tasks effectively. This architectural augmentation, however, brings certain challenges during the training process. To address this, we have incorporated a compendium of best practices that facilitate the practical training of transformers. Additionally, we have collected abundant resources on TSF and LTSF, including datasets, application fields, and evaluation metrics. In summary, our comprehensive review presents an in-depth examination of recent advancements in transformer-based architectures for LTSF and imparts valuable insights to researchers seeking to improve their models.

The transformer architecture is renowned for its remarkable modeling capacity and aptitude for capturing long-term dependencies. However, it faces time-complexity challenges when applied to LTSF tasks, and efforts to reduce this complexity may inadvertently discard certain interdependencies between data points, thereby compromising prediction accuracy. Consequently, the amalgamation of various techniques within a compound model, leveraging the strengths of each, emerges as a promising avenue for future research in transformer-based LTSF models. This paves the way for innovative model designs, data processing techniques, and benchmarking approaches to tackle the intricate LTSF problems.
Notably, researchers have recently explored the integration of Large Language Models (LLMs) in time series forecasting, wherein LLMs exhibit the capability to generate forecasts while offering human-readable explanations for predictions, outperforming traditional statistical models and machine learning approaches. These encouraging findings present a compelling impetus for further exploration, aiming to enhance the precision, comprehensibility, and transparency of forecasting results. ## Conflict of interest The authors declare that they have no conflict of interest. ## Data availability Data sharing not applicable to this article as no datasets were generated or analyzed during the current study. ## Acknowledgment This work was supported by the Project of the Educational Commission of Guangdong Province of China (2022ZDJS113), and the Natural Science Foundation of Top Talent of SZTU (GDRC20221).
2301.01597
Problem-Dependent Power of Quantum Neural Networks on Multi-Class Classification
Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood. Some QNNs with specific encoding methods can be efficiently simulated by classical surrogates, while others with quantum memory may perform better than classical classifiers. Here we systematically investigate the problem-dependent power of quantum neural classifiers (QCs) on multi-class classification tasks. Through the analysis of expected risk, a measure that weighs the training loss and the generalization error of a classifier jointly, we identify two key findings: first, the training loss dominates the power rather than the generalization ability; second, QCs undergo a U-shaped risk curve, in contrast to the double-descent risk curve of deep neural classifiers. We also reveal the intrinsic connection between optimal QCs and the Helstrom bound and the equiangular tight frame. Using these findings, we propose a method that uses loss dynamics to probe whether a QC may be more effective than a classical classifier on a particular learning task. Numerical results demonstrate the effectiveness of our approach to explain the superiority of QCs over multilayer Perceptron on parity datasets and their limitations over convolutional neural networks on image datasets. Our work sheds light on the problem-dependent power of QNNs and offers a practical tool for evaluating their potential merit.
Yuxuan Du, Yibo Yang, Dacheng Tao, Min-Hsiu Hsieh
2022-12-29T10:46:40Z
http://arxiv.org/abs/2301.01597v3
# Demystify Problem-Dependent Power of Quantum Neural Networks on Multi-Class Classification ###### Abstract Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood. Some QNNs with specific encoding methods can be efficiently simulated by classical surrogates, while others with quantum memory may perform better than classical classifiers. Here we systematically investigate the problem-dependent power of quantum neural classifiers (QCs) on multi-class classification tasks. Through the analysis of expected risk, a measure that weighs the training loss and the generalization error of a classifier jointly, we identify two key findings: first, the training loss dominates the power rather than the generalization ability; second, QCs undergo a U-shaped risk curve, in contrast to the double-descent risk curve of deep neural classifiers. We also reveal the intrinsic connection between optimal QCs and the Helstrom bound and the equiangular tight frame. Using these findings, we propose a method that uses loss dynamics to probe whether a QC may be more effective than a classical classifier on a particular learning task. Numerical results demonstrate the effectiveness of our approach to explain the superiority of QCs over multilayer Perceptron on parity datasets and their limitations over convolutional neural networks on image datasets. Our work sheds light on the problem-dependent power of QNNs and offers a practical tool for evaluating their potential merit. ## I Introduction The advent of hardware fabrication pushes the boundary of quantum computing from verifying its superiority on artificial tasks [1, 2, 3] to conquering realistic problems with merits [4, 5, 6]. This has led to the emergence of a popular paradigm known as quantum neural networks (QNNs), which combine variational quantum Ansatze with classical optimizers [7, 8]. So far, various QNN-based methods have been proposed to address difficult problems in areas such as quantum physics [9, 10, 11, 12], quantum information theory [13, 14, 15, 16], combinatorial optimization [17, 18, 19, 20, 21], and machine learning [22, 23, 24, 25, 26]. Among these applications, QNNs are often deployed as _quantum classifiers_ (QCs) to predict correct labels of the input data [27, 28, 29, 30, 31, 32], e.g., categorize image objects [33, 34, 35], classify phases of quantum matters [36, 37, 38, 39], and distinguish entangled states from separable states [40, 41]. To comprehend the full potential of existing quantum classifiers (QCs) and to spur the development of novel QCs, huge efforts have been made to unveil the learnability of QCs [42, 43, 44]. Prior literature establishes the foundations of QCs from three primary aspects, i.e., model capacity [45, 46, 47, 48], trainability [49, 50, 51], and generalization [52, 53, 54, 55, 56, 57]. Nevertheless, the advantages and constraints of QCs have rarely been proven [57, 58, 59, 60, 61, 62]. Meanwhile, previous results cannot rigorously explain the empirical observations such that QCs generally outperform classical classifiers (CCs) on handcraft or quantum data [44, 63] but are inferior to them on realistic problems [64]. As a result, the need for QCs to address classical issues remains highly questionable. A principal criteria in characterizing the power of a classifier is the expected risk [65], which weighs the empirical risk (i.e., training loss) and the generalization error (i.e., test loss) jointly. 
An _optimal_ classifier is one which achieves zero expected risk. As shown in Fig. 1(a), the success of deep neural classifiers is attributed to their double-descent risk curves [66, 67]. This means that as the hypothesis space is continually expanded, the expected risk of a trained deep neural classifier initially decreases, then increases, and, once the classifier overfits the train set, undergoes a second descent. As such, to show the superiority of QCs over CCs, it is necessary to distill ubiquitous rules that capture the risk curve of diverse QCs, in addition to conditions under which the expected risk of QCs can be lower than that of CCs.

In this study, we unify a broad class of QCs in the same framework and understand their problem-dependent ability under the expected risk (see Fig. 1(b)). Our analysis reveals two substantial outcomes: (i) trainability dominates QCs' ability more than generalization ability; (ii) QCs undergo a U-shape risk curve instead of the double-descent curve of CCs. These outcomes consolidate and refine previous observations. Concretely, the first outcome suggests that the deficiency of QCs on classical data stems from their limited ability to fit the train set, resulting in a larger training loss compared to CCs. The second outcome highlights the distinct learning behavior of QCs and CCs. Despite the fact that overparameterization is fundamental to enhancing the performance of CCs, it adversely affects the power of QCs. In line with the diverse dynamics of the risk curves for QCs and CCs, we devise an efficient problem-dependent method to recognize potential merits of QCs, as shown in Fig. 1(a). Conceptually, for a given learning task, our method fits the loss (risk) dynamics of the QC and the CC under the respective priors (i.e., U-shape versus double descent) and then identifies the advantage regime where the risk of the QC is lower than that of the CC. Numerical simulations are conducted to support our theoretical results.

On the technical level, we approach the two outcomes by separately quantifying the empirical risk and generalization error of QCs. Specifically, we first prove conditions of QCs that lead to near-zero empirical risk, the geometric interpretation of which is depicted in Fig. 1(c). As a byproduct, we elucidate how such conditions are inherently linked to quantum state discrimination and quantum measurement theory. In addition, we prove that deep QCs can never reach a vanishing empirical risk by utilizing the concentration property of quantum observables [68, 69]. We next analyze the generalization error of QCs by exploiting algorithmic robustness [70]. The derived bound surpasses prior results because it is the first non-vacuous bound in the over-parameterized regime. By combining the unreachable zero empirical risk with the controllable generalization error, we obtain the first outcome. The second outcome is obtained by integrating the fact that deep QCs are unable to reach a vanishing empirical risk with the first outcome.

## II Main results

_Expected risk._-- Let us first introduce a \(K\)-class (\(K\geq 2\)) classification task. Denote the input space as \(\mathcal{X}\), the label (class) space as \(\mathcal{Y}=\{1,\cdots,K\}\), and the train set as \(\mathcal{D}=\bigcup_{k=1}^{K}\{(\mathbf{x}^{(i,k)},y^{(i,k)})\}_{i=1}^{n_{k}}\) with \(|\mathcal{D}|\) samples drawn i.i.d. from an unknown probability distribution \(\mathbb{D}\) on \(\mathcal{Z}=\mathcal{X}\times\mathcal{Y}\).
In standard scenarios, the number of train samples in each class is the same, i.e., \(n_{1}=...=n_{k}\equiv n_{c}\) and \(|\mathcal{D}|:=n=Kn_{c}\). The purpose of a classification algorithm \(\mathcal{A}\) is using \(\mathcal{D}\) to infer a hypothesis (a.k.a., a classifier) \(h_{\mathcal{A}_{\mathcal{D}}}:\mathcal{X}\rightarrow\mathbb{R}^{K}\) from the hypothesis space \(\mathcal{H}\) to separate train examples from different classes. This is equivalent to identifying an optimal hypothesis in \(\mathcal{H}\) minimizing the _expected risk_\(\mathsf{R}(h)=\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim\mathbb{D}}[\ell(h(\mathbf{x}),y)]\), where \(\ell(\cdot,\cdot)\) is the per-sample loss and for clarity we specify it as the square error with \(\ell(\mathbf{a},\mathbf{b})=\frac{1}{2}\|\mathbf{a}-\mathbf{b}\|_{2}^{2}\)[71]. Unfortunately, the inaccessible distribution \(\mathbb{D}\) forbids us to assess the expected risk directly. In practice, \(\mathcal{A}\) alternatively learns an _empirical classifier_\(\hat{h}\in\mathcal{H}\), as the global minimizer of the (regularized) loss function \[\mathcal{L}(h,\mathcal{D})=\frac{1}{n}\sum_{i=1}^{n_{c}}\sum_{k=1}^{K}\ell(h( \mathbf{x}^{(i,k)}),y^{(i,k)})+\mathfrak{E}(h), \tag{1}\] where \(\mathfrak{E}(h)\) is an optional regularizer. The foremost role of the risk means that quantum advantages can be ascertained if \(\mathsf{R}(\hat{h}_{Q})<\mathsf{R}(\hat{h}_{C})\), where \(\hat{h}_{Q}\) and \(\hat{h}_{C}\) are the empirical QC and CC on \(\mathcal{D}\). Unlike conventions merely focusing on a QC on one specific task, the above criteria orients to unearth _ubiquitous rules_ of QCs with computational advantages. To reconcile the intractable issue of \(\mathsf{R}(\hat{h})\) and proceed the following analysis, we decomposed it into two measurable terms, i.e., \[\mathsf{R}(\hat{h})=\mathsf{R}_{\text{ERM}}(\hat{h})+\mathsf{R}_{\text{Gene}}( \hat{h}), \tag{2}\] where \(\mathsf{R}_{\text{ERM}}(\hat{h})=\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{K}\ell( \hat{h}(\mathbf{x}^{(i,k)}),y^{(i,k)})\) is the _empirical risk_ and \(\mathsf{R}_{\text{Gene}}(\hat{h})=\mathsf{R}(\hat{h})-\mathsf{R}_{\text{ERM}}( \hat{h})\) is the _generalization error_. Based on Eq. (2), detecting advances of QCs is translated to deriving under what conditions do QCs commit both lower \(\mathsf{R}_{\text{ERM}}\) and \(\mathsf{R}_{\text{Gene}}\) than CCs. To better elucidate our results, let us recall that the general form of QC is \(\hat{h}_{Q}=\arg\min_{h_{Q}\in\mathcal{H}_{Q}}\mathcal{L}(h_{Q},\mathcal{D})\) Figure 1: **Risk curve and geometry of the unified QCs.** (a) The risk curve of QCs and CCs are highlighted by the solid red and blue lines (labeled by ‘Q-\(\mathcal{R}\)’ and ‘C-\(\mathcal{R}\)’), respectively. The former yields a ‘U’ shape while the latter yields a double-descent tendency. Potential advantages of QCs are dominated by the empirical risk, highlighted by the dashed curve. The shaded region refers to the potential merits of QCs. (b) The unified QC consists of two parts, the feature state \(\rho\) and the measure operator \(\mathbf{o}\). This model covers diverse QCs. 
(c) Geometric relationship between \(\{\rho^{(i,k)}\}\) and \(\mathbf{o}\) of QCs with (near) zero training loss: (i) the feature states associated with train samples belonging to the same class concentrate around their class-feature mean, i.e., \(\bar{\rho}^{*(k)}:=\rho^{*(1,k)}=...=\rho^{*(n_{c},k)}\) for \(\forall k\in[K]\); (ii) the class-feature means are maximally distant with each other, i.e., \(\operatorname{Tr}(\bar{\rho}^{*(k)}\bar{\rho}^{*(k^{\prime})})\sim\delta_{k,k^{ \prime}}\); (iii) the measure operator should align with class-feature means, i.e., \(\operatorname{Tr}(\bar{\rho}^{*(k)}o^{*(k^{\prime})})\sim\delta_{k,k^{\prime}}\). where \(\mathcal{L}\) is defined in Eq. (1) and \(\mathcal{H}_{Q}\) is the hypothesis space. For an \(N\)-qubit QC, its hypothesis space is \[\mathcal{H}_{Q}=\left\{\left[h_{Q}(\cdot,U(\mathbf{\theta}),O^{(k)})\right]_{k=1:K} \left|\mathbf{\theta}\in\mathbf{\Theta}\right.\right\}, \tag{3}\] where \([\cdot]_{k=1:K}\) is a \(K\)-dimensional vector, its \(k\)-th entry \(h_{Q}(\mathbf{x},U(\mathbf{\theta}),O^{(k)})=\mathrm{Tr}(O^{(k)}U(\mathbf{\theta})\sigma( \mathbf{x})U(\mathbf{\theta})^{\dagger})\) for \(\forall k\in[K]\) refers to the output (prediction) of quantum circuits, \(\sigma(\mathbf{x})=U_{E}(\mathbf{x})(\ket{0}\bra{0}^{\otimes N}U_{E}(\mathbf{x})^{\dagger}\) is the input state of \(\mathbf{x}\) with the encoding circuit \(U_{E}(\cdot)\), \(\mathbf{O}=\{O^{(k)}\}_{k=1}^{K}\) is a set of measure operators, and \(U(\mathbf{\theta})\) is the adopted Ansatz with trainable parameters \(\mathbf{\theta}\) living in the parameter space \(\mathbf{\Theta}\). Without loss of generality, we define \(U(\mathbf{\theta})=\prod_{l=1}^{N_{l}}(u_{l}(\mathbf{\theta})u_{e})\in\mathcal{U}(2^{ N})\), where \(u_{l}(\mathbf{\theta})\in\mathcal{U}(2^{m})\) is the \(l\)-th parameterized quantum gate operated with at most \(m\) qubits (\(m\leq N\)) and \(u_{e}\) refers to fixed quantum gates. Similarly, we define \(U_{E}(\mathbf{x})=\prod_{g=1}^{N_{g}}u_{g}(\mathbf{x})\in\mathcal{U}(2^{N})\), where \(u_{g}(\mathbf{x})\in\mathcal{U}(2^{m})\) refers to the \(g\)-th quantum gate operated with at most \(m\) qubits, and \(N_{g}\) gates contain \(N_{ge}\) tunable gates and \(N_{g}-N_{ge}\) fixed gates. Due to the diverse constructions of \(U(\mathbf{\theta})\) and \(U_{E}(\cdot)\), it is necessary to unify various QCs into the same framework to obtain the generic results. Notably, the unified QC should be _agnostic to_ particular forms of these two terms. A feasible way is rewritten \(h_{Q}(\cdot,U(\mathbf{\theta}),O^{(k)})\) as \[h_{Q}(\rho^{(i,k)},o^{(k)}):=\mathrm{Tr}(\rho^{(i,k)}o^{(k)})\ \forall k\in[K], \tag{4}\] where \(O^{(k)}=\mathbb{I}_{2^{N-D}}\otimes o^{(k)}\) with the nontrivial local operator \(o^{(k)}\in\mathbb{C}^{2^{D}\times 2^{D}}\), \(D\) describes the locality, and \(\rho^{(i,k)}=\mathrm{Tr}_{D}(U(\mathbf{\theta})\sigma(\mathbf{x}^{(i,k)})U(\mathbf{\theta} )^{\dagger})\) corresponds to the state before measurements, named as _feature state_. An intuition of the unified QC is shown in Fig. 1(b). We are now ready to exploit the unified framework to analyze the expected risk of QCs. Let \(\mathbf{\rho}=\{\rho^{(i,k)}\}\) and \(\mathbf{o}=\{o^{(k)}\}\) be two sets collecting all feature states and measure operators. The following theorem exhibits conditions in which QCs allow a low expected risk, where the formal statement and the proof are deferred to SM A. **Theorem 1** (informal).: _Following notations in Eqs. 
(1)-(4), when the train data size is \(nO(KN_{ge}\log\frac{KN_{g}}{\epsilon\delta})\) with \(\epsilon\) being the tolerable error, and the optimal sets of \(\mathbf{\rho}^{*}\) and \(\mathbf{o}^{*}\) satisfy three conditions: (i) the feature states have the vanished variability in the same class; (ii) all feature states are equal length and are orthogonal in the varied classes; (iii) any feature state is alignment with the measure operator in the same class, with probability \(1-\delta\), the expected risk of QC tends to be zero, i.e., \(\mathsf{R}(\hat{h}_{Q})\to 0\)._ Conditions (i)-(iii) visualized in Fig. 1(c) sculpt the geometric interpretations of \(\mathbf{\rho}^{*}\) and \(\mathbf{o}^{*}\). These properties come across the design philosophy of CCs, e.g., linear discriminant analysis and neural collapse phenomenon appeared in most deep neural classifiers [71, 72, 73]. Moreover, these conditions unveil the intrinsic connection between optimal QCs and the quantum state discrimination [74], since \(\mathbf{\rho}^{*}\) and \(\mathbf{o}^{*}\) should maximize the Helstrom bound [75], which explains the ultimate limit of QCs observed in [76]. However, as will be explained later (see Corollary 1 and Lemma 1), under certain scenarios, it is hard for QCs to meet these conditions. A typical instance is applying QC to learn the image dataset, where the difficulty stems from the limited nonlinearity of QC to fit the train set, thereby inducing a large empirical risk. Conditions (i)-(iii) also imply how the quantum measurement theory can be used to guide the design of QCs. Namely, the mean feature states of each class \(\{\tilde{\rho}^{\star(k)}\}\) compose the equiangular tight frame (ETF) and Condition (iii) suggests that the optimal measure operators \(\{\mathbf{o}^{*}\}\) also satisfy this ETF [77]. Due to the relation between symmetric informationally complete (SIC) measurements and ETF [78, 79], the optimal measure operators could be estimated by various SIC construction strategies [80]. Besides, the locality \(D\) of \(\{\mathbf{o}^{*}\}\) should be carefully selected in QCs in which a small \(D\) precludes the acquisition of the optimal QCs (i.e., the complex ETF does not exist when \(2^{D}=K\)[81, 82]), while an extremely large \(D\) may incur the barren plateaus [83, 84]. Furthermore, when \(K\) is large, Pauli-based measurements are preferable than computational basis measurements in QCs, since the former allows classical shadow techniques to accelerate the training of QCs [85, 86]. The scaling behavior of \(n\) indicates that it is data-efficient for QCs to attain a low generalization error, where the size of train set only linearly depends on the class number \(K\) and the number of encoding gates \(N_{ge}\) (see Lemma 3 for the technical elaboration). In other words, the generalization error of QCs can be well controlled by the modest-size train data. According to Theorem 1, the challenges in satisfying Conditions (i)-(iii) and the well controlled generalization error pinpoint that the risk of a QC is mostly dominated by its empirical loss rather than its generalization error. In this view, the core in devising advanced QCs is tailoring \(\mathcal{H}_{Q}\) in Eq. (3) so that \(\hat{h}_{Q}\) has a (near) zero empirical risk on \(\mathcal{D}\), or equivalently examining whether the employed QCs can fulfill Conditions (i)-(iii). _U-shape risk curve._--The risk curve concerns how the expected risk of a classifier behaves with the varied hypothesis space. 
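Before examining the risk curve, the unified prediction rule (4) and Conditions (i)-(iii) can be made concrete in a few lines of NumPy. The sketch below computes a feature state by a partial trace, evaluates the predictions \(\operatorname{Tr}(\rho\,o^{(k)})\), and reports simple diagnostics for the three conditions (within-class spread of the feature states, the Gram matrix of the class-feature means, and the alignment of the means with the measure operators). The qubit ordering, with \(o^{(k)}\) acting on the trailing \(D\) qubits, is an assumption of this sketch, and all inputs are plain NumPy arrays supplied by the user.

```python
import numpy as np

def feature_state(U_theta, sigma_x, n_qubits, d_local):
    """Reduced state of the trailing D qubits of U sigma U^dagger (the feature state rho)."""
    full = U_theta @ sigma_x @ U_theta.conj().T
    dim_out, dim_keep = 2 ** (n_qubits - d_local), 2 ** d_local
    t = full.reshape(dim_out, dim_keep, dim_out, dim_keep)
    return np.einsum("ajak->jk", t)          # trace out the leading subsystem

def qc_prediction(rho, observables):
    """h_Q(rho, o^(k)) = Tr(rho o^(k)) for each class k, as in Eq. (4)."""
    return np.array([np.real(np.trace(rho @ o)) for o in observables])

def condition_diagnostics(states_per_class, observables):
    """Rough checks of Conditions (i)-(iii). states_per_class: {k: [rho_1, ...]},
    observables: {k: o_k}. Returns per-class spread, the Gram matrix of the
    class-feature means, and their alignment with the measure operators."""
    classes = sorted(states_per_class)
    means = {k: np.mean(states_per_class[k], axis=0) for k in classes}
    spread = {k: float(np.mean([np.linalg.norm(r - means[k])
                                for r in states_per_class[k]])) for k in classes}
    gram = np.array([[np.real(np.trace(means[a] @ means[b])) for b in classes]
                     for a in classes])
    align = np.array([[np.real(np.trace(means[a] @ observables[b])) for b in classes]
                      for a in classes])
    return spread, gram, align
```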
It is desired that, as with deep neural classifiers, QCs undergo a double-descent risk curve, in the sense that over-parameterized QCs attain a low expected risk when the number of trainable parameters \(N_{t}\) is much greater than the number of train samples \(n\). If so, 'over-parameterization' could serve as a golden rule in designing QCs. However, the following corollary refutes the existence of the double-descent risk curve for QCs. **Corollary 1**.: _Following notations in Theorem 1, when \(\{U_{E}(\mathbf{x})|\mathbf{x}\in\mathcal{X}\}\) follows the Haar distribution, with probability \(1-\delta\), the empirical QC follows \(|\,\mathrm{Tr}\left(\sigma(\mathbf{x}^{(i,k)})\sigma(\mathbf{x})\right)-\frac{1}{2^{N}}|\leq\sqrt{\frac{3}{2^{2N}\delta}}\). When \(\{U(\mathbf{\theta})|\mathbf{\theta}\in\Theta\}\) follows the Haar distribution, with probability \(1-\delta\), the empirical QC follows \(|\operatorname{Tr}(\rho^{(i,k)}o^{(k^{\prime})})|\leq\sqrt{\frac{\operatorname{Tr}(o^{(k^{\prime})})^{2}+2\operatorname{Tr}((o^{(k^{\prime})})^{2})}{2^{2N}\delta}}\)._ The proof is deferred to SM B. The achieved results reveal the caveat of deep QCs. Specifically, when \(U_{E}(\mathbf{x})\) is deep, two encoded states \(\sigma(\mathbf{x}^{(i,k)})\) and \(\sigma(\mathbf{x}^{(i^{\prime},k)})\) from the same class tend to be orthogonal, which contradicts Condition (i) in Theorem 1. Besides, when \(U(\mathbf{\theta})\) is deep, the output of the QC concentrates to zero, regardless of how \(o^{(k^{\prime})}\) and \(\rho^{(i,k)}\) are selected. This violates Condition (iii) in Theorem 1. Overall, over-parameterized QCs encounter a high empirical risk and thus a high expected risk, which suggests that QCs experience a _U-shape risk curve_. This observation distinguishes the dynamics of QCs from those of variational quantum eigensolvers, since the latter can benefit from over-parameterization, e.g., better trainability and convergence rate [87, 88, 89, 90]. Moreover, the rule of thumb in constructing QCs is to slim \(\mathcal{H}_{Q}\) so as to find the valley region. Intriguingly, tailoring the feature states echoes quantum metric learning and quantum self-supervised learning [91, 92, 93, 94, 95]. _Probe power of QCs via loss dynamics._--The distinct tendencies of the risk curves of QCs and CCs provide a succinct way to recognize potential quantum advantages. As shown in Fig. 1(a), given a specific data set, the U-shape risk curve of QCs indicates that their advantages mostly appear in the valley region. Precisely, if the risk values of the QC around the basin are lower than those of the CC, potential merits may exist; otherwise, the QC is inferior to the CC. The proved learning behavior of QCs, accompanied by the tight generalization bound, allows us to effectively fit their risk curve from the loss dynamics. Specifically, our method contains three steps. First, \(W\) tuples of \(\{n,N_{t},T\}\) are initialized based on Theorem 1 so that the collected risk points of the QC span the basin area with low generalization error. Second, we execute the QC and the CC under these \(W\) hyper-parameter settings and fit their loss dynamics to attain the risk curves. Last, we compare the two risk curves and probe potential advantages. See SM F for the implementation details. _Technical analysis._--Theorem 1 is achieved by analyzing when \(\mathsf{R}_{\mathrm{ERM}}(\hat{h}_{Q})\) and \(\mathsf{R}_{\mathrm{Gene}}(\hat{h}_{Q})\) are (near) zero.
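As an aside before the technical analysis, the comparison logic of this three-step probe can be sketched in a few lines; the exact fitting procedure is given in SM F, and the polynomial forms of the fits used below (a quadratic in \(\log N_{t}\) for the QC, a more flexible polynomial for the CC) are assumptions made purely for illustration.

```python
import numpy as np

def fit_risk_curve(n_params, risks, degree):
    """Least-squares polynomial fit of risk estimates against log(N_t)."""
    return np.polyfit(np.log(n_params), risks, deg=degree)

def advantage_regime(qc_points, cc_points, grid):
    """Return the N_t values on `grid` where the fitted QC risk lies below the
    fitted CC risk. qc_points / cc_points are (n_params, risks) pairs collected
    from the W hyper-parameter settings."""
    qc_fit = fit_risk_curve(*qc_points, degree=2)   # U-shape prior for the QC
    cc_fit = fit_risk_curve(*cc_points, degree=4)   # more flexible prior for the CC
    qc_risk = np.polyval(qc_fit, np.log(grid))
    cc_risk = np.polyval(cc_fit, np.log(grid))
    return grid[qc_risk < cc_risk]
```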
In the analysis of \(\mathsf{R}_{\mathrm{ERM}}(\hat{h}_{Q})\), we first consider the most general case in which both \(\mathbf{\rho}\) and \(\mathbf{o}\) are tunable, where \(\hat{h}_{Q}\equiv h_{Q}(\mathbf{\rho}^{*},\mathbf{o}^{*})\) with \((\mathbf{\rho}^{*},\mathbf{o}^{*})=\min_{\mathbf{\rho},\mathbf{o}}\mathcal{L}(\mathbf{\rho},\mathbf{ o})\). **Lemma 1** (Informal).: _When the regularizer \(\mathfrak{E}\) is considered and \((\mathbf{\rho}^{*},\mathbf{o}^{*})\) meets the three conditions in Theorem 1, the global minimizer leads to \(\mathsf{R}_{\mathrm{ERM}}(\hat{h}_{Q})=C_{1}^{2}/2\) with \(C_{1}\) depending on the hyper-parameters in \(\mathfrak{E}\)._ The achieved properties of \(\mathbf{o}^{*}\) can be used as a priori to simplify QCs. To this end, the following lemma quantifies \(\mathsf{R}_{\mathrm{ERM}}(\hat{h}_{Q})\) when \(\mathbf{o}\) is predefined and \(\mathfrak{E}=0\), where \(\hat{h}_{Q}\equiv h_{Q}(\mathbf{\rho}^{*},\mathbf{o})\) with \(\mathbf{\rho}^{*}=\min_{\mathbf{\rho}}\mathcal{L}(\mathbf{\rho},\mathbf{o})\). **Lemma 2** (Informal).: _When the predefined \(\{o^{(k)}\}\) are mutually orthogonal with each other and the conditions in Theorem 1 are satisfied, we have \(\mathsf{R}_{\mathrm{ERM}}(\hat{h}_{Q})=0\)._ The proofs of Lemmas 1 and 2 are given in SM C&D. We next analyze \(\mathsf{R}_{\mathrm{Gene}}(\hat{h}_{Q})\). Prior results cannot be used to prove Theorem 1, since such bounds polynomially scale with the trainable parameters and become vacuous in the over-parameterized regime. To remedy this issue, we utilize the concept of algorithmic robustness [70]. **Definition 1** (Robustness).: _A learning algorithm \(\mathcal{A}\) is \((R,\nu(\cdot))\)-robust with \(R\in\mathbb{N}\) and \(\nu(\cdot):\mathcal{Z}^{n}\to\mathbb{R}\), if \(\mathcal{Z}\) can be partitioned into \(R\) disjoint sets, denoted by \(\{C_{r}\}_{r=1}^{R}\), such that the following holds for all \(\mathcal{D}\subset\mathcal{Z}^{n}:\forall\mathbf{s}=(\mathbf{x}^{(i)},y^{(i)})\in \mathcal{D}\), \(\forall z=(\mathbf{x},y)\in\mathcal{Z}\), \(\forall r\in[R]\),_ \[\mathbf{s},\mathbf{z}\in\mathcal{C}_{r}\Rightarrow|l(h_{\mathcal{A}_{D}}(\mathbf{x}^{(i)}), y^{(i)})-l(h_{\mathcal{A}_{D}}(\mathbf{x}),y)|\leq\nu(\mathcal{D}).\] Concisely, robustness measures how much the loss value can be varied with respect to the input space \(\mathcal{Z}\). A higher robustness of a classifier admits lower \(R\), \(\nu(\cdot)\), and \(\mathsf{R}_{\mathrm{Gene}}\)[70]. The following lemma quantifies the upper bound of \(\mathsf{R}_{\mathrm{Gene}}(\hat{h}_{Q})\) whose proof is given in SM E. **Lemma 3**.: _Suppose the measure operator is bounded by \(C_{2}\) with \(\max_{k\in[K]}\|o^{(k)}\|\leq C_{2}\). Define \(\epsilon\) as the tolerable error. Following notations in Definition 1, the empirical QC is \((K(28N_{ge}/\epsilon)^{4nN_{ge}},4L_{1}KC_{2}\epsilon)\)-robust, and with probability \(1-\delta\) we have_ \[\mathsf{R}_{\mathrm{Gene}}(\hat{h}_{Q})\leq 4L_{1}KC_{2}\epsilon+5\xi(\hat{h}_{Q}) \sqrt{\frac{|\mathcal{T}_{\mathcal{D}}|4^{m}N_{ge}\ln\frac{56KN_{ge}}{\epsilon \delta}}{n}},\] _where \(L_{1}\) is the Lipschitz constant of \(\ell\) with respect to \(h_{Q}\), \(\mathcal{T}_{\mathcal{D}}^{\mathcal{D}}=\{i\in[n]:\mathbf{z}^{(i)}\in\mathcal{C}_{r}\}\), \(\xi(\hat{h}):=\max_{\mathbf{z}\in\mathcal{Z}}(\xi(\hat{h},\mathbf{z}))\), and \(\mathcal{T}_{\mathcal{D}}:=\{r\in[R]:|\mathcal{T}_{r}^{\mathcal{D}}|\geq 1\}\)._ The achieved results convey threefold insights. 
First, our bound does not explicitly depend on the number of trainable parameters [96]. This unlocks a new way to understand the generalization ability of QCs, especially for the over-parameterized ones. Next, our bound hints that a carefully designed \(U_{E}\) can enhance performance of QCs [97, 53]. Last, \(\mathsf{R}_{\mathrm{Gene}}(\hat{h}_{Q})\to 0\) requires \(n\gg|\mathcal{T}_{\mathcal{D}}|4^{m}N_{ge}\). Fortunately, a reasonable value of \(n\) is sufficient to warrant this condition, because in general \(m\leq 2\), \(N_{ge}\propto|\mathbf{x}|\), and \(|\mathcal{T}_{\mathcal{D}}|\) is continuously decreased from \(n\) to \(K\) with respect to the reduced empirical loss. ## III Numerical simulations We conduct numerical simulations to exhibit that the advantages and limitations of QCs on different classification tasks can be interpreted by the derived risk curve and feature states. The omitted construction details and results are deferred to SM G. We first apply QC to accomplish the binary classification on the parity dataset [98, 99, 100]. The number of qubits is \(N=6\) and the hardware-efficient Ansatz is adopted to realize \(U(\mathbf{\theta})\). The gradient descent method is used as the classical optimizer. Two measure operators are \(o^{(1)}=\left|0\right\rangle\left\langle 0\right|\) and \(o^{(2)}=\left|1\right\rangle\left\langle 1\right|\). The simulation results of QC with \(N_{t}=54\) are displayed in Fig. 2(a). Particularly, the averaged train (test) accuracy steadily grows from \(44.1\%\) to \(100\%\) within \(22\) epochs, and the corresponding loss decreases from \(0.26\) to \(4\times 10^{-5}\). The dynamics of the feature states \(\{\rho^{(i,t,\dagger)}\}\) with \(t\in\{0,10,20,30,40\}\) visualized by Bloch spheres echo with Lemma 2. Besides, QC becomes more robust when we continue the training. Although the train (test) accuracy reaches the optimum, the loss can be further reduced and suggests a lower risk warranted by Theorem 1. We further compare the risk curve between QC and multilayer Perceptron (MLP) on this dataset. We fit their risk curves following the proposed method to probe potential quantum merits. As shown in Fig. 2(b), QC clearly outperforms MLP when the trainable parameters ranges from \(20\) to \(140\) and the valley of the risk curve is around \(N_{t}=70\)[101]. We then apply QC to learn the Fashion-MNIST image dataset with \(K=9\)[102]. The employed number of qubits is \(N=10\) and the Pauli-based measure operators are employed. Convolutional neural networks (CNNs) are exploited as the reference. For all classifiers, the number of epochs is fixed to be \(T=50\) and the number of trainable parameters \(N_{t}\) ranges from \(60\) to \(9000\). Each setting is repeated with \(3\) times. As shown in Fig. 3, when the layer number is \(50\) with \(N_{t}=1500\), both the train and test accuracies of QC are about \(50\%\). This performance is inferior to CNN under the similar setting. To explore whether QC has the potential to outperform CNN on this dataset, we compare their risk curves. As shown in Fig. 3(b), unlike the parity dataset, QC is evidently inferior to CNN on Fashion-MNIST dataset. ## IV Discussions and Outlook We understand the potential of diverse QCs in terms of the expected risk. Our theoretical findings demonstrate that the efficacy of QCs is dependent on the problem at hand, which explains the empirical evidence of their superiority on synthetic and quantum datasets, yet inferiority on realistic tasks. 
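For readers who wish to reproduce the flavor of the parity experiment above, the following PennyLane sketch builds a six-qubit classifier in the same spirit: angle encoding of the parity bit-string, a hardware-efficient ansatz with three layers (giving \(N_{t}=3\times 6\times 3=54\) trainable angles), and readout probabilities of the first qubit, which correspond to the measure operators \(o^{(1)}=\left|0\right\rangle\left\langle 0\right|\) and \(o^{(2)}=\left|1\right\rangle\left\langle 1\right|\). The exact circuit layout and training loop used in the paper are given in its supplement; this is only a plausible stand-in.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 6, 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qc_output(bits, weights):
    # angle encoding of a parity bit-string x in {0, 1}^6
    for w in range(n_qubits):
        qml.RY(np.pi * bits[w], wires=w)
    # hardware-efficient ansatz: single-qubit rotations plus a ladder of CNOTs per layer
    for layer in weights:                  # weights has shape (n_layers, n_qubits, 3)
        for w in range(n_qubits):
            qml.Rot(*layer[w], wires=w)
        for w in range(n_qubits - 1):
            qml.CNOT(wires=[w, w + 1])
    # probabilities of the readout qubit: Tr(rho o^(1)) and Tr(rho o^(2))
    return qml.probs(wires=0)

weights = 0.01 * np.random.randn(n_layers, n_qubits, 3)
print(qc_output(np.array([0, 1, 1, 0, 1, 0]), weights))
```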
With the clear difference between the risk curve of QCs and deep neural classifiers, we present a concise technique to investigate potential quantum benefits by fitting their loss dynamics. Numerical results validate our theoretical results and the effectiveness of our method. There are several interesting future research directions. The U-shape curve of QCs poses two open questions. First, can contemporary QCs attain quantum benefits on certain classical data when only limited data and restricted computing resources are available? Secondly, is it necessary to redesign QCs such as nonlinear QCs [103, 104] that can also exhibit a double-descent risk curve? Besides, the unearthed connection between the conditions towards optimal empirical risk and quantum state discrimination opens a new research avenue that amplifies the potential of QCs on quantum data aided by quantum information theory. Finally, it is intriguing to extend the developed non-vacuous generalization error bound of QCs to other scenarios, such as out-of-distribution data, in order to identify potential quantum advantages. ###### Acknowledgements. The authors thank Xinbiao Wang for valuable input and inspiring discussions. Figure 3: **Multi-class classification on the image dataset with \(K=9\).** (a) The learning performance of QC when the layer number is \(50\). (b) The fitted risk curve of QC and CNN. All labels have the same meaning with those used in Fig. 2. Figure 2: **Binary classification on the parity dataset.** (a) The learning performance of QC when the layer number is \(3\). The \(x\)-axis denotes the epoch numbers. Shaded region represents variance. The Bloch spheres display the quantum feature states at different epochs. (b) The fitted risk curve of QC and MLP. The \(x\)-axis denotes the number of trainable parameters. The label ‘_QC-risk_’ (‘_MLP-risk_’) refers to the fitted risk curve of QC and MLP. The label ‘_QC-res_’ (‘_MLP-res_’) refers to the collected results used for fitting the curves.
2307.16365
Consumption and portfolio optimization solvable problems with recursive preferences
This paper considers consumption and portfolio optimization problems with recursive preferences in both infinite and finite time regions. Specifically, the financial market consists of a risk-free asset and a risky asset that follows a general stochastic volatility process. By using Bellman's dynamic programming principle, the Hamilton-Jacobi-Bellman (HJB) equation is derived for characterizing the optimal consumption-investment strategy and the corresponding value function. Based on the conjecture of the exponential-polynomial form of the value function, we prove that, when the order of the polynomial $n\leq2$, the HJB equation has an analytical solution if the investor has unit elasticity of intertemporal substitution (EIS) and an approximate solution otherwise.
Jian-hao Kang, Zhun Gou, Nan-jing Huang
2023-07-31T01:59:02Z
http://arxiv.org/abs/2307.16365v1
# Consumption and portfolio optimization solvable problems with recursive preferences

###### Abstract

This paper considers consumption and portfolio optimization problems with recursive preferences in both infinite and finite time regions. Specifically, the financial market consists of a risk-free asset and a risky asset that follows a general stochastic volatility process. By using Bellman's dynamic programming principle, the Hamilton-Jacobi-Bellman (HJB) equation is derived for characterizing the optimal consumption-investment strategy and the corresponding value function. Based on the conjecture of the exponential-polynomial form of the value function, we prove that, when the order of the polynomial \(n\leq 2\), the HJB equation has an analytical solution if the investor has unit elasticity of intertemporal substitution (EIS) and an approximate solution otherwise.

**Keywords**: Stochastic volatility; consumption and investment; recursive utility; HJB equation.

**AMS Subject Classification:** 93E20, 91G10, 91G80, 60H30.

## 1 Introduction

Since the seminal work by Merton [16], investment problems under the expected utility maximization criterion have received much attention from researchers. However, the assumption of constant volatility for risky assets was adopted in this pioneering work. Numerous empirical studies have documented that stochastic volatility models capture actual market behavior better than constant volatility models, for instance the phenomena of volatility clustering and the heavy-tailedness of return distributions [4, 7]. Recently, Cheng and Escobar-Anel [3] revealed the largest class of stochastic volatility processes solvable in closed form within expected utility theory for a hyperbolic absolute risk aversion investor. Although the investment strategies under expected utility theory are very appealing, consumption is essential in people's daily lives, and the decisions made for consumption naturally affect the optimality of the investment strategies [8]. Therefore, it is desirable to investigate investment problems under stochastic volatility models while taking consumption into consideration [9, 14, 19].

On the other hand, in the category of utility functions, the recursive utility that can separate the relative risk aversion from the EIS has been one of the popular choices for capturing the investor's consumption and investment preferences [6, 13]. As a continuous-time limit of recursive utility, the stochastic differential utility (SDU) introduced by Duffie and Epstein [5] has received increasing attention. For example, Schroder and Skiadas [17] developed the utility gradient (or martingale) approach to portfolio and consumption problems with SDU in a finite time region. Chacko and Viceira [2] applied Bellman's dynamic programming principle to study optimal portfolio choice and consumption under stochastic volatility with SDU in an infinite time region. Kraft et al. [12] analyzed continuous-time optimal consumption and investment with Epstein-Zin recursive preferences in incomplete markets and a finite time region. It is worth mentioning that the utility gradient approach and the dynamic programming approach are the two main methods for solving optimal consumption-investment problems with SDU. In particular, solving the optimal consumption-investment problems with SDU by the dynamic programming approach reduces to solving the corresponding HJB equations, which may not be tractable under stochastic volatility models.
The present paper is thus devoted to studying consumption and portfolio optimization problems with SDU in a financial market governed by a general stochastic volatility model, within both infinite and finite time regions, and to providing a solvability analysis of these problems by the dynamic programming approach. The main contributions of this paper are threefold. First, a more general model setting is established, which seems more realistic and includes some models in the aforementioned literature [2, 5] as special cases. For instance, speaking of strategies, this paper considers the optimal consumption-investment problem, while [3] only studies the optimal investment problem. Speaking of the stochastic volatility and the investment interval, we adopt a general stochastic volatility model within both infinite and finite investment regions, while [2] uses a specific stochastic volatility model only within an infinite investment region. Second, the method of changing the control variable is used, which helps reveal solvable models with more complex expressions for the drift and diffusion terms of the risky asset return and, at the same time, makes it convenient to carry out the solvability analysis of the HJB equations under the general stochastic volatility model. Third, some new results on the optimal consumption-investment problem with SDU are obtained. With the conjecture of the exponential-polynomial form of the value function, we verify that, when the order of the polynomial \(n\leq 2\), the HJB equation has an analytical solution if the investor has unit EIS and an approximate solution otherwise. We also prove that the conjecture does not work in solving the HJB equation when the order of the polynomial \(n\geq 3\).

The rest of this paper is organized as follows. Section 2 introduces the financial market and related assumptions. Section 3 formulates the consumption and portfolio problems with recursive preferences. Section 4 provides the solvability analysis of the consumption and portfolio problems. Section 5 gives some conclusions, and the appendices contain the proofs.

## 2 Preliminaries and assumptions

Let \((\Omega,\mathcal{F},\mathbb{F},\mathbb{P})\) be a complete filtered probability space equipped with a natural filtration \(\mathbb{F}=\{\mathcal{F}_{t}\}_{t\geq 0}\) satisfying the usual conditions and a physical probability measure \(\mathbb{P}\), where \(\mathbb{F}\) is generated by Brownian motions \(W_{t}^{S}\) and \(W_{t}^{\nu}\) that will be defined later. We consider a frictionless financial market consisting of a risk-free asset and a risky asset that can be traded continuously. Specifically, the price \(M_{t}\) of the risk-free asset follows

\[\mathrm{d}M_{t}=rM_{t}\mathrm{d}t,\quad M_{0}=1, \tag{2.1}\]

where \(r\) is a positive constant that denotes the risk-free interest rate. Moreover, inspired by the studies [3, 12], we assume that the price \(S_{t}\) of the risky asset satisfies the following general model

\[\frac{\mathrm{d}S_{t}}{S_{t}}=\left[r+\eta(\nu_{t})G(\nu_{t},t)\right]\mathrm{d}t+G(\nu_{t},t)\mathrm{d}W_{t}^{S},\quad S_{0}=s_{0}>0 \tag{2.2}\]

with a state variable \(\nu_{t}\) evolving as follows

\[\mathrm{d}\nu_{t}=m_{1}(\nu_{t})\mathrm{d}t+m_{2}(\nu_{t})\mathrm{d}W_{t}^{\nu},\quad\nu_{0}=\overline{\nu}>0, \tag{2.3}\]

where \(\eta(\nu_{t})\) represents the market price of risk, \(G(\nu_{t},t)\) denotes the volatility of \(S_{t}\), and \(m_{1}(\nu_{t})\) and \(m_{2}(\nu_{t})\) are the drift and volatility of \(\nu_{t}\).
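Before stating the technical assumptions, it may help to see the market (2.1)-(2.3) simulated directly. The following is a minimal Euler-Maruyama sketch; the correlation \(\rho\) between \(W_{t}^{S}\) and \(W_{t}^{\nu}\) anticipates the decomposition \(\mathrm{d}W_{t}^{S}=\rho\mathrm{d}W_{t}^{\nu}+\sqrt{1-\rho^{2}}\mathrm{d}W_{t}^{\perp}\) given in the next paragraph, and the Heston-type coefficient functions in the usage example are an illustrative choice rather than a specification taken from this paper.

```python
import numpy as np

def simulate_market(eta, G, m1, m2, rho, r, s0, nu0, T, n_steps, seed=0):
    """Euler-Maruyama discretization of the market (2.1)-(2.3).
    eta, G, m1, m2 are callables: market price of risk, volatility of S,
    and drift/volatility of the state variable nu."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    M = np.empty(n_steps + 1)
    S = np.empty(n_steps + 1)
    nu = np.empty(n_steps + 1)
    M[0], S[0], nu[0] = 1.0, s0, nu0
    for i in range(n_steps):
        t = i * dt
        dW_nu = np.sqrt(dt) * rng.standard_normal()
        dW_perp = np.sqrt(dt) * rng.standard_normal()
        dW_S = rho * dW_nu + np.sqrt(1.0 - rho**2) * dW_perp
        g = G(nu[i], t)
        M[i + 1] = M[i] * np.exp(r * dt)
        S[i + 1] = S[i] * (1.0 + (r + eta(nu[i]) * g) * dt + g * dW_S)
        nu[i + 1] = nu[i] + m1(nu[i]) * dt + m2(nu[i]) * dW_nu
    return M, S, nu

# illustrative Heston-type specification (an assumption for this example only)
M, S, nu = simulate_market(eta=lambda v: 0.5 * np.sqrt(max(v, 0.0)),
                           G=lambda v, t: np.sqrt(max(v, 0.0)),
                           m1=lambda v: 2.0 * (0.04 - v),
                           m2=lambda v: 0.3 * np.sqrt(max(v, 0.0)),
                           rho=-0.5, r=0.02, s0=1.0, nu0=0.04, T=1.0, n_steps=252)
```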
The correlation between \(S_{t}\) and \(\nu_{t}\) can be captured by \(W_{t}^{S}\) and \(W_{t}^{\nu}\) via the parameter \(\rho\in[-1,1]\). Therefore, we can write \(\mathrm{d}W_{t}^{S}=\rho\mathrm{d}W_{t}^{\nu}+\sqrt{1-\rho^{2}}\mathrm{d}W_{t} ^{\perp}\), where \(W_{t}^{\perp}\) is another standard Brownian motion independent of \(W_{t}^{\nu}\). Following the work [3], we now make the following assumptions: * All the coefficients of equations (2.2) and (2.3) are progressively measurable with respect to \(\{\mathcal{F}_{t}\}_{t\geq 0}\). * To guarantee the uniqueness of the solution to (2.2), we assume (see, for example, [10, 11]) \[\int_{0}^{\infty}\left(|\eta(\nu_{t})G(\nu_{t},t)|+G^{2}(\nu_{t},t)\right) \mathrm{d}t<\infty\quad a.s.\] (2.4) * To obtain the existence of the solution to (2.3), the growth condition on the coefficients of (2.3) holds. That is, \[m_{1}^{2}(x_{0})+m_{2}^{2}(x_{0})\leq K(1+x_{0}^{2})\] (2.5) for \(x_{0}\in\mathbb{R}\) and \(|x_{0}|\leq K_{1}\) with positive constants \(K\) and \(K_{1}\). * To ensure the uniqueness of the solution to (2.3), the Yamada-Watanabe condition (see, for example, Theorem 4 in [18]) holds. That is, there exist real-valued functions \(\widetilde{f}(x)\) and \(g(x)\) defined on \([0,K_{2})\) with \(K_{2}>0\) such that \[|m_{1}(x_{0})-m_{1}(y_{0})| \leq\widetilde{f}(|x_{0}-y_{0}|),\] \[|m_{2}(x_{0})-m_{2}(y_{0})| \leq g(|x_{0}-y_{0}|)\] (2.6) for all \(x_{0},y_{0}\in\mathbb{R}\) with \(|x_{0}-y_{0}|<K_{2}\), where \(\widetilde{f}\) and \(g\) are continuous, positive and increasing with \(\widetilde{f}(0)=g(0)=0\), and \(\widetilde{f}(x)\) and \(g^{2}(x)x^{-1}\) are concave and satisfy \[\int_{0+}\left[\widetilde{f}(x)+g^{2}(x)x^{-1}\right]^{-1}\mathrm{d}x=\infty.\] (2.7) ## 3 Problem formulation In this section, we consider a rational investor who can invest in the aforementioned financial market. We denote by \(\pi_{t}\) the proportion of wealth invested in the risky asset and \((1-\pi_{t})\) the remaining proportion of wealth invested in the risk-free asset. Moreover, the investor can consume at an instantaneous rate \(c_{t}\) at time \(t\). Therefore, given the initial wealth value \(X_{0}\), the wealth dynamics of the investor is described by the following stochastic differential equation (SDE) \[\mathrm{d}X_{t}=[r+\pi_{t}\eta(\nu_{t})G(\nu_{t},t)]\,X_{t}\mathrm{d}t-c_{t} \mathrm{d}t+\pi_{t}G(\nu_{t},t)X_{t}\mathrm{d}W_{t}^{S}. \tag{3.8}\] Now, we assume that the preferences of the investor are captured by recursive utility functions, which are also known as stochastic differential utilities in continuous time. Due to [2, 5], the investor's preference can be given by \[J_{t}=\mathbb{E}_{t}\left[\int_{t}^{\infty}f(c_{s},J_{s})\mathrm{d}s\right]. \tag{3.9}\] Here, \(\mathbb{E}_{t}\) represents the \(\mathcal{F}_{t}\)-conditional expectation with respect to the measure \(\mathbb{P}\) and \(f(c,J)\) denotes the normalized aggregator of consumption and continuation value that takes the form \[f(c,J)=\beta(1-\frac{1}{\phi})^{-1}(1-\gamma)J\left[\left(\frac{c}{((1-\gamma )J)^{\frac{1}{1-\gamma}}}\right)^{1-\frac{1}{\phi}}-1\right], \tag{3.10}\] where \(\beta>0\) denotes the investor's discount rate, \(\gamma>0\), \(\gamma\neq 1\) and \(\phi>0\), \(\phi\neq 1\), and \(\gamma\) represents the relative risk aversion coefficient and \(\phi\) is the EIS parameter. When \(\phi=1\), the aggregator \(f(c,J)\) takes the form \[f(c,J)=\beta(1-\gamma)J\left[\ln c-\frac{1}{1-\gamma}\ln((1-\gamma)J)\right]. 
\tag{3.11}\] If \(\phi=\frac{1}{\gamma}\), \(f\) takes the form of the power utility, and further takes the form of the logarithmic utility if \(\phi=\gamma=1\). Next, given the wealth dynamics (3.8) and the recursive preference (3.10) or (3.11), the investor aims to choose a consumption-investment strategy \((c_{t},\pi_{t})\) to maximize the recursive utility \[\sup_{(c_{t},\pi_{t})\in\widetilde{\Pi}_{1}}\,\mathbb{E}_{t}\left[\int_{t}^{ \infty}f(c_{s},J_{s})\mathrm{d}s\right], \tag{3.12}\] where \(\widetilde{\Pi}_{1}\) denotes the set of admissible strategies as defined below. **Definition 3.1**.: _A consumption-investment strategy \((c_{t},\pi_{t})\) of (3.12) is admissible if the following conditions are satisfied:_ 1. \(c_{t}\) _and_ \(\pi_{t}\) _are_ \(\mathcal{F}_{t}\)_-progressively measurable processes and_ \(c_{t}\geq 0\)_;_ 2. _for any initial value_ \((t,x,\nu)\in[0,\infty)\times\mathbb{R}^{+}\times\mathbb{R}\)_, the SDE (_3.8_) for_ \(\{X_{s}\}_{s\geq t}\) _with_ \(X_{t}=x\) _and_ \(\nu_{t}=\nu\) _admits a pathwise unique positive solution;_ 3. _the necessary integrability conditions for the conditional expectation_ \(\mathbb{E}_{t}\) _in (_3.12_) to be well-defined hold._ For the sake of convenience, we denote a new control variable by \[\psi_{t}=\pi_{t}G(\nu_{t},t) \tag{3.13}\] and the corresponding admissible set by \(\Pi_{1}\). Therefore, the wealth process (3.8) and problem (3.12) in term of the new control variable can be rewritten as \[\mathrm{d}X_{t}=\left[r+\eta(\nu_{t})\psi_{t}\right]X_{t}\mathrm{d}t-c_{t} \mathrm{d}t+\psi_{t}X_{t}\mathrm{d}W_{t}^{S} \tag{3.14}\] and \[\sup_{(c_{t},\psi_{t})\in\Pi_{1}}\,\mathbb{E}_{t}\left[\int_{t}^{\infty}f(c_{ s},J_{s})\mathrm{d}s\right], \tag{3.15}\] respectively. It seems that the equation (3.14) under \(\psi_{t}\) looks simpler than (3.8). In addition, if the problem (3.15) is solved in term of \((c_{t},\psi_{t})\), then we can obtain the solution to the original problem (3.12). When the investment period is a finite interval, we follow [12] to rewrite the investor's preference in (3.9) as \[J_{t}=\mathbb{E}_{t}\left[\int_{t}^{T}f(c_{s},J_{s})\mathrm{d}s+U(X_{T}) \right], \tag{3.16}\] where \(T>0\) denotes the terminal of investment interval, \(U:(0,\infty)\to\mathbb{R}\) with \(U(x)=\varepsilon^{1-\gamma}\frac{x^{1-\gamma}}{1-\gamma}\) is a constant relative risk aversion utility function for bequest, and where \(\varepsilon\in(0,\infty)\) is a weight factor. Then the investor wants to adopt the control variable \((c_{t},\psi_{t})\) to maximize the preference (3.16) as follows \[\sup_{(c_{t},\psi_{t})\in\Pi_{2}}\,\mathbb{E}_{t}\left[\int_{t}^{T}f(c_{s},J_ {s})\mathrm{d}s+U(X_{T})\right]. \tag{3.17}\] Here, \(\Pi_{2}\) denotes the set of admissible strategies, which can be defined similarly as Definition 3.1 by replacing \(\pi_{t}\) with \(\psi_{t}\) and by changing the conditional expectation \(\mathbb{E}_{t}\) in (3.12) with the conditional expectation \(\mathbb{E}_{t}\) in (3.17). Optimal consumption and investment solvable problems In this section, we will adopt Bellman's dynamic programming principle to solve problems (3.15) and (3.17), respectively. ### The case of infinite time-horizon First of all, we consider the case of the infinite time interval. 
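Before deriving the corresponding HJB equation, the following is a small numerical sketch of the normalized aggregator (3.10)-(3.11) introduced above; it assumes \((1-\gamma)J>0\) and \(c>0\), so that the power and logarithm are well defined, and treats the unit-EIS case separately.

```python
import numpy as np

def aggregator(c, J, beta, gamma, phi):
    """Normalized Epstein-Zin aggregator f(c, J) of Eqs. (3.10)-(3.11).
    Assumes (1 - gamma) * J > 0 and c > 0."""
    if np.isclose(phi, 1.0):
        # unit-EIS form (3.11)
        return beta * (1.0 - gamma) * J * (
            np.log(c) - np.log((1.0 - gamma) * J) / (1.0 - gamma))
    # general form (3.10)
    theta = 1.0 - 1.0 / phi
    scaled_c = c / ((1.0 - gamma) * J) ** (1.0 / (1.0 - gamma))
    return (beta / theta) * (1.0 - gamma) * J * (scaled_c ** theta - 1.0)
```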
When the investor has unit EIS, which implies the aggregator \(f(c,J)\) takes the form (3.11) in the problem (3.15), it follows from [2] that the corresponding HJB equation can be given by \[\sup_{c\in(0,\infty),\psi\in\mathbb{R}} \left\{\left[r+\psi\eta(\nu)\right]x\omega_{x}-c\omega_{x}+\frac{ 1}{2}\psi^{2}x^{2}\omega_{xx}+m_{1}(\nu)\omega_{\nu}+\frac{1}{2}m_{2}^{2}(\nu) \omega_{\nu\nu}\right. \tag{4.18}\] \[\left.+x\psi\rho m_{2}(\nu)\omega_{x\nu}+\beta(1-\gamma)\omega \left[\ln c-\frac{1}{1-\gamma}\ln((1-\gamma)\omega)\right]\right\}=0,\] where \(\omega\) denotes the value function of the problem (3.15) when taking the form (3.11), and \(\omega_{x}\), \(\omega_{\nu}\), \(\omega_{xx}\), \(\omega_{\nu\nu}\) and \(\omega_{x\nu}\) represent the first and second partial derivatives of \(\omega\) with respect to \(x\) and \(\nu\). By differentiating the expression inside the bracket of (4.18) with respect to \(c\) and \(\psi\), respectively, one can obtain \[\begin{cases}c_{t}^{*}=\frac{\beta(1-\gamma)\omega}{\omega_{x}}, \\ \psi_{t}^{*}=-\frac{\eta(\nu)\omega_{x}+\rho m_{2}(\nu)\omega_{x\nu}}{x \omega_{xx}}.\end{cases} \tag{4.19}\] We make an ansatz that \(\omega(x,\nu)=\frac{x^{1-\gamma}}{1-\gamma}h(\nu)^{1-\gamma}\) for some deterministic function \(h(\nu)\). Substituting (4.19) and the ansatz for \(\omega(x,\nu)\) into (4.18) yields the following partial differential equation (PDE) \[r -\beta+\frac{1}{2\gamma}\eta^{2}(\nu)+\beta\left(\ln\beta-\ln h \right)+\left[m_{1}(\nu)+\frac{\eta(\nu)}{\gamma}\rho m_{2}(\nu)(1-\gamma) \right]\frac{h_{\nu}}{h} \tag{4.20}\] \[+\frac{1}{2}m_{2}^{2}(\nu)\left[\frac{\rho^{2}(1-\gamma)^{2}}{ \gamma}-\gamma\right]\frac{h_{\nu}^{2}}{h^{2}}+\frac{1}{2}m_{2}^{2}(\nu)\frac {h_{\nu\nu}}{h}=0.\] In what follows, we will make a general ansatz for \(h\) as an exponential-polynomial form and prove that when the order of the polynomial is higher than 2, the problem (3.15) with \(f(c,J)\) given by (3.11) is unsolvable under this ansatz. **Theorem 4.1**.: _Consider the problem (3.15) when \(\phi=1\). If \(h(\nu)\) is conjectured in the exponential-polynomial form such that_ \[\omega(x,\nu)=\frac{x^{1-\gamma}}{1-\gamma}h(\nu)^{1-\gamma}=\frac{x^{1-\gamma }}{1-\gamma}\exp\left\{(1-\gamma)\left(A_{0}+\sum_{k=1}^{n}\frac{1}{k}A_{k} \nu^{k}\right)\right\}, \tag{4.21}\] _where \(A_{k}\) for all \(k=0,\cdots,n\) are constants, then the above ansatz is not useful in solving the PDE (4.20) when the order \(n\) of the polynomial is higher than 2._ Proof.: See Appendix A. Next, we consider the case of \(\phi\neq 1\). That is, the aggregator \(f(c,J)\) takes the form (3.10) in the problem (3.15). Similarly, the corresponding HJB equation can be given as follows \[\sup_{c\in(0,\infty),\psi\in\mathbb{R}} \left\{\left[r+\psi\eta(\nu)\right]x\omega_{x}-c\omega_{x}+\frac{ 1}{2}\psi^{2}x^{2}\omega_{xx}+m_{1}(\nu)\omega_{\nu}+\frac{1}{2}m_{2}^{2}(\nu) \omega_{\nu\nu}\right. 
\tag{4.22}\] \[\left.+x\psi\rho m_{2}(\nu)\omega_{x\nu}+\beta(1-\frac{1}{\phi})^ {-1}(1-\gamma)\omega\left[\left(\frac{c}{((1-\gamma)\omega)^{\frac{1}{1-\gamma }}}\right)^{1-\frac{1}{\phi}}-1\right]\right\}=0\] and the candidate optimal consumption-investment strategy satisfies \[\begin{cases}c_{t}^{*}=\left[\frac{\omega_{x}}{\beta(1-\gamma)\omega}((1-\gamma) \omega)^{\frac{1-\frac{\gamma}{2}}{1-\gamma}}\right]^{-\phi},\\ \psi_{t}^{*}=-\frac{\eta(\nu)\omega_{x}+\rho m_{2}(\nu)\omega_{x\nu}}{x\omega_ {xx}}.\end{cases} \tag{4.23}\] We make an ansatz that \(\omega(x,\nu)=\frac{x^{1-\gamma}}{1-\gamma}h(\nu)^{-\frac{1-\gamma}{1-\phi}}\) for some deterministic function \(h(\nu)\). By substituting (4.23) and the above ansatz for \(\omega(x,\nu)\) into (4.22), we can derive that \[r -\beta^{\phi}h^{-1}+\frac{1}{2\gamma}\eta^{2}(\nu)+\frac{\beta \phi}{\phi-1}\left(\beta^{\phi-1}h^{-1}-1\right)-\frac{1}{1-\phi}\left[m_{1}( \nu)+\frac{\eta(\nu)}{\gamma}\rho m_{2}(\nu)(1-\gamma)\right]\frac{h_{\nu}}{h}\] \[+\frac{1}{2(1-\phi)^{2}}m_{2}^{2}(\nu)\left[2-\phi-\gamma+\frac{ \rho^{2}(1-\gamma)^{2}}{\gamma}\right]\frac{h_{\nu}^{2}}{h^{2}}-\frac{1}{2(1- \phi)}m_{2}^{2}(\nu)\frac{h_{\nu\nu}}{h}=0. \tag{4.24}\] Since the equation (4.24) involves the term \(\beta^{\phi}h^{-1}\), it usually does not admit an exact analytical solution. In the following, we will adopt the log-linear approximation method used in [1] to obtain an approximate solution. Inserting the conjecture of \(\omega(x,\nu)\) into (4.23) yields \(c_{t}^{*}=\beta^{\phi}h^{-1}x\), which shows that \[\beta^{\phi}h^{-1}=\frac{c_{t}^{*}}{x}=\exp\{\ln c_{t}^{*}-\ln x\}:=\exp\{ \overline{c}_{t}^{*}-\overline{x}\},\] where \(\overline{c}_{t}^{*}=\ln c_{t}^{*}\) and \(\overline{x}=\ln x\). By the first-order Taylor expansion of \(\beta^{\phi}h^{-1}\) around the expected value of the log consumption-wealth ratio \(\mathbb{E}(\overline{c}_{t}^{*}-\overline{x})\), one has \[\beta^{\phi}h^{-1}\approx \exp\{\mathbb{E}(\overline{c}_{t}^{*}-\overline{x})\}+\exp\{ \mathbb{E}(\overline{c}_{t}^{*}-\overline{x})\}\left[\overline{c}_{t}^{*}- \overline{x}-\mathbb{E}(\overline{c}_{t}^{*}-\overline{x})\right]=\zeta_{1}+ \zeta_{2}(\overline{c}_{t}^{*}-\overline{x}). \tag{4.25}\] Here, \(\zeta_{1}=\exp\{\mathbb{E}(\overline{c}_{t}^{*}-\overline{x})\}\left[1- \mathbb{E}(\overline{c}_{t}^{*}-\overline{x})\right]\) and \(\zeta_{2}=\exp\{\mathbb{E}(\overline{c}_{t}^{*}-\overline{x})\}\). Moreover, one can obtain that \[\beta^{\phi}h^{-1}\approx\zeta_{1}+\zeta_{2}(\ln c_{t}^{*}-\ln x)=\zeta_{1}+ \zeta_{2}(\phi\ln\beta-\ln h). \tag{4.26}\] By making a general ansatz for \(h\) as an exponential-polynomial form, we will show that when the order of the polynomial is higher than 2, the problem (3.15) with \(f(c,J)\) given by (3.10) is approximately unsolvable under this ansatz. **Theorem 4.2**.: _Consider the problem (3.15) when \(\phi\neq 1\). If \(h(\nu)\) is conjectured in the exponential-polynomial form such that_ \[\omega(x,\nu)=\frac{x^{1-\gamma}}{1-\gamma}h(\nu)^{-\frac{1-\gamma}{1-\phi}}= \frac{x^{1-\gamma}}{1-\gamma}\exp\left\{-\frac{1-\gamma}{1-\phi}\left(A_{0}+ \sum_{k=1}^{n}\frac{1}{k}A_{k}\nu^{k}\right)\right\}, \tag{4.27}\] _where \(A_{k}\) for all \(k=0,\cdot\cdot\cdot,n\) are constants, then the above ansatz is not useful in providing an approximate solution to the PDE (4.24) when the order \(n\) of the polynomial is higher than 2._ Proof.: See Appendix B. ### The case of finite time-horizon Now, we will concentrate on the case of the finite time interval. 
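Before doing so, note that the log-linear approximation (4.25)-(4.26) used above is easy to examine numerically. The following sketch evaluates the exact consumption-wealth ratio \(\beta^{\phi}h^{-1}\) and its log-linearization on a user-supplied grid of \(h\) values; the numbers in the usage example are illustrative only.

```python
import numpy as np

def loglinear_consumption_wealth(h_vals, beta, phi):
    """Log-linearization (4.25)-(4.26): the exact consumption-wealth ratio
    beta^phi / h is approximated by zeta_1 + zeta_2 * (phi*log(beta) - log(h)),
    expanding around the mean of the log consumption-wealth ratio."""
    log_ratio = phi * np.log(beta) - np.log(h_vals)   # log(c*/x) under the ansatz
    zeta2 = np.exp(np.mean(log_ratio))
    zeta1 = zeta2 * (1.0 - np.mean(log_ratio))
    exact = beta ** phi / h_vals
    approx = zeta1 + zeta2 * log_ratio
    return exact, approx, (zeta1, zeta2)

# example: maximum approximation error over an illustrative band of h values
h_vals = np.linspace(8.0, 12.0, 5)
exact, approx, _ = loglinear_consumption_wealth(h_vals, beta=0.05, phi=1.5)
print(np.max(np.abs(exact - approx)))
```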
If the investor has unit EIS, which means the aggregator \(f(c,J)\) takes the form (3.11) in the problem (3.17), then it follows from [12] that the corresponding HJB equation satisfies \[\begin{split}\sup_{c\in(0,\infty),\psi\in\mathbb{R}}& \left\{\omega_{t}+\left[r+\psi\eta(\nu)\right]x\omega_{x}-c\omega_{x}+ \frac{1}{2}\psi^{2}x^{2}\omega_{xx}+m_{1}(\nu)\omega_{\nu}+\frac{1}{2}m_{2}^{2} (\nu)\omega_{\nu\nu}\\ &\quad+x\psi\rho m_{2}(\nu)\omega_{x\nu}+\beta(1-\gamma)\omega \left[\ln c-\frac{1}{1-\gamma}\ln((1-\gamma)\omega)\right]\right\}=0\end{split} \tag{4.28}\] with the boundary condition \(\omega(T,x,\nu)=\varepsilon^{1-\gamma}\frac{x^{1-\gamma}}{1-\gamma}\). Solving the maximization problem (4.28) with respect to \(c\) and \(\psi\), we have the following candidate optimal consumption-investment strategy \[\begin{cases}c_{t}^{*}=\frac{\beta(1-\gamma)\omega}{\omega_{x}},\\ \psi_{t}^{*}=-\frac{\eta(\nu)\omega_{x}+\rho m_{2}(\nu)\omega_{x\nu}}{x \omega_{xx}}.\end{cases} \tag{4.29}\] Moreover, we conjecture a solution of the form \(\omega(t,x,\nu)=\frac{x^{1-\gamma}}{1-\gamma}h(t,\nu)^{1-\gamma}\) for some deterministic function \(h(t,\nu)\). Substituting (4.29) and the ansatz for \(\omega(t,x,\nu)\) into (4.28) yields the following PDE \[h_{t} +\left[r-\beta+\frac{1}{2\gamma}\eta^{2}(\nu)+\beta\left(\ln\beta -\ln h\right)\right]h+\left[m_{1}(\nu)+\frac{\eta(\nu)}{\gamma}\rho m_{2}( \nu)(1-\gamma)\right]h_{\nu}\] \[+\frac{1}{2}m_{2}^{2}(\nu)\left[\frac{\rho^{2}(1-\gamma)^{2}}{ \gamma}-\gamma\right]\frac{h_{\nu}^{2}}{h}+\frac{1}{2}m_{2}^{2}(\nu)h_{\nu\nu }=0. \tag{4.30}\] In the following, we will assume a general ansatz for \(h\) as an exponential-polynomial form and show that when the order of the polynomial is higher than 2, the problem (3.17) with \(f(c,J)\) given by (3.11) is unsolvable under this ansatz. **Theorem 4.3**.: _Consider the problem (3.17) when \(\phi=1\). If \(h(t,\nu)\) is conjectured in the exponential-polynomial form such that_ \[\omega(t,x,\nu)=\frac{x^{1-\gamma}}{1-\gamma}h(t,\nu)^{1-\gamma}=\frac{x^{1- \gamma}}{1-\gamma}\exp\left\{(1-\gamma)\left(A_{0}(T-t)+\sum_{k=1}^{n}\frac{1 }{k}A_{k}(T-t)\nu^{k}\right)\right\}, \tag{4.31}\] _where \(A_{k}(t)\) for all \(k=0,\cdot\cdot\cdot,n\) are functions of \(t\) with \(A_{0}(0)=\ln\varepsilon\) and \(A_{k}(0)=0\)\((k=1,\cdot\cdot\cdot,n)\), then the above ansatz is not useful in solving the PDE (4.30) when the order \(n\) of the polynomial is higher than 2._ Proof.: See Appendix C. Then, we study the case of \(\phi\neq 1\). That is, the aggregator \(f(c,J)\) takes the form (3.10) in the problem (3.17). 
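Before treating that case, it is worth recording what the candidate controls (4.29) reduce to once the ansatz \(\omega=\frac{x^{1-\gamma}}{1-\gamma}h(t,\nu)^{1-\gamma}\) is substituted: a direct computation gives \(c_{t}^{*}=\beta X_{t}\) and \(\psi_{t}^{*}=\frac{1}{\gamma}\left[\eta(\nu)+\rho m_{2}(\nu)(1-\gamma)h_{\nu}/h\right]\). The sketch below simply evaluates these expressions for user-supplied \(h\), \(h_{\nu}\), \(\eta\), and \(m_{2}\) (all placeholders); mapping back to the portfolio weight requires a concrete volatility function via \(\pi_{t}^{*}=\psi_{t}^{*}/G(\nu_{t},t)\).

```python
def candidate_controls_unit_eis(x, t, nu, h, h_nu, eta, m2, beta, gamma, rho):
    """Candidate controls (4.29) under the ansatz
    omega = x^(1-gamma)/(1-gamma) * h(t, nu)^(1-gamma):
        c*   = beta * x,
        psi* = (eta(nu) + rho * m2(nu) * (1-gamma) * h_nu(t, nu)/h(t, nu)) / gamma.
    h, h_nu, eta, m2 are user-supplied callables (placeholders in this sketch)."""
    c_star = beta * x
    psi_star = (eta(nu) + rho * m2(nu) * (1.0 - gamma)
                * h_nu(t, nu) / h(t, nu)) / gamma
    return c_star, psi_star
```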
Following [12], the corresponding HJB equation is \[\begin{split}\sup_{c\in(0,\infty),\psi\in\mathbb{R}}& \left\{\omega_{t}+\left[r+\psi\eta(\nu)\right]x\omega_{x}-c\omega_{x}+\frac{1 }{2}\psi^{2}x^{2}\omega_{xx}+m_{1}(\nu)\omega_{\nu}+\frac{1}{2}m_{2}^{2}(\nu) \omega_{\nu\nu}\\ +x\psi\rho m_{2}(\nu)\omega_{x\nu}+\beta(1-\frac{1}{\phi})^{-1}( 1-\gamma)\omega\left[\left(\frac{c}{((1-\gamma)\omega)^{\frac{1}{1-\gamma}}} \right)^{1-\frac{1}{\phi}}-1\right]\right\}=0.\end{split} \tag{4.32}\] Moreover, we can derive that the candidate optimal consumption-investment strategy follows \[\begin{cases}c_{t}^{*}=\left[\frac{\omega_{x}}{\beta(1-\gamma)\omega}((1- \gamma)\omega)^{\frac{1-\frac{1}{\phi}}{1-\gamma}}\right]^{-\phi},\\ \psi_{t}^{*}=-\frac{\eta(\nu)\omega_{x}+\rho m_{2}(\nu)\omega_{x\nu}}{x \omega_{xx}}.\end{cases} \tag{4.33}\] We assume \(\omega(t,x,\nu)=\frac{x^{1-\gamma}}{1-\gamma}h(t,\nu)^{-\frac{1-\gamma}{1- \phi}}\) for some deterministic function \(h(t,\nu)\). Substituting (4.33) and the above form of \(\omega(t,x,\nu)\) into (4.32) leads to \[-\frac{h_{t}}{(1-\phi)h} +r-\beta^{\phi}h^{-1}+\frac{1}{2\gamma}\eta^{2}(\nu)+\frac{\beta \phi}{\phi-1}\left(\beta^{\phi-1}h^{-1}-1\right)-\frac{1}{1-\phi}\left[m_{1}( \nu)+\frac{\eta(\nu)}{\gamma}\rho m_{2}(\nu)(1-\gamma)\right]\frac{h_{\nu}}{h}\] \[+\frac{1}{2(1-\phi)^{2}}m_{2}^{2}(\nu)\left[2-\phi-\gamma+\frac{ \rho^{2}(1-\gamma)^{2}}{\gamma}\right]\frac{h_{\nu}^{2}}{h^{2}}-\frac{1}{2(1- \phi)}m_{2}^{2}(\nu)\frac{h_{\nu\nu}}{h}=0. \tag{4.34}\] In addition, we have \(c_{t}^{*}=\beta^{\phi}h^{-1}x\). Similar to the solvability analysis of the PDE (4.24), it follows from (4.26) that (4.34) can be approximated by the following formula \[- \frac{h_{t}}{(1-\phi)h}+r-\zeta_{1}-\zeta_{2}(\phi\ln\beta-\ln h)+ \frac{1}{2\gamma}\eta^{2}(\nu)\] \[+\frac{\beta\phi}{\phi-1}\left(\beta^{-1}(\zeta_{1}+\zeta_{2}( \phi\ln\beta-\ln h))-1\right)-\frac{1}{1-\phi}\left[m_{1}(\nu)+\frac{\eta(\nu) }{\gamma}\rho m_{2}(\nu)(1-\gamma)\right]\frac{h_{\nu}}{h}\] \[+\frac{1}{2(1-\phi)^{2}}m_{2}^{2}(\nu)\left[2-\phi-\gamma+\frac{ \rho^{2}(1-\gamma)^{2}}{\gamma}\right]\frac{h_{\nu}^{2}}{h^{2}}-\frac{1}{2(1- \phi)}m_{2}^{2}(\nu)\frac{h_{\nu\nu}}{h}=0. \tag{4.35}\] By assuming that \(h\) takes a form of exponential-polynomial, we will discuss that when the order of the polynomial is higher than 2, the PDE (4.35) is unsolvable and thus the problem (3.17) with \(f(c,J)\) given by (3.10) is approximately unsolvable under this ansatz. **Theorem 4.4**.: _Consider the problem (3.17) when \(\phi\neq 1\). If \(h(t,\nu)\) is conjectured in the exponential-polynomial form such that_ \[\omega(t,x,\nu)=\frac{x^{1-\gamma}}{1-\gamma}h(t,\nu)^{-\frac{1- \gamma}{1-\phi}}=\frac{x^{1-\gamma}}{1-\gamma}\exp\left\{-\frac{1-\gamma}{1- \phi}\left(A_{0}(T-t)+\sum_{k=1}^{n}\frac{1}{k}A_{k}(T-t)\nu^{k}\right)\right\}, \tag{4.36}\] _where \(A_{k}(t)\) for all \(k=0,\cdots,n\) are functions of \(t\) with \(A_{0}(0)=(\phi-1)\ln\varepsilon\) and \(A_{k}(0)=0\)\((k=1,\cdots,n)\), then the above ansatz is not useful in providing an analytical solution to the PDE (4.35) when the order \(n\) of the polynomial is higher than 2._ Proof.: See Appendix D. ## 5 Conclusions This paper was devoted to investigating consumption and portfolio optimization problems under the general stochastic volatility model with recursive preferences in both infinite and finite time regions. In both cases of time regions, we further considered the investor with unit EIS and general EIS. 
By the dynamic programming approach, the optimization problems were transformed into the solvability analysis of the corresponding HJB equations. Since the HJB equations usually do not admit analytical solutions when the investor has general EIS, we turned to finding approximate solutions through the log-linear approximation method. By conjecturing an exponential-polynomial form for the value function of the optimization problems, we proved that, when the order of the polynomial satisfies \(n\leq 2\), the HJB equation admits an analytical solution if the investor has unit EIS and an approximate solution otherwise. Without further essential difficulties, our studies also apply when considering multiple risky assets and multiple state variables, as well as ambiguity aversion within the framework of [15]. It is also interesting to derive the expressions for analytical or approximate solutions under specific stochastic volatility models. We hope to present the relevant studies in the future. ## Appendix A Proof of Theorem 4.1 Proof.: We will first discuss the constant, linear and quadratic cases, and then we will prove that the cubic and higher-order cases do not work. Note that we will also point out the solvability of the PDE (4.20) in terms of \(\eta(\nu)\), \(m_{1}(\nu)\) and \(m_{2}(\nu)\). **Exponential-constant:** Assume that \(h(\nu)=\exp\{A_{0}\}\). Substituting the partial derivatives of \(h\) into (4.20) yields \[r-\beta+\frac{1}{2\gamma}\eta^{2}(\nu)+\beta(\ln\beta-A_{0})=0,\] which implies that the PDE (4.20) is solvable for a constant \(\eta\) and any feasible \(m_{1}\) and \(m_{2}\). **Exponential-linear:** Assume that \(h(\nu)=\exp\{A_{0}+A_{1}\nu\}\). Substituting the partial derivatives of \(h\) into (4.20) leads to \[r-\beta+\frac{1}{2\gamma}\eta^{2}(\nu)+\beta(\ln\beta-A_{0}-A_{1}\nu)+\left[m_{1}(\nu)+\frac{\eta(\nu)}{\gamma}\rho m_{2}(\nu)(1-\gamma)\right]A_{1}\] \[\quad+\frac{1}{2}m_{2}^{2}(\nu)\left[1-\gamma+\frac{\rho^{2}}{\gamma}(1-\gamma)^{2}\right]A_{1}^{2}=0,\] which implies that the PDE (4.20) is solvable when \(\eta^{2}(\nu)\), \(m_{1}(\nu)\), \(\eta(\nu)m_{2}(\nu)\) and \(m_{2}^{2}(\nu)\) are linear in \(\nu\). **Exponential-quadratic:** Assume that \(h(\nu)=\exp\{A_{0}+A_{1}\nu+\frac{1}{2}A_{2}\nu^{2}\}\). Similarly, we can obtain that \[r-\beta+\frac{1}{2\gamma}\eta^{2}(\nu)+\beta(\ln\beta-A_{0}-A_{1}\nu-\frac{1}{2}A_{2}\nu^{2})+\left[m_{1}(\nu)+\frac{\eta(\nu)}{\gamma}\rho m_{2}(\nu)(1-\gamma)\right](A_{1}+A_{2}\nu)\] \[\quad+\frac{1}{2}m_{2}^{2}(\nu)\left[(1-\gamma)(A_{1}+A_{2}\nu)^{2}+A_{2}\right]+\frac{\rho^{2}}{2\gamma}m_{2}^{2}(\nu)(1-\gamma)^{2}(A_{1}+A_{2}\nu)^{2}=0,\] which means that the PDE (4.20) is solvable when \(\eta^{2}(\nu)\) is quadratic in \(\nu\), \(m_{1}(\nu)\) and \(\eta(\nu)m_{2}(\nu)\) are linear in \(\nu\), and \(m_{2}^{2}(\nu)\) is a constant. **Exponential-cubic:** Assume that \(h(\nu)=\exp\{A_{0}+A_{1}\nu+\frac{1}{2}A_{2}\nu^{2}+\frac{1}{3}A_{3}\nu^{3}\}\). Inserting the partial derivatives of \(h\) into (4.20), one has \[r-\beta+\frac{1}{2\gamma}\eta^{2}(\nu)+\beta(\ln\beta-A_{0}-A_{1}\nu-\frac{1}{2}A_{2}\nu^{2}-\frac{1}{3}A_{3}\nu^{3})+\left[m_{1}(\nu)+\frac{\eta(\nu)}{\gamma}\rho m_{2}(\nu)(1-\gamma)\right](A_{1}+A_{2}\nu+A_{3}\nu^{2})\] \[\quad+\frac{1}{2}m_{2}^{2}(\nu)\left[(1-\gamma)(A_{1}+A_{2}\nu+A_{3}\nu^{2})^{2}+A_{2}+2A_{3}\nu\right]+\frac{\rho^{2}}{2\gamma}m_{2}^{2}(\nu)(1-\gamma)^{2}(A_{1}+A_{2}\nu+A_{3}\nu^{2})^{2}=0.\] (A.37) Note that the above equation has terms involving \(\nu^{4}\). 
For this equation to be solvable, one needs to cancel the terms involving \(\nu^{4}\). It implies that \[1-\gamma+\frac{\rho^{2}}{\gamma}(1-\gamma)^{2}=0\] (A.38) should be satisfied, which is equivalent to \(\rho^{2}=\frac{\gamma}{\gamma-1}\) due to \(\gamma\neq 1\). If \(\gamma>1\), then \(\rho^{2}>1\), which contradicts the fact that \(\rho\in[-1,1]\). If \(0<\gamma<1\), then \(\rho^{2}<0\), which is impossible. Therefore, the terms involving \(\nu^{4}\) in (A.37) cannot be matched. Thus, the conjecture of the exponential-cubic form of \(h\) cannot solve the problem (3.15) when \(\phi=1\). **Exponential-nth:** Assume that \(h(\nu)=\exp\{A_{0}+\sum\limits_{k=1}^{n}\frac{1}{k}A_{k}\nu^{k}\}\). By substituting the partial derivatives of \(h\) into (4.20), we have \[r-\beta+\frac{1}{2\gamma}\eta^{2}(\nu)+\beta\left(\ln\beta-A_{0 }-\sum\limits_{k=1}^{n}\frac{1}{k}A_{k}\nu^{k}\right)+\left[m_{1}(\nu)+\frac{ \eta(\nu)}{\gamma}\rho m_{2}(\nu)(1-\gamma)\right]\sum\limits_{k=1}^{n}A_{k} \nu^{k-1}\] \[\quad+\frac{1}{2}m_{2}^{2}(\nu)\left[(1-\gamma)\left(\sum\limits_ {k=1}^{n}A_{k}\nu^{k-1}\right)^{2}+\sum\limits_{k=2}^{n}(k-1)A_{k}\nu^{k-2} \right]+\frac{\rho^{2}}{2\gamma}m_{2}^{2}(\nu)(1-\gamma)^{2}\left(\sum\limits_ {k=1}^{n}A_{k}\nu^{k-1}\right)^{2}=0,\] which involves \(\nu^{n+1},\cdots,\nu^{2n-2}\) for \(n\geq 3\). These terms cannot be matched unless \(1-\gamma+\frac{\rho^{2}}{\gamma}(1-\gamma)^{2}=0\). It follows from the previous analysis that the equation (A.38) does not hold. Therefore, when \(n\geq 3\), the conjecture of the exponential-polynomial form of \(h\) cannot solve the problem (3.15) when \(\phi=1\). ## Appendix B Proof of Theorem 4.2 Proof.: Similar to the proof of Theorem 4.1, we will first study the cases of constant, linear and quadratic, and then verify that the cubic and higher order cases do not work. By the way, we will state out the solvability of approximate solutions of the PDE (4.24) in terms of \(\eta(\nu)\), \(m_{1}(\nu)\) and \(m_{2}(\nu)\). **Exponential-constant:** Assume that \(h(\nu)=\exp\{A_{0}\}\). Substituting the partial derivatives of \(h\) and (4.26) into (4.24) yields \[r-\zeta_{1}-\zeta_{2}(\phi\ln\beta-A_{0})+\frac{1}{2\gamma}\eta^{2}(\nu)+ \frac{\beta\phi}{\phi-1}\left[\frac{1}{\beta}\bigg{(}\zeta_{1}+\zeta_{2}(\phi \ln\beta-A_{0})\bigg{)}-1\right]=0,\] which implies that the PDE (4.24) can be approximately solvable for a constant \(\eta\) and any feasible \(m_{1}\) and \(m_{2}\). **Exponential-linear:** Assume that \(h(\nu)=\exp\{A_{0}+A_{1}\nu\}\). Substituting the partial derivatives of \(h\) and (4.26) into (4.24) leads to \[r -\zeta_{1}-\zeta_{2}(\phi\ln\beta-A_{0}-A_{1}\nu)+\frac{1}{2 \gamma}\eta^{2}(\nu)+\frac{\beta\phi}{\phi-1}\left[\frac{1}{\beta}\bigg{(} \zeta_{1}+\zeta_{2}(\phi\ln\beta-A_{0}-A_{1}\nu)\bigg{)}-1\right]\] \[-\frac{1}{1-\phi}\left[m_{1}(\nu)+\frac{\eta(\nu)}{\gamma}\rho m _{2}(\nu)(1-\gamma)\right]A_{1}\] \[+\frac{1}{2(1-\phi)^{2}}m_{2}^{2}(\nu)\left[2-\phi-\gamma+\frac{ \rho^{2}(1-\gamma)^{2}}{\gamma}\right]A_{1}^{2}-\frac{1}{2(1-\phi)}m_{2}^{2}( \nu)A_{1}^{2}=0,\] which implies that the PDE (4.24) can be approximately solvable when \(\eta^{2}(\nu)\), \(m_{1}(\nu)\), \(\eta(\nu)m_{2}(\nu)\) and \(m_{2}^{2}(\nu)\) are linear in \(\nu\). **Exponential-quadratic:** Assume that \(h(\nu)=\exp\{A_{0}+A_{1}\nu+\frac{1}{2}A_{2}\nu^{2}\}\). 
Similarly, one can obtain that \[r -\zeta_{1}-\zeta_{2}(\phi\ln\beta-A_{0}-A_{1}\nu-\frac{1}{2}A_{2} \nu^{2})+\frac{\beta\phi}{\phi-1}\left[\frac{1}{\beta}\bigg{(}\zeta_{1}+\zeta _{2}(\phi\ln\beta-A_{0}-A_{1}\nu-\frac{1}{2}A_{2}\nu^{2})\bigg{)}-1\right]\] \[+\frac{1}{2\gamma}\eta^{2}(\nu)-\frac{1}{1-\phi}\left[m_{1}(\nu) +\frac{\eta(\nu)}{\gamma}\rho m_{2}(\nu)(1-\gamma)\right](A_{1}+A_{2}\nu)\] \[+\frac{1}{2(1-\phi)^{2}}m_{2}^{2}(\nu)\left[2-\phi-\gamma+\frac{ \rho^{2}(1-\gamma)^{2}}{\gamma}\right](A_{1}+A_{2}\nu)^{2}-\frac{1}{2(1-\phi )}m_{2}^{2}(\nu)\left[(A_{1}+A_{2}\nu)^{2}+A_{2}\right]=0,\] which means that the PDE (4.24) can be approximately solvable when \(\eta^{2}(\nu)\) is quadratic in \(\nu\), \(m_{1}(\nu)\) and \(\eta(\nu)m_{2}(\nu)\) is linear in \(\nu\), and \(m_{2}^{2}(\nu)\) is a constant. **Exponential-cubic:** Assume that \(h(\nu)=\exp\{A_{0}+A_{1}\nu+\frac{1}{2}A_{2}\nu^{2}+\frac{1}{3}A_{3}\nu^{3}\}\). Inserting the partial derivatives of \(h\) and (4.26) into (4.24), we can derive that \[r -\zeta_{1}-\zeta_{2}\phi\ln\beta+\zeta_{2}(A_{0}+A_{1}\nu+\frac{1 }{2}A_{2}\nu^{2}+\frac{1}{3}A_{3}\nu^{3})+\frac{1}{2\gamma}\eta^{2}(\nu)\] \[+\frac{\beta\phi}{\phi-1}\left[\frac{1}{\beta}\bigg{(}\zeta_{1}+ \zeta_{2}(\phi\ln\beta-A_{0}-A_{1}\nu-\frac{1}{2}A_{2}\nu^{2}-\frac{1}{3}A_{3} \nu^{3})\bigg{)}-1\right]\] \[-\frac{1}{1-\phi}\left[m_{1}(\nu)+\frac{\eta(\nu)}{\gamma}\rho m _{2}(\nu)(1-\gamma)\right](A_{1}+A_{2}\nu+A_{3}\nu^{2})\] \[+\frac{1}{2(1-\phi)^{2}}m_{2}^{2}(\nu)\left[2-\phi-\gamma+\frac{\rho^{2}(1- \gamma)^{2}}{\gamma}\right](A_{1}+A_{2}\nu+A_{3}\nu^{2})^{2}\] \[-\frac{1}{2(1-\phi)}m_{2}^{2}(\nu)\left[(A_{1}+A_{2}\nu+A_{3}\nu^{ 2})^{2}+A_{2}+2A_{3}\nu\right]=0.\] (B.39) We can see that the above equation has terms involving \(\nu^{4}\). If this equation can be solvable, one has to cancel the terms involving \(\nu^{4}\). It implies that \[\frac{1}{(1-\phi)^{2}}\left[2-\phi-\gamma+\frac{\rho^{2}(1-\gamma)^{2}}{\gamma }\right]-\frac{1}{1-\phi}=0\] (B.40) should be satisfied. It is easy to derive that the equation (B.40) is equivalent to \(\rho^{2}=\frac{\gamma}{\gamma-1}\), which does not hold by the proof of Theorem 4.1. Therefore, the terms involving \(\nu^{4}\) in (B.39) cannot be matched. Thus, the conjecture of the exponential-cubic form of \(h\) cannot approximately solve the problem (3.15) when \(\phi\neq 1\). **Exponential-nth:** Assume that \(h(\nu)=\exp\{A_{0}+\sum\limits_{k=1}^{n}\frac{1}{k}A_{k}\nu^{k}\}\). By substituting the partial derivatives of \(h\) and (4.26) into (4.24), we have \[r-\zeta_{1}-\zeta_{2}\left(\phi\ln\beta-A_{0}-\sum\limits_{k=1} ^{n}\frac{1}{k}A_{k}\nu^{k}\right)+\frac{\beta\phi}{\phi-1}\left[\frac{1}{ \beta}\bigg{(}\zeta_{1}+\zeta_{2}(\phi\ln\beta-A_{0}-\sum\limits_{k=1}^{n} \frac{1}{k}A_{k}\nu^{k})\bigg{)}-1\right]\] \[+\frac{1}{2\gamma}\eta^{2}(\nu)-\frac{1}{1-\phi}\left[m_{1}(\nu )+\frac{\eta(\nu)}{\gamma}\rho m_{2}(\nu)(1-\gamma)\right]\sum\limits_{k=1}^{ n}A_{k}\nu^{k-1}\] \[+\frac{1}{2(1-\phi)^{2}}m_{2}^{2}(\nu)\left[2-\phi-\gamma+\frac{ \rho^{2}(1-\gamma)^{2}}{\gamma}\right]\left(\sum\limits_{k=1}^{n}A_{k}\nu^{k- 1}\right)^{2}\] \[-\frac{1}{2(1-\phi)}m_{2}^{2}(\nu)\left[\left(\sum\limits_{k=1}^{ n}A_{k}\nu^{k-1}\right)^{2}+\sum\limits_{k=2}^{n}(k-1)A_{k}\nu^{k-2}\right]=0,\] which involves \(\nu^{n+1},\cdot\cdot\cdot,\nu^{2n-2}\) for \(n\geq 3\). Since the equation (B.40) does not hold, these terms cannot be matched. 
Therefore, given \(n\geq 3\), the conjecture of the exponential-polynomial form of \(h\) cannot approximately solve the problem (3.15) when \(\phi\neq 1\). ## Appendix C Proof of Theorem 4.3 Proof.: We will first investigate the constant, linear and quadratic cases and show the solvability of the PDE (4.30) in terms of \(\eta(\nu)\), \(m_{1}(\nu)\) and \(m_{2}(\nu)\). Then we will show that the conjecture does not work in solving the PDE (4.30) under the cases of the cubic and higher order. **Exponential-constant:** Assume that \(h(t,\nu)=\exp\{A_{0}(T-t)\}\). Substituting the partial derivatives of \(h\) into (4.30) leads to \[-A_{0}^{\prime}+r-\beta+\frac{1}{2\gamma}\eta^{2}(\nu)+\beta(\ln\beta-A_{0})=0,\] where \(A_{0}^{\prime}\) denotes the derivative of \(A_{0}(t)\) with respect to \(t\). Similar expressions will be used later when there is no ambiguity. Therefore, the PDE (4.30) is solvable for a constant \(\eta\) and any feasible \(m_{1}\) and \(m_{2}\). **Exponential-linear:** Assume that \(h(t,\nu)=\exp\{A_{0}(T-t)+A_{1}(T-t)\nu\}\). Inserting the partial derivatives of \(h\) into (4.30) yields \[-A_{0}^{\prime}-A_{1}^{\prime}\nu+r-\beta+\frac{1}{2\gamma}\eta^{2}(\nu)+ \beta(\ln\beta-A_{0}-A_{1}\nu)+\left[m_{1}(\nu)+\frac{\eta(\nu)}{\gamma}\rho m _{2}(\nu)(1-\gamma)\right]A_{1}\] \[+\frac{1}{2}m_{2}^{2}(\nu)\left[1-\gamma+\frac{\rho^{2}}{\gamma}(1- \gamma)^{2}\right]A_{1}^{2}=0,\] which shows that the PDE (4.30) is solvable when \(\eta^{2}(\nu)\), \(m_{1}(\nu)\), \(\eta(\nu)m_{2}(\nu)\) and \(m_{2}^{2}(\nu)\) are linear in \(\nu\). **Exponential-quadratic:** Assume that \(h(t,\nu)=\exp\{A_{0}(T-t)+A_{1}(T-t)\nu+\frac{1}{2}A_{2}(T-t)\nu^{2}\}\). Similarly, one has \[-A_{0}^{\prime} -A_{1}^{\prime}\nu-\frac{1}{2}A_{2}^{\prime}\nu^{2}+r-\beta+\frac {1}{2\gamma}\eta^{2}(\nu)+\beta(\ln\beta-A_{0}-A_{1}\nu-\frac{1}{2}A_{2}\nu^{ 2})+\left[m_{1}(\nu)+\frac{\eta(\nu)}{\gamma}\rho m_{2}(\nu)(1-\gamma)\right] (A_{1}+A_{2}\nu)\] \[+\frac{1}{2}m_{2}^{2}(\nu)\left[(1-\gamma)(A_{1}+A_{2}\nu)^{2}+A _{2}\right]+\frac{\rho^{2}}{2\gamma}m_{2}^{2}(\nu)(1-\gamma)^{2}(A_{1}+A_{2} \nu)^{2}=0,\] which means that the PDE (4.30) is solvable when \(\eta^{2}(\nu)\) is quadratic in \(\nu\), \(m_{1}(\nu)\) and \(\eta(\nu)m_{2}(\nu)\) is linear in \(\nu\), and \(m_{2}^{2}(\nu)\) is a constant. **Exponential-cubic:** Assume that \(h(t,\nu)=\exp\{A_{0}(T-t)+A_{1}(T-t)\nu+\frac{1}{2}A_{2}(T-t)\nu^{2}+\frac{1}{ 3}A_{3}(T-t)\nu^{3}\}\). By using the partial derivatives of \(h\) and the equation (4.30), we have \[-A_{0}^{\prime} -A_{1}^{\prime}\nu-\frac{1}{2}A_{2}^{\prime}\nu^{2}-\frac{1}{3}A_ {3}^{\prime}\nu^{3}+r-\beta+\frac{1}{2\gamma}\eta^{2}(\nu)+\beta(\ln\beta-A_{0 }-A_{1}\nu-\frac{1}{2}A_{2}\nu^{2}-\frac{1}{3}A_{3}\nu^{3})\] \[+\left[m_{1}(\nu)+\frac{\eta(\nu)}{\gamma}\rho m_{2}(\nu)(1- \gamma)\right](A_{1}+A_{2}\nu+A_{3}\nu^{2})+\frac{1}{2}m_{2}^{2}(\nu)\] \[\times\left[(1-\gamma)(A_{1}+A_{2}\nu+A_{3}\nu^{2})^{2}+A_{2}+2A_ {3}\nu\right]+\frac{\rho^{2}}{2\gamma}m_{2}^{2}(\nu)(1-\gamma)^{2}(A_{1}+A_{2 }\nu+A_{3}\nu^{2})^{2}=0.\] (C.41) Similar to the proof of Theorem 4.1, we can know that the terms involving \(\nu^{4}\) in (C) cannot be matched and then the conjecture of the exponential-cubic form of \(h\) cannot solve the problem (3.17) when \(\phi=1\). **Exponential-nth:** Assume that \(h(t,\nu)=\exp\{A_{0}(T-t)+\sum\limits_{k=1}^{n}\frac{1}{k}A_{k}(T-t)\nu^{k}\}\). 
By substituting the partial derivatives of \(h\) into (4.30), we can derive that \[-A_{0}^{\prime} -\sum\limits_{k=1}^{n}\frac{1}{k}A_{k}^{\prime}\nu^{k}+r-\beta+ \frac{1}{2\gamma}\eta^{2}(\nu)+\beta\left(\ln\beta-A_{0}-\sum\limits_{k=1}^{n} \frac{1}{k}A_{k}\nu^{k}\right)+\left[m_{1}(\nu)+\frac{\eta(\nu)}{\gamma}\rho m _{2}(\nu)(1-\gamma)\right]\sum\limits_{k=1}^{n}A_{k}\nu^{k-1}\] \[+\frac{1}{2}m_{2}^{2}(\nu)\left[(1-\gamma)\left(\sum\limits_{k=1} ^{n}A_{k}\nu^{k-1}\right)^{2}+\sum\limits_{k=2}^{n}(k-1)A_{k}\nu^{k-2}\right] +\frac{\rho^{2}}{2\gamma}m_{2}^{2}(\nu)(1-\gamma)^{2}\left(\sum\limits_{k=1}^{ n}A_{k}\nu^{k-1}\right)^{2}=0,\] which involves \(\nu^{n+1},\cdot\cdot\cdot,\nu^{2n-2}\) for \(n\geq 3\). It follows from the proof of Theorem 4.1 that these terms cannot be matched. Thus, when \(n\geq 3\), the exponential-polynomial form of \(h\) cannot solve the problem (3.17) with \(\phi=1\). ## Appendix D Proof of Theorem 4.4 Proof.: In the following proof process, we first consider the solvability of the PDE (4.35) in terms of \(\eta(\nu)\), \(m_{1}(\nu)\) and \(m_{2}(\nu)\) under the cases of constant, linear and quadratic. Then we show that the conjecture of \(h\) does not work under the cases of the cubic and higher order. **Exponential-constant:** Assume that \(h(t,\nu)=\exp\{A_{0}(T-t)\}\). Substituting the partial derivatives of \(h\) into (4.35) yields \[\frac{A_{0}^{\prime}}{1-\phi}+r-\zeta_{1}-\zeta_{2}(\phi\ln\beta-A_{0})+\frac{ 1}{2\gamma}\eta^{2}(\nu)+\frac{\beta\phi}{\phi-1}\left[\frac{1}{\beta}\bigg{(} \zeta_{1}+\zeta_{2}(\phi\ln\beta-A_{0})\bigg{)}-1\right]=0,\] which implies that the PDE (4.35) can be solvable for a constant \(\eta\) and any feasible \(m_{1}\) and \(m_{2}\). **Exponential-linear:** Assume that \(h(t,\nu)=\exp\{A_{0}(T-t)+A_{1}(T-t)\nu\}\). Similarly, one can obtain that \[\frac{1}{1-\phi} (A_{0}^{\prime}+A_{1}^{\prime}\nu)+r-\zeta_{1}-\zeta_{2}(\phi\ln \beta-A_{0}-A_{1}\nu)+\frac{1}{2\gamma}\eta^{2}(\nu)\] \[+\frac{\beta\phi}{\phi-1}\left[\frac{1}{\beta}\bigg{(}\zeta_{1}+ \zeta_{2}(\phi\ln\beta-A_{0}-A_{1}\nu)\bigg{)}-1\right]-\frac{1}{1-\phi} \left[m_{1}(\nu)+\frac{\eta(\nu)}{\gamma}\rho m_{2}(\nu)(1-\gamma)\right]A_{1}\] \[+\frac{1}{2(1-\phi)^{2}}m_{2}^{2}(\nu)\left[2-\phi-\gamma+\frac{ \rho^{2}(1-\gamma)^{2}}{\gamma}\right]A_{1}^{2}-\frac{1}{2(1-\phi)}m_{2}^{2}( \nu)A_{1}^{2}=0,\] which shows that the PDE (4.35) can be solvable when \(\eta^{2}(\nu)\), \(m_{1}(\nu)\), \(\eta(\nu)m_{2}(\nu)\) and \(m_{2}^{2}(\nu)\) are linear in \(\nu\). **Exponential-quadratic:** Assume that \(h(t,\nu)=\exp\{A_{0}(T-t)+A_{1}(T-t)\nu+\frac{1}{2}A_{2}(T-t)\nu^{2}\}\). 
Inserting the partial derivatives of \(h\) into (4.35) leads to \[\frac{1}{1-\phi} (A_{0}^{\prime}+A_{1}^{\prime}\nu+\frac{1}{2}A_{2}^{\prime}\nu^{ 2})+r-\zeta_{1}-\zeta_{2}(\phi\ln\beta-A_{0}-A_{1}\nu-\frac{1}{2}A_{2}\nu^{2})\] \[+\frac{\beta\phi}{\phi-1}\left[\frac{1}{\beta}\bigg{(}\zeta_{1}+ \zeta_{2}(\phi\ln\beta-A_{0}-A_{1}\nu-\frac{1}{2}A_{2}\nu^{2})\bigg{)}-1\right]\] \[+\frac{1}{2\gamma}\eta^{2}(\nu)-\frac{1}{1-\phi}\left[m_{1}(\nu) +\frac{\eta(\nu)}{\gamma}\rho m_{2}(\nu)(1-\gamma)\right](A_{1}+A_{2}\nu)\] \[+\frac{1}{2(1-\phi)^{2}}m_{2}^{2}(\nu)\left[2-\phi-\gamma+\frac{ \rho^{2}(1-\gamma)^{2}}{\gamma}\right](A_{1}+A_{2}\nu)^{2}-\frac{1}{2(1-\phi) }m_{2}^{2}(\nu)\left[(A_{1}+A_{2}\nu)^{2}+A_{2}\right]=0,\] which means that the PDE (4.35) can be solvable when \(\eta^{2}(\nu)\) is quadratic in \(\nu\), \(m_{1}(\nu)\) and \(\eta(\nu)m_{2}(\nu)\) is linear in \(\nu\), and \(m_{2}^{2}(\nu)\) is a constant. **Exponential-cubic:** Assume that \(h(t,\nu)=\exp\{A_{0}(T-t)+A_{1}(T-t)\nu+\frac{1}{2}A_{2}(T-t)\nu^{2}+\frac{1}{ 3}A_{3}(T-t)\nu^{3}\}\). By substituting the partial derivatives of \(h\) into (4.35), we can derive that \[\frac{1}{1-\phi} (A_{0}^{\prime}+A_{1}^{\prime}\nu+\frac{1}{2}A_{2}^{\prime}\nu^{ 2}+\frac{1}{3}A_{3}^{\prime}\nu^{3})+r-\zeta_{1}-\zeta_{2}\phi\ln\beta+\zeta_{ 2}(A_{0}+A_{1}\nu+\frac{1}{2}A_{2}\nu^{2}+\frac{1}{3}A_{3}\nu^{3})\] \[+\frac{1}{2\gamma}\eta^{2}(\nu)+\frac{\beta\phi}{\phi-1}\left[ \frac{1}{\beta}\bigg{(}\zeta_{1}+\zeta_{2}(\phi\ln\beta-A_{0}-A_{1}\nu-\frac {1}{2}A_{2}\nu^{2}-\frac{1}{3}A_{3}\nu^{3})\bigg{)}-1\right]\] \[-\frac{1}{1-\phi}\left[m_{1}(\nu)+\frac{\eta(\nu)}{\gamma}\rho m _{2}(\nu)(1-\gamma)\right](A_{1}+A_{2}\nu+A_{3}\nu^{2})\] \[+\frac{1}{2(1-\phi)^{2}}m_{2}^{2}(\nu)\left[2-\phi-\gamma+\frac{ \rho^{2}(1-\gamma)^{2}}{\gamma}\right](A_{1}+A_{2}\nu+A_{3}\nu^{2})^{2}\] \[-\frac{1}{2(1-\phi)}m_{2}^{2}(\nu)\left[(A_{1}+A_{2}\nu+A_{3}\nu^ {2})^{2}+A_{2}+2A_{3}\nu\right]=0,\] (D.42) which has terms involving \(\nu^{4}\). By the proof of Theorem 4.2, the terms involving \(\nu^{4}\) in (D) cannot be matched and thus the exponential-cubic form of \(h\) cannot approximately solve the problem (3.17) when \(\phi\neq 1\). **Exponential-nth:** Assume that \(h(t,\nu)=\exp\{A_{0}(T-t)+\sum\limits_{k=1}^{n}\frac{1}{k}A_{k}(T-t)\nu^{k}\}\). Substituting the partial derivatives of \(h\) into (4.35), we have \[\frac{1}{1-\phi} (A_{0}^{\prime}+\sum\limits_{k=1}^{n}\frac{1}{k}A_{k}^{\prime}\nu ^{k})+r-\zeta_{1}-\zeta_{2}\left(\phi\ln\beta-A_{0}-\sum\limits_{k=1}^{n}\frac{1} {k}A_{k}\nu^{k}\right)\] \[+\frac{\beta\phi}{\phi-1}\left[\frac{1}{\beta}\bigg{(}\zeta_{1}+ \zeta_{2}(\phi\ln\beta-A_{0}-\sum\limits_{k=1}^{n}\frac{1}{k}A_{k}\nu^{k}) \bigg{)}-1\right]\] \[+\frac{1}{2\gamma}\eta^{2}(\nu)-\frac{1}{1-\phi}\left[m_{1}(\nu)+ \frac{\eta(\nu)}{\gamma}\rho m_{2}(\nu)(1-\gamma)\right]\sum_{k=1}^{n}A_{k}\nu^{ k-1}\] \[+\frac{1}{2(1-\phi)^{2}}m_{2}^{2}(\nu)\left[2-\phi-\gamma+\frac{ \rho^{2}(1-\gamma)^{2}}{\gamma}\right]\left(\sum_{k=1}^{n}A_{k}\nu^{k-1} \right)^{2}\] \[-\frac{1}{2(1-\phi)}m_{2}^{2}(\nu)\left[\left(\sum_{k=1}^{n}A_{k }\nu^{k-1}\right)^{2}+\sum_{k=2}^{n}(k-1)A_{k}\nu^{k-2}\right]=0,\] which involves \(\nu^{n+1},\cdots,\nu^{2n-2}\) for \(n\geq 3\). It follows from the proof of Theorem 4.2 that these terms cannot be matched. Thus, when \(n\geq 3\), the conjecture of the exponential-polynomial form of \(h\) cannot approximately solve the problem (3.17) with \(\phi\neq 1\).
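To make the key step of the preceding proofs concrete, the short check below verifies symbolically that the \(\nu^{4}\)-cancellation condition used throughout Theorems 4.1-4.4, \(1-\gamma+\frac{\rho^{2}}{\gamma}(1-\gamma)^{2}=0\), i.e. \(\rho^{2}=\frac{\gamma}{\gamma-1}\), admits no admissible correlation \(\rho\in[-1,1]\) for any \(\gamma>0\) with \(\gamma\neq 1\). This snippet is only an illustrative verification added for the reader; the symbol \(s\) stands for \(\rho^{2}\) and is not part of the paper's notation.

```python
import sympy as sp

gamma = sp.Symbol("gamma", positive=True)   # relative risk aversion, gamma > 0 and gamma != 1
s = sp.Symbol("s", nonnegative=True)        # stands for rho**2; admissibility requires 0 <= s <= 1

# nu^4 cancellation condition (A.38): 1 - gamma + (s/gamma) * (1 - gamma)**2 = 0
condition = sp.Eq(1 - gamma + s / gamma * (1 - gamma) ** 2, 0)

s_star = sp.solve(condition, s)[0]
print(sp.simplify(s_star))                  # gamma/(gamma - 1)

# For 0 < gamma < 1 this value is negative (impossible for a real rho),
# and for gamma > 1 it exceeds 1 (impossible for a correlation):
print(sp.simplify(s_star - 1))              # 1/(gamma - 1), which is positive whenever gamma > 1
```

Since neither case yields \(0\leq\rho^{2}\leq 1\), the higher-order terms can never be matched, which is exactly why the exponential-polynomial ansatz fails for \(n\geq 3\) in both the infinite- and finite-horizon problems.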
2304.00057
SiMWiSense: Simultaneous Multi-Subject Activity Classification Through Wi-Fi Signals
Recent advances in Wi-Fi sensing have ushered in a plethora of pervasive applications in home surveillance, remote healthcare, road safety, and home entertainment, among others. Most of the existing works are limited to the activity classification of a single human subject at a given time. Conversely, a more realistic scenario is to achieve simultaneous, multi-subject activity classification. The first key challenge in that context is that the number of classes grows exponentially with the number of subjects and activities. Moreover, it is known that Wi-Fi sensing systems struggle to adapt to new environments and subjects. To address both issues, we propose SiMWiSense, the first framework for simultaneous multi-subject activity classification based on Wi-Fi that generalizes to multiple environments and subjects. We address the scalability issue by using the Channel State Information (CSI) computed from the device positioned closest to the subject. We experimentally prove this intuition by confirming that the best accuracy is experienced when the CSI computed by the transceiver positioned closest to the subject is used for classification. To address the generalization issue, we develop a brand-new few-shot learning algorithm named Feature Reusable Embedding Learning (FREL). Through an extensive data collection campaign in 3 different environments and 3 subjects performing 20 different activities simultaneously, we demonstrate that SiMWiSense achieves classification accuracy of up to 97%, while FREL improves the accuracy by 85% in comparison to a traditional Convolutional Neural Network (CNN) and up to 20% when compared to the state-of-the-art few-shot embedding learning (FSEL), by using only 15 seconds of additional data for each class. For reproducibility purposes, we share our 1TB dataset and code repository.
Khandaker Foysal Haque, Milin Zhang, Francesco Restuccia
2023-03-31T18:19:23Z
http://arxiv.org/abs/2304.00057v1
# SiMWiSense: Simultaneous Multi-Subject Activity Classification Through Wi-Fi Signals ###### Abstract Recent advances in Wi-Fi sensing have ushered in a plethora of pervasive applications in home surveillance, remote healthcare, road safety, and home entertainment, among others. Most of the existing works are limited to the activity classification of a single human subject at a given time. Conversely, a more realistic scenario is to achieve _simultaneous_, _multi-subject_ activity classification. The first key challenge in that context is that the number of classes grows exponentially with the number of subjects and activities. Moreover, it is known that Wi-Fi sensing systems struggle to adapt to new environments and subjects. To address both issues, we propose SiMWiSense, the first framework for simultaneous multi-subject activity classification based on Wi-Fi that generalizes to multiple environments and subjects. We address the scalability issue by using the Channel State Information (CSI) computed from the device positioned closest to the subject. We experimentally prove this intuition by confirming that the best accuracy is experienced when the CSI computed by the transceiver positioned closest to the subject is used for classification. To address the generalization issue, we develop a brand-new few-shot learning algorithm named Feature Reusable Embedding Learning (FREL). Through an extensive data collection campaign in 3 different environments and 3 subjects performing 20 different activities simultaneously, we demonstrate that SiMWiSense achieves classification accuracy of up to 97%, while FREL improves the accuracy by 85% in comparison to a traditional Convolutional Neural Network (CNN) and up to 20% when compared to the state-of-the-art few-shot embedding learning (FSEL), by using only 15 seconds of additional data for each class. For reproducibility purposes, we share our 1TB dataset and code repository1[1]. Footnote 1: [https://github.com/kfoysalhaque/SiMWiSense](https://github.com/kfoysalhaque/SiMWiSense) ## I Introduction Wi-Fi is one of the most pervasive wireless technologies worldwide - it has been estimated that by 2025, the Wi-Fi economy will reach a value of $4.9T [2]. Beyond ubiquitous indoor connectivity, Wi-Fi also allows to develop highly-pervasive device-free sensing applications. The latter are based on the intuition that the received Wi-Fi signals - in particular, the Channel State Information (CSI) computed to perform channel estimation and equalization - are affected by changes in the physical environment caused by any entity in between the source and the receiver. Among other applications, Wi-Fi sensing can be used for fine-grained indoor localization [3], activity recognition [4, 5], and health monitoring [6]. For an excellent survey on the topic, we refer the reader to [7]. Most of the relevant existing work - discussed in detail in Section V - focuses on performing classification of a single subject at a given time [8, 9, 10, 11]. Even though achieving acceptable sensing performance, a significantly more relevant (and realistic) problem is performing _simultaneous, multi-subject_ Wi-Fi sensing. Moreover, it is well known that Wi-Fi sensing is highly-dependent of the considered subject and environment [12]. Although some attempts to address the issue have been made, they consider few activities - less than 5 [13] - or do not consider multi-subject classification [10]. 
In stark contrast to the existing works, we propose SiMWiSense, a completely novel approach for simultaneous multi-subject activity classification through Wi-Fi. Figure 1 shows a high-level overview of our approach. Beyond the generalization issue, the key challenge addressed by SiMWiSense is that by defining as \(n\) and \(m\) respectively the number of subjects and activities, the number of classes to distinguish becomes \(n^{m}\). For example, 3 subjects and 10 activities correspond to more than 59,000 classes. To address this critical issue, we utilize multiple Wi-Fi devices as CSI collectors, where the _closest_ to a given subject will classify the activities conducted by that subject. We experimentally prove in Section II that the device closest to the subject will dominantly characterize the channel property between itself and the source of the Wi-Fi signal. Finding the closest device to a given subject falls under the Wi-Fi indoor localization and/or fingerprinting problem [14, 15, 16], which has been thoroughly investigated and thus considered out of our scope. Although assigning a device to a subject addresses the scalability issue - the classifier output becomes \(m\) sized - the overall performance may significantly degrade with new untrained environments and subjects. Thus, we developed a novel Few-Shot Learning (FSL) architecture which can adapt to any new environment, change Fig. 1: High-level overview of SiMWiSense. in environment or any new subject with up to 15 seconds of new data for each class. _Summary of Novel Contributions_ \(\bullet\) We present SiMWiSense, the first framework for multi-subject simultaneous activity classification using Wi-Fi (Section III). Unlike existing approaches, SiMWiSense can distinctly classify among different human subjects performing multiple activities simultaneously by utilizing multiple CSI collectors, each associated to a given subject; \(\bullet\) To address the challenge of generalizing to new environments and subjects, we propose a novel FSL-based architecture called Feature Reusable Embedding Learning (FREL). In stark contrast to existing approaches, FREL can adapt to any new environment and subject through two main steps, namely meta-learning and fine-tuning. Moreover, in contrast to the traditional FSL, FREL combines both the embedding learning and meta-learning approaches to achieve better performance through fine-tuning the classifier with only a few additional samples (Section III); \(\bullet\) We extensively evaluate SiMWiSense through an exhaustive data collection campaign in 3 different environments and with 3 subjects performing 20 different activities simultaneously. We demonstrate that SiMWiSense achieves classification accuracy of up to 98%, while FREL improves the accuracy by 85% in comparison to a traditional convolutional neural network (CNN) and up to 30% when compared to the state of the art few-shot embedding learning (FSEL) [17], by only using 15 seconds of additional data for each class. **For reproducibility, we share our whole dataset, captured video streams of the activities as ground truth, and our code repository [1].** ## II Sensing Proximity Test Wi-Fi sensing leverages tiny changes in the CSI computed through pilot symbols included in the physical layer (PHY) preamble. 
Although the CSI may be captured by monitoring a transmission link between Access Point (AP) and the stations (STAs) without any direct communication with the AP, a monitoring device captures the CSI of the propagation channel between itself and the AP. Thus, when the CSI monitors are spatially distant enough, they would monitor the independent propagation path between the corresponding antenna pair of the AP and itself [18]. Our key intuition is that the captured CSI is dominantly characterized by any physical change in the environment at spatially closer proximity. To evaluate this, we perform the following extensive preliminary tests which demonstrate the viability of our key concept. We have performed the sensing proximity test in 3 different environments with 3 different subjects and 20 activities. We assign a CSI monitor to each of the subjects. The monitors are placed at a distance of 1.5m - 3.0m from each other, whereas one human subject performs activity at a distance of 1.5m - 2.0m from each of the sensing monitors for each of the environments. We considered three different environments as explained in Section IV-A. The experimental setup is shown in Figure 2. We define the subjects with the closest proximity to the CSI Monitor 1, CSI Monitor 2 and CSI Monitor 3 as Subject 1, Subject 2 and Subject 3, respectively. From each environment, CSI is collected in three separate rounds where in every round a subject does 20 different activities and other subjects perform random activities. Figure 3 confirms our intuition. For example, it shows that in the classroom environment, the accuracy of Subject 1 is 95% from the CSI data of Monitor 1 whereas, with the exact same setup and tests, the accuracy of Subject 2 and Subject 3 decreases by 30% on an average with Monitor 1. This is because they are comparatively farther away from Subject 1 and more prone to the noises created by the other subjects at that instant. However, their performances improve drastically to 96% and 97% when we consider CSI Monitor 2 and CSI Monitor 3 for Subject 2 and Subject 3 respectively. The other two environments follow similar trends. **This clearly demonstrates that the CSI monitor closest to the subject performs better than other CSI monitors.** ## III Overview of SiMWiSense We describe the SiMWiSense framework dividing it into three main task blocks: (i) sensing block (ii) preprocessing block and (iii) learning block as presented in Figure 4. Fig. 3: Classification accuracy as function of sensing proximity. Fig. 2: SiMWiSense proximity test. ### SiMWiSense Sensing and Preprocessing Blocks The sensing block of SiMWiSense collects the CSI of Wi-Fi transmissions. Modern Wi-Fi systems are based on the Orthogonal Frequency Division Multiplexing (OFDM) modulation which processes multiple data streams in parallel over multiple orthogonal subcarriers. Each spatially diverse CSI monitor captures \(S\) samples during the time interval \(T=t-t^{\prime}\) with \(K\) orthogonal parallel subcarriers. Thus, the extracted CSI matrix of an \(M\times N\) system is shown in Equation 1. 
\[H_{r}^{m,n}=\begin{bmatrix}h_{1,1}^{m,n}&\ldots&h_{1,k}^{m,n}&\ldots&h_{1,K}^{m,n}\\ \vdots&\ddots&\vdots&\ddots&\vdots\\ h_{s,1}^{m,n}&\ldots&h_{s,k}^{m,n}&\ldots&h_{s,K}^{m,n}\\ \vdots&\ddots&\vdots&\ddots&\vdots\\ h_{S,1}^{m,n}&\ldots&h_{S,k}^{m,n}&\ldots&h_{S,K}^{m,n}\end{bmatrix} \tag{1}\] Here, \(H_{r}^{m,n}\) denotes the CSI matrix at receiver \(r\) for the transmit antenna \(m\) and receive antenna \(n\), where \(1\leq n\leq N\) and \(1\leq m\leq M\). The value \(h_{s,k}^{m,n}\) denotes the CSI of the \(s\)-th sample at the \(k\)-th subcarrier from transmit antenna \(m\) to receive antenna \(n\). For example, during the time interval \(T=0.2s\), if any CSI monitor captures \(S=600\) samples over a channel of 80 MHz bandwidth, \(H_{r}^{m,n}\) will have \(S\times K\) components, where \(K=242\). It is worth mentioning that even though the total number of subcarriers in an 80 MHz channel is 256, we only consider the data-transmitting subcarriers, discarding the null and guard ones. After the collection of the CSI samples, we preprocess the captured data, as presented in Figure 5, before it is fed to the learning block. After \(S\) samples are collected, we align the data by discarding the missing and/or corrupted CSI measurements. Moreover, any abrupt amplification in the data is removed by normalizing with the mean CSI amplitude. Then, the captures are segmented with a fixed-size non-overlapping window along the time domain. If the total number of samples captured during any time interval \(T\) is \(S\), such that \(T=T_{1}+T_{2}+T_{3}+\cdots+T_{n}\) where \(T\) is divided into \(n\) equal time windows, then \(S=S_{1}+S_{2}+S_{3}+\cdots+S_{n}\) are the corresponding sample captures of the \(n\) time segments. Thus, each window has the tensor dimension \(S_{p}\times K\times N\), where \(S_{p}\) is the number of samples in the \(p\)-th time window \(T_{p}\), and \(N=2\) accounts for the two components of the complex CSI measurement. This processed data is then fed to the input of the learning block. ### SiMWiSense Learning Block One of the challenges in multi-subject detection is scalability. For \(P\) persons and \(Q\) activities, there are \(Q^{P}\) possible combinations, resulting in an exponential increase in the number of classes. A single centralized model to classify multi-subject activities becomes impractical when \(P\) and \(Q\) are large. To tackle this problem, we propose a decentralized detection system for each subject. Specifically, a learning model is assigned to each device to sense the subject which is closest to it. Therefore, each subject only requires \(Q\) detection regions in hyperspace. For \(P\) subjects, the overall complexity reduces to \(P\times Q\). This sensing system has two assumptions: (i) there will be at least the same number of CSI collectors as subjects; (ii) the subject closest to the device takes the most significant part in shaping the channel property between the device and the AP. Assumption (i) is reasonable since nowadays, almost everyone is inseparable from their smart devices, such as laptops or smartphones, in their work and daily lives. For (ii), we developed an experiment based on sensing proximity as shown in Section II. To further decrease complexity, we propose a cascaded model for two-stage detection. As shown in Figure 6, in the first stage, a deep learning (DL) model is used to discriminate different subjects \(S\) coarsely. After the coarse detection, another fine-grained DL model is used to determine the activities \(A\). 
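To make the two-stage cascade concrete, a minimal sketch of the per-device inference flow is given below. The function and label names (e.g., `subject_model`, `"no activity"`) are illustrative assumptions rather than the released SiMWiSense implementation; the point is that each device first resolves which subject (if any) it is responsible for and only then invokes the shared fine-grained activity classifier.

```python
def cascaded_inference(csi_window, subject_model, activity_model):
    """Two-stage detection on one CSI window of shape (S_p, K, 2).

    Stage 1: a coarse classifier decides which subject (or "no activity")
             dominates the channel observed by this device.
    Stage 2: a single fine-grained model, shared by all subjects,
             classifies the activity of the detected subject.
    """
    subject = subject_model(csi_window)      # e.g., "Sub1", "Sub2", "Sub3", or "no activity"
    if subject == "no activity":
        return subject, None
    activity = activity_model(csi_window)    # one of the Q activity classes
    return subject, activity
```

Because the fine-grained model is shared, each device only has to distinguish a handful of subject labels plus the Q activities, rather than the exponentially many joint combinations discussed above.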
Regardless of the output at the first stage, all the subjects will share the same fine-grained model at the second stage. Thus, the overall complexity becomes \(P+Q\). Fig. 4: Overview of SiMWiSense and Processing Blocks. Fig. 5: CSI data processing in SiMWiSense. Fig. 6: Cascaded Learning Block. One challenging problem of the hierarchical detection model is that even if different persons perform the same activities, their movements will have personal patterns and gestures. Furthermore, subjects may join or leave the detection system. Thus, it is impractical to have a universal classifier for activity detection. In addition, the performance of data-driven algorithms in wireless sensing is usually degraded by time-varying channel conditions. Thus, we need a model which can swiftly adapt to new subjects and channel features. ### _FREL Learning Algorithm_ We propose a novel FSL architecture named FREL, which allows the DL model to adapt to new scenarios with only a small amount of data. We utilize this algorithm in both the subject detection and activity detection stages. Next, we discuss the algorithm in detail. FSL, which aims at training models that can rapidly generalize to new tasks with only a limited number of labeled samples, is a strong candidate to tackle the data collection problem. One approach to FSL is to learn an embedding for multiple tasks [19, 20]. Specifically, a deep neural network (DNN) is used to learn a clustered mapping from the input to the latent space. At inference time, the embedding network does not need to be fine-tuned, and a few samples are used as references to classify unobserved data. Another approach is meta-learning [21, 22], which involves two phases: (i) meta-training and (ii) fine-tuning. Meta-learning aims to learn shared features between different tasks during the meta-learning phase and to quickly optimize the parameters with a few data points during the fine-tuning stage. For the first time, FREL combines embedding learning and meta-learning. Figure 7 demonstrates the structure of our FREL model. First, a DNN model is used to learn the embedding of the input, and a classifier is used to decode the features in the latent space. Similar to meta-learning, FREL consists of meta-learning and fine-tuning stages. We train the embedding network and classifier jointly with a mini-dataset during the meta-learning phase, which is the same as embedding learning. After obtaining the embedding, we further optimize the classifier with a few samples at test time. In contrast to embedding learning, we believe fine-tuning can provide better flexibility and granularity, which is more suitable for a dynamic system. Different from meta-learning, we only retrain the simple classifier instead of the whole structure, which reduces the computation, enabling a faster adaptation to new tasks. This design is inspired by [23], which shows that the effectiveness of meta-learning is mainly due to feature reuse. It is a simplified version of MAML [21] that fine-tunes only the last few layers and can achieve performance comparable to the original algorithm. Formally, we consider the embedding network as a function \(E_{\theta}:X\to Z\), where \(Z\) denotes the latent vector. The classifier \(C_{\phi}:Z\to Y\) learns a mapping between the encoded features \(Z\) and the labels \(Y\). \(\theta\) and \(\phi\) are the trainable parameters of the embedding network and classifier, respectively. 
Hence, the overall system \(F_{\psi}(X)=Y\) can be written as \(C_{\phi}(E_{\theta}(X))=Y\), where \(\psi=\{\theta,\phi\}\) denotes the full set of trainable parameters of the system. In FREL, models are trained on a set of mini-batches of data that only have N different classes (ways) and K samples (shots) of each class. Each batch of few-shot data can be considered as a new task \(\tau_{j}=\{(x_{i}^{j},y_{i}^{j})\}|_{i=1}^{m}\) in meta-learning. \(m=N\times K\) denotes the total number of samples in one batch. The objective of meta-learning is to find a set of parameters \(\psi\) that minimizes the expectation of the loss function \(\mathcal{L}\) with respect to a group of meta-learning tasks \(\mathcal{T}=\{\tau_{j}\}|_{j=1}^{n}\), i.e., \[\min_{\{\theta,\phi\}}\quad\frac{1}{n}\sum_{j=1}^{n}[\frac{1}{m}\sum_{i=1}^{m}\mathcal{L}(C_{\phi}(E_{\theta}(x_{i}^{j})),y_{i}^{j})] \tag{2}\] We merge the task set \(\mathcal{T}\) into a single dataset \(\mathbb{D}^{train}\) to get a better embedding, which is given by \[\begin{split}\mathbb{D}^{train}&=\tau_{1}\cup\cdots\cup\tau_{j}\cup\cdots\cup\tau_{n}\\ &=\{(x_{i}^{1},y_{i}^{1})\}|_{i=1}^{m}\cup\cdots\cup\{(x_{i}^{n},y_{i}^{n})\}|_{i=1}^{m}\end{split} \tag{3}\] We notice that by merging multiple tasks into a single dataset, the optimization problem in Equation 2 can be reduced to a general DL problem, which can be solved by a gradient descent optimizer iteratively, \[\{\theta,\phi\}=\{\theta,\phi\}-\alpha\frac{1}{mn}\sum_{i=1}^{mn}\nabla_{\{\theta,\phi\}}\mathcal{L}(C_{\phi}(E_{\theta}(x_{i})),y_{i}) \tag{4}\] where \(mn\) is the total number of data points in \(\mathbb{D}^{train}\) and \(\alpha\) is the learning rate. Once the optimal embedding \(\theta^{*}\) is obtained, the classifier is fine-tuned on another small portion of data \(\mathbb{D}^{tune}\). Unlike training on a combined set during meta-learning, at each iteration we randomly sample K shots from each of the N ways in \(\mathbb{D}^{tune}\) to build a new task \(\tau\) and update the classifier by gradient descent, \[\phi=\phi-\beta\frac{1}{m}\sum_{i=1}^{m}\nabla_{\phi}\mathcal{L}(C_{\phi}(E_{\theta^{*}}(x_{i})),y_{i}) \tag{5}\] where \(\beta\) denotes the learning rate in the fine-tuning phase. Finally, performance is evaluated on the remaining unseen dataset \(\mathbb{D}^{test}\). We summarize FREL in Algorithm 1. It is worth pointing out that although FREL is initially proposed for Wi-Fi sensing, the architecture is presented in general terms since it can be used for other FSL purposes. Next, we discuss the specific setup that is used in the SiMWiSense learning block. Fig. 7: Feature Reusable Embedding Learning (FREL). \(\bullet\)**Embedding network:** Figure 8 shows the DNN architecture we use for embedding learning in FREL. It is composed of 4 convolutional layers, each followed by batch normalization and Rectified Linear Unit (ReLU) activation. Each convolutional layer comprises 64 channels with a kernel size of \(3\times 3\). After the first three convolutional layers, \(2\times 2\) max pooling layers are used to downsample the previous layer's output. After the fourth convolutional layer, a global average pooling strategy is chosen to extract the features to the latent space, resulting in a 64-dimensional feature space. 
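The embedding network described above, together with the linear classifier discussed in the next bullet, can be sketched in PyTorch as follows. This is an illustrative reconstruction from the description in this section, not the authors' released code: the input channel ordering of the \(S_{p}\times K\times 2\) windows, the module names, and the training-loop details are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Embedding(nn.Module):
    """Four 3x3 conv blocks with 64 channels, BatchNorm and ReLU; 2x2 max pooling
    after the first three blocks; global average pooling to a 64-dim latent vector."""
    def __init__(self, in_channels=2):                      # 2 = the two components of the complex CSI
        super().__init__()
        def block(cin):
            return nn.Sequential(nn.Conv2d(cin, 64, kernel_size=3, padding=1),
                                 nn.BatchNorm2d(64), nn.ReLU())
        self.b1, self.b2, self.b3, self.b4 = block(in_channels), block(64), block(64), block(64)

    def forward(self, x):                                   # x: (batch, 2, S_p, K)
        x = F.max_pool2d(self.b1(x), 2)
        x = F.max_pool2d(self.b2(x), 2)
        x = F.max_pool2d(self.b3(x), 2)
        x = self.b4(x)
        return x.mean(dim=(2, 3))                           # global average pooling -> (batch, 64)

class FREL(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.embed = Embedding()
        self.classifier = nn.Linear(64, num_classes)        # plain linear decoder, no nonlinearity

    def forward(self, x):
        return self.classifier(self.embed(x))

def fine_tune_step(model, optimizer, episode_x, episode_y):
    """One Phase-2 update (Equation 5): only the classifier is optimized on an
    N-way K-shot episode, while the pre-trained embedding is frozen (feature reuse)."""
    with torch.no_grad():
        z = model.embed(episode_x)
    loss = F.cross_entropy(model.classifier(z), episode_y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

During meta-learning (Equation 4), both `embed` and `classifier` would be updated jointly on \(\mathbb{D}^{train}\) with the same cross-entropy loss; for fine-tuning, one would build the optimizer only over the classifier parameters, e.g. `torch.optim.Adam(model.classifier.parameters(), lr=0.01)`, matching the learning strategy described below.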
```
Phase 1: FREL meta-learning
Require: learning rate \(\alpha\), dataset \(\mathbb{D}^{train}\)
Initialize: \(\theta\) for embedding, \(\phi\) for classifier
for iteration = 1, 2, ... do
    update \(\theta\) and \(\phi\) with \(\mathbb{D}^{train}\) by Equation 4
end for
Return: \(\theta^{*}\) for embedding

Phase 2: FREL fine-tuning
Require: learning rate \(\beta\), dataset \(\mathbb{D}^{tune}\)
Initialize: \(\phi\) for classifier
for epoch = 1, 2, ... do
    for episode = 1, 2, ... do
        sample a task \(\tau=\{(x_{i},y_{i})\}|_{i=1}^{m}\) from \(\mathbb{D}^{tune}\)
        update \(\phi\) with \(\tau\) by Equation 5
    end for
end for
Return: \(\phi^{*}\) for classifier
```
**Algorithm 1** Feature Reusable Embedding Learning
* **Classifier:** A fully-connected layer is used on top of the pre-trained embedding network as a linear decoder. Nonlinearity functions such as ReLU are not applied, since we aim to study the efficacy of the overall FREL design rather than develop complicated DNN models. To investigate the effectiveness of fine-tuning in FREL, we also implement an untrainable K-Nearest Neighbor (K-NN) algorithm as a comparison after the meta-learning phase, following the same procedure as another state-of-the-art FSEL model [17]. During the inference time of K-NN, K samples from each class are transformed into embeddings as supports, and the queries are classified by a plurality vote of the K nearest supports.
* **Mini-dataset:** Usually, in general FSL, datasets such as Omniglot [24] and Mini-ImageNet [19] contain a large number of tasks and a small number of samples in each task. Algorithms are first pre-trained on multiple tasks and then fine-tuned and tested on a single specific task. However, it is never feasible to get a dataset with comprehensive tasks in wireless sensing, since the changing environment can always generate new tasks that models have never seen before. Thus, one significant difference in our implementation is the mini-dataset. We only utilize a limited amount of data, collected in 15 seconds, for \(\mathbb{D}^{tune}\). The test set \(\mathbb{D}^{test}\) is never exposed to the model during pre-training and fine-tuning. The mini-dataset makes the problem more challenging, as models are learned not only from a few samples but also from a few tasks.
* **Learning strategy:** We evaluate our model with 5-shot learning, which means we have 5 samples for each class in every mini-batch. We choose Adam as the optimizer in both phases. The learning rates \(\alpha\) and \(\beta\) are both 0.01. Cross-entropy loss is used during the pre-training and fine-tuning stages for simplicity. Other metrics such as deep k-means [25] and prototypical loss [20] can be applied for different purposes. ## IV Experimental Evaluation ### _Experimental Setup_ We evaluate the multi-subject sensing capability of SiMWiSense as well as the generalization capability of FREL through an extensive data collection campaign in 3 different environments: classroom, office and kitchen, with 3 human subjects performing 20 different activities in random order. We used off-the-shelf Netgear Nighthawk X4S AC2600 routers to set up the network, whereas IEEE 802.11ac-compliant Asus RT-AC86U routers with the Nexmon tool are used as the CSI extractors [26]. The AP and the STAs are configured with \(M=3\) antennas and \(N_{ss}=3\) spatial streams, while the CSI monitors are configured with \(M=1\) antenna and \(N_{ss}=1\) spatial stream, respectively. Fig. 8: CNN Embedding Network. Fig. 9: Experimental Setup of SiMWiSense. We send UDP packets from the AP to the STAs to trigger the CSI monitors. 
The CSI has been collected at center frequency \(f_{c}=5.21GHz\) (i.e., channel 42) with signals having 80 MHz bandwidth. Three CSI monitors were placed in each of the environments at a distance of 1.5m - 2.0m from each other as shown in Figure 9. Three different subjects performed all the 20 activities: _push forward, rotate, hands up and down, waive, brush, clap, sit, eat, drink, kick, bend forward, wash hands, call, browsing phone, check wrist, read, waive while sitting, writing, side bend, and standing_ at a distance of 1.5m -2.0m from the CSI monitors. Three separate models are trained for each of the monitors with the data of each corresponding subject performing the activities while the other subjects do random activities. For example, the model for Monitor 1 is trained with Subject 1 performing each of the 20 activities for 5 minutes while Subject 2 and Subject 3 perform different random activities in random order. Similarly, the data from each of the monitors are collected at the mentioned three different environments and for each subject. To evaluate the simultaneous multi-subject sensing, testing data has been collected while different subjects perform randomly chosen activities simultaneously. The experimental setup and the activity zone of the subjects at the mentioned three experimental sites are presented in Figure 10. To create the ground truth, we captured the video streams of the subjects performing different activities in synchronization with the data collection. ### _Performance Evaluation of SiMWiSense_ A time window size of 0.1s is considered for data segmentation, while each data window has 50 samples for all the tests performed in IV-B. Firstly, we do the performance evaluation of our two-stage detection system: (i) subject identification (ii) activity classification with baseline CNN as shown in Figure 8. Then, we demonstrate the generalization capability of FREL in both of the stages of the detection system and compare the performance of FREL with both baseline CNN and state-of-the-art FSEL [17] model. In the rest of IV-B, Monitor 1, Monitor 2 and Monitor 3 are denoted as M1, M2, and M3 respectively whereas Subject 1, Subject 2 and Subject 3 are presented as Sub1, Sub2 and Sub3 respectively. **Baseline CNN:** In the first step of the two-stage SiMWiSense detection, each CSI monitor classify Sub1, Sub2, Sub3 or "no activity". The subject identification performances of each of the monitors in three environments are presented in Figure 11. The results show that with the baseline CNN, the average accuracy of M1, M2 and M3 across all three environments are 95.47%, 96.68% and 97.34% respectively, which shows no significant performance discrepancy. On the other hand, the average performances in three different environments are 95.73%, 98.77% and 94.87% respectively which shows an average of 3.51% performance boost in office compared to the other environments. It is caused by the Non-line-of-sight propagation between the monitors and the distant subjects due to the presence of desks and computers, causing less noise in identifying the closest subject. The simultaneous multi-subject activity classification performance of SiMWiSense with baseline CNN is presented in Figure 12. The average accuracy in the environments: office, classroom and kitchen are 98.51%, 97.37% and 97.49% respectively which follows the similar trend in performance depicting the stability and robustness of SiMWiSense. 
Moreover, the performance discrepancy of monitors M1, M2 and M3 are less than 2% achieving an average accuracy of 98.0%, 96.84% and 98.54% respectively with M1, M2 and M3. **Performance as a function of subcarrier resolution:** It is known that Wi-Fi sensing performs worse with lower subcarrier resolutions [6, 27]. To compensate the lower subcarrier resolution, one can adopt extensive feature extraction techniques or higher sampling frequency which would increase the computation burden by intensifying the pre-processing steps and learning process dramatically. This stimulates us to study the trade-off of the subcarrier resolution and the SiMWiSense performance. Figure 13 shows the performance of SiMWiSense as a function of the number of subcarriers. The first consecutive 20, Fig. 11: Subject identification in SiMWiSense with baseline CNN Fig. 10: Data collection locations. 40, 80 and 160 and 242 subcarriers are considered to emulate a sensing system with a lower bandwidth - thus, less number of consecutive subcarriers. The results show that the average performance of monitors decrease to 84.92% when the number of subcarriers decrease from 242 to 80, and it goes down to 62.76% with a percent decrease of 34.94 when we switch to only 20 subcarriers. Figure 14 presents the confusion matrices of baseline CNN when trained with 242 and 20 subcarriers respectively where they achieved an accuracy of 98.66% and 64.37% respectively. It is evident that, with only 20 subcarriers, the model gets confused with few activities whereas it performs comparatively better in other activities. The top three classes which are hardest to distinguish at lower subcarrier resolutions are: wash hands (index 17), rotate (index 10), and brush (index 15). However, it is noticeable that when we switch to 20 subcarriers from 242, the input tensor dimension reduces by 12 times from \(50\times 242\times 2=24200\) to \(50\times 20\times 2=2000\) and still achieve around 64% accuracy on an average in the considered scenarios. **SiMWiSense performance with FREL:** Even though the performance of the traditional CNN is quite good, they fail to generalize the environments or subjects. In such instances for generalizing to new environments and subjects, FREL excels in comparison to the traditional CNN. The performance of the FREL in new untrained environment are presented and compared with traditional CNN and FSEL [17] in Figure 15 and Figure 16 respectively. The results show that the overall performance of SiMWiSense with FREL in new untrained environments improves by 86.06%, 87.56% and 86.90% when the embedding network is trained in classroom, office and kitchen respectively in comparison to the baseline CNN. The highest performance achieved by FREL in any new untrained environment is 97.24% in the kitchen with monitor M1 whereas the lowest accuracy is 89.33% in the kitchen with monitor M2. Thus, it demonstrates the robustness, reliability and strong adaptive capability of FREL to new environments. As shown in Figure 16, FREL surpasses the FSEL by 25.71%, 17.43%, and 15.56% respectively when the FREL is trained in the classroom, office and kitchen and tested on other corresponding environments. Figure 17 presents the confusion matrices of FREL and FSEL when trained with monitor M1 in the classroom and tested with monitor M1 in the office. It is evident from Figure 17b that the accuracy drop of FSEL is caused by a few activities which it finds difficult to distinguish. 
The top three activities Figure 14: Confusion matrices of baseline CNN with 242 and 20 subcarriers respectively (in classroom, with monitor M2). Figure 12: Performance of SiMWiSense at three different environments with baseline CNN as presented in Figure 8 Figure 13: Performance of SiMWiSense with baseline CNN as a function of number of subcarriers (environment: classroom). Figure 15: Performance of FREL in simultaneous activity sensing with new untrained environments. which FSEL finds most difficult to distinguish are waving while sitting (index 18), rotating (index 10) and eating (index 3). On the other hand, the top three distinct activities for FSEL are drinking (index 6), waving (index 8) and phone call (index 13). **FREL in generalizing subject identification across environments:** FREL can also generalize the subject identification in new untrained environments, as presented in Figure 18. In fact, it can achieve up to an accuracy of 96.53% whereas the traditional CNN only limits to 6.19% compared to the 93.53% for FREL on an average across the monitors and the environments. As depicted in Figure 19, when compared to FSEL, proposed FREL demonstrates an accuracy boost of 17.79%, 17.93%, and 17.55% in classroom, office and kitchen, respectively. Thus, FREL signifies stable and reliable performances across the environments in generalizing the'subject identification' phase of the learning also. **FREL in generalizing the monitors across the environment:** It would be interesting to see how a FREL model trained in one monitor can adapt and generalize to other monitors, thus generalizing new subjects of the same environment. It would further lessen the training time and drastically reduce the system deployment complexity. Figure 20 presents the performance of FREL with new untrained monitors across the three different environments. It can achieve an average accuracy of 91.71%, 94.51% and 94.78% in generalizing the other monitors while trained with Monitor 1, Monitor 2 and Monitor 3, respectively. Thus FREL enables any system to be trained only on one monitor and deployed with n number of monitors with only 15s new data samples from each monitor. ## V Related Work A significant amount of research in Wi-Fi sensing has been performed over the last few years. The reader may refer to the following surveys for a good compendium of the state of Fig. 16: Performance comparison of FREL with FSEL in new untrained environments. Fig. 17: Confusion matrices for FREL and FSEL when trained in classroom, with monitor M1 and tested in office, with monitor M1. Fig. 18: Performance of FREL as the subject identifier in untrained environments Fig. 19: Performance comparison of FREL and FSEL as the subject identifier in untrained environments the art [7, 28, 29]. There have been several approaches to address the challenges of Wi-Fi sensing, which includes received signal strength indicator (RSSI) based approaches [30, 31], and passive Wi-Fi radar (PWR) [32, 33]. Wi-Fi sensing leveraging beamforming feedback information (BFI) which can be captured from any Wi-Fi network is the state-of-the-art approach [34]. However, CSI-based Wi-Fi sensing is by far the most popular approach. DL has already been proven to be effective in various CSI-based Wi-Fi sensing applications, including human activity classification [35, 36, 37], gesture recognition [38, 39], health-monitoring [40, 41, 42], human counting [43, 44], indoor localization [15]. 
To explore other DL-based CSI sensing applications, we refer the reader to [7, 45]. A few interesting works on human activity classification are briefly discussed below. Shalaby et al. [35] proposed four different DL models, namely a CNN with a Gated Recurrent Unit (GRU), a CNN with a GRU and attention, a CNN with a GRU and a second CNN, and a CNN with a Long Short-Term Memory (LSTM) network and a second CNN, achieving accuracies of up to 99.46%. However, these models only consider a single subject performing six activities and cannot generalize to new untrained environments. MCBCAR by Wang et al. [46] used a generative adversarial network (GAN) and semi-supervised learning to address the challenge of non-uniformly distributed data caused by environmental dynamics. Even though this work considers dynamic changes in the environment, the framework is not designed to adapt to new untrained environments or to simultaneous multi-subject sensing. The AFSL-HAR framework by Wang et al. [47] achieves a performance gain in recognizing new activities from a few samples of new data through few-shot learning. Even though this work addresses the challenge of a new scenario, activity or subject through fast adaptation with few new samples, the framework classifies only one subject at a time in any environment. Ding et al. proposed WiLSensing [48] to address the challenge of variations in activity locations within the same environment using few new data samples through a Protonet. However, it is not evident how WiLSensing would adapt to a completely new scenario or a new subject. Ding et al. proposed RF-Net [49], a meta-learning framework that adapts to new environments with few labeled data samples. However, RF-Net's CSI-based sensing performance ranges between 70% and 80%. In contrast to the DL-based systems, Abdelnasser et al. [8] designed WiGest for gesture classification based on mutually independent, distinguishable application actions, which does not require any training. **To the best of our knowledge, we are the first to propose a framework for simultaneous multi-subject activity recognition with Wi-Fi sensing.**

## VI Concluding Remarks

In this paper, we have proposed SiMWiSense, the first framework for simultaneous multi-subject sensing based on Wi-Fi CSI. In contrast to existing approaches, SiMWiSense can classify the activities of multiple subjects in the same environment independently and simultaneously. We also propose FREL, an FSL-based algorithm for fast adaptation to changing data distributions, which makes the system robust to dynamic environments and able to generalize to new environments and subjects with only a few new data samples, achieving an accuracy of up to 98.94%. We have evaluated the efficacy of the overall design using extensive data collected in three different scenarios (classroom, office and kitchen), with three subjects performing 20 activities simultaneously. We demonstrate that SiMWiSense surpasses the traditional CNN-based approach and a state-of-the-art FSL model by 85% and 20% on average.

## VII Acknowledgements

This material is based upon work supported in part by the National Science Foundation (NSF) under Grant No. CNS-2134973 and CNS-2120447. The views and opinions expressed in this work are those of the authors and do not necessarily reflect those of the NSF.
2305.00418
Using Large Language Models to Generate JUnit Tests: An Empirical Study
A code generation model generates code by taking a prompt from a code comment, existing code, or a combination of both. Although code generation models (e.g., GitHub Copilot) are increasingly being adopted in practice, it is unclear whether they can successfully be used for unit test generation without fine-tuning for a strongly typed language like Java. To fill this gap, we investigated how well three models (Codex, GPT-3.5-Turbo, and StarCoder) can generate unit tests. We used two benchmarks (HumanEval and Evosuite SF110) to investigate the effect of context generation on the unit test generation process. We evaluated the models based on compilation rates, test correctness, test coverage, and test smells. We found that the Codex model achieved above 80% coverage for the HumanEval dataset, but no model had more than 2% coverage for the EvoSuite SF110 benchmark. The generated tests also suffered from test smells, such as Duplicated Asserts and Empty Tests.
Mohammed Latif Siddiq, Joanna C. S. Santos, Ridwanul Hasan Tanvir, Noshin Ulfat, Fahmid Al Rifat, Vinicius Carvalho Lopes
2023-04-30T07:28:06Z
http://arxiv.org/abs/2305.00418v4
# Exploring the Effectiveness of Large Language Models in Generating Unit Tests ###### Abstract A code generation model generates code by taking a prompt from a code comment, existing code, or a combination of both. Although code generation models (_e.g._, GitHub Copilot) are increasingly being adopted in practice, it is unclear whether they can successfully be used for unit test generation without fine-tuning. To fill this gap, we investigated how well three generative models (CodeGen, Codex, and GPT-3.5) can generate test cases. We used two benchmarks (HumanEval and Evesuite SF110) to investigate the context generation's effect in the unit test generation process. We evaluated the models based on compilation rates, test correctness, coverage, and test smells. We found that the Codex model achieved above 80% coverage for the HumanEval dataset, but no model had more than 2% coverage for the SF110 benchmark. The generated tests also suffered from test smells, such as Duplicated Asserts and Empty Tests. code generation, unit testing, large language models, test smells, test generation ## I Introduction Automated code generation approaches generate code from _prompts_[1]. These prompts specify the developer's intent and have varying granularity and structure. They may include sentences, code comments, code elements (_e.g._, function signatures, expressions, _etc._), or a combination of these. Hence, developers can write an initial code and/or comment and rely on these tools to generate the remaining code to speed up the software development process [2]. With the release of GitHub Copilot1 in 2021, these techniques are increasingly being adopted in the industry [2]. GitHub Copilot relies on a transformer-based [3] Large Language Model (LLM) fine-tuned for code generation. Footnote 1: [https://github.com/features/copilot](https://github.com/features/copilot) With the increasing popularity of LLMs prior works have investigated the correctness of the generated code [4], their quality (in terms of code smells) [5], security [6] as well as whether it can be used for API learning tasks [7], and code complexity prediction [8]. However, it is currently unclear the effectiveness of using prompt-based pre-trained code generation models for generating _unit tests_. Unit testing is an important software maintenance activity because it helps developers identify and fix defects early on in the development process before they can propagate and cause more significant problems [9, 10, 11]. Moreover, unit tests help developers understand how the various code units in a software system fit together and work as a cohesive whole [12]. Given its importance, prior works (_e.g._, [13]) developed automated test case generation techniques. To better understand the current capabilities of LLMs in generating unit tests, we conducted an empirical study using three LLMs (Codex [14], ChatGPT3.5 [15] and CodeGen [16]) to generate JUnit5 tests for classes in the HumanEval dataset's Java version [17] and in 47 open-source projects from the SF110 dataset [13]. We answered two research questions. In the first question, we used the full class under test as a context for the LLMs to generate unit test cases. In the second research question, we examine how different context styles (_e.g._, only using the method under test, the presence, and absence of JavaDoc _etc._) can influence the generated tests. We examined the produced tests with respect to their compilation rates, correctness, code coverage, and test smells. 
The **contributions** of our work are:

* A systematic study of three LLMs for zero-shot unit test generation for 194 classes from 47 open-source projects in the SF110 dataset [18] and 160 classes from the HumanEval dataset [17].
* An investigation of the quality of the produced unit tests by studying the prevalence of test smells in the unit tests generated by different code generation models.
* A comparison of how different context styles affect the performance of LLMs in generating tests.
* A thorough discussion of the implications of using code generation models for unit test generation in a Test-Driven Development (TDD) environment.
* A replication package with all the scripts used to gather the data and spreadsheets compiling all the results2.

Footnote 2: [https://doi.org/10.5281/zenodo.7875623](https://doi.org/10.5281/zenodo.7875623)

## II Background

In this section, we define the core concepts needed to understand our paper.

### _Unit Tests & Test Smells_

The goal of _unit testing_ is to validate that each program unit works as intended and meets the specified requirements [19]. A _unit_ refers to a piece of code that can be separated and examined independently (_e.g._, functions, methods, classes, _etc._). In this paper, _classes_ are our units under test. Just like production code, unit tests need to be not only _correct_ but must also satisfy other quality attributes, such as _maintainability_ and _readability_ [20]. _Unit test smells_ (henceforth "test smells") are indicators of potential problems, inefficiencies, or bad programming/design practices in a unit test suite [21, 22, 23, 24, 25]. They are often subtle and may not necessarily result in immediate failures or defects, but they can significantly impact the maintainability and effectiveness of the test suite over time [26]. There are many test smell types, ranging from tests that are too slow or fragile to tests that are too complex or too tightly coupled to the implementation of the code under test. For example, the Java code in Listing 1 has a unit test for a method (largestDivisor(int)). It checks whether the Method Under Test (MUT) returns the largest divisor of a number. Although this test is correct, there is no explanation for the expected outputs passed to the assertions, which is a case of the _Magic Number Test_ smell [26]. It also contains multiple test cases under the same test method, an example of the _Assertion Roulette_ smell [24].

Our study was conducted in three steps: (1) we collected Java classes under test from the HumanEval and SF110 benchmarks [38]; (2) we generated JUnit5 tests using three LLMs; (3) we computed the compilation rates, correctness, number of smells, as well as the line/branch coverage for the generated tests and compared them with Evosuite v1.2.0, a state-of-the-art unit test generation tool [13]. Footnote 3: The code formatting is modified for a better presentation.

#### Iii-C1 Data Collection

We retrieved Java classes from: * The **multilingual HumanEval dataset** [17], which contains **160** _prompts_ for Java and other programming languages crafted from the original Python-based HumanEval dataset [14]. However, this multilingual version does not provide an implementation for each prompt (_i.e._, a _canonical solution_). Hence, we wrote the solution for each problem and tested our implementation using the provided test cases. Each solution is encapsulated in a class as a public static method. Listing 2 shows a sample taken from this dataset3, where the prompt is in lines 1-12 and the solution is in lines 13-16.
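For concreteness, a class under test in this style, i.e., a JavaDoc prompt describing the expected behavior together with example input/output pairs, followed by a canonical solution written as a public static method, might look like the following sketch. The specific problem, class name, and example values shown here are illustrative assumptions rather than the dataset's actual sample.

```java
public class LargestDivisor {
    /**
     * For a given number n, find the largest number that divides n evenly,
     * smaller than n.
     * Example:
     * largestDivisor(15) returns 5
     * largestDivisor(6) returns 3
     */
    public static int largestDivisor(int n) {
        // Canonical solution: scan downward from n/2 and return the first divisor found.
        for (int d = n / 2; d >= 1; d--) {
            if (n % d == 0) {
                return d;
            }
        }
        return 1;
    }
}
```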
Listing 3 illustrates the structure of these prompts and contexts, in which lines 1-11 (highlighted in blue) and lines 12-24 are part of the _context_ and the _prompt_, respectively. The context starts with a comment indicating the CUT's file name, followed by the CUT's full code (_i.e._, its package declaration, imports, fields, methods, _etc._). Similarly, the prompt starts with a comment indicating the expected file name of the generated unit test. Since a class can have more than one testable method, we generated one unit test file for each testable method in a CUT and appended a suffix to avoid duplicated test file names. The suffix is a number that starts from zero. After this code comment, the prompt includes the same package declaration and import statements as the CUT. It also has import statements for the @Test annotation and the assert* methods (_e.g._, assertTrue(...)) from JUnit5. Subsequently, the prompt contains the test class' JavaDoc, which specifies the MUT and how many test cases to generate. The prompt ends with the test class declaration followed by a new line (\(\backslash\)n), which triggers the LLM to generate code completing the test class. A minimal illustrative sketch of this layout is given at the end of this section.

We evaluate the generated tests using the following metrics. **Line Coverage** measures the number of lines executed by the unit test out of the total number of lines [43, 44], _i.e._, \(\frac{\text{Number of executed lines}}{\text{Total number of lines}}\times 100\). **Branch Coverage** is the most well-known and practiced metric in software testing [43] and measures how many branches are covered by a test. It is computed as \(\frac{\text{Number of visited branches}}{\text{Total number of branches}}\times 100\). **Test Correctness** measures how effectively an LLM generates correct input/output pairs.
This study assumes that the code under test is implemented correctly. The reasoning behind this assumption is twofold: the HumanEval dataset contains common problems with well-known solutions (which we wrote and tested ourselves), and the SF110 projects are mature open-source projects. Given this assumption, a failing test case is considered to be _incorrect_. Thus, we compute the number of generated unit tests that did not fail. We ran the tests using a timeout of **2** and **10** minutes for the HumanEval and the SF110 datasets, respectively, because we observed generated tests with infinite loops. Moreover, we analyzed the quality of the unit tests from the perspective of test smells. To this end, we used TsDetect, a state-of-the-art tool that detects 20 test smell types [25, 45]. Due to space constraints, we provide a list of the test smells detectable by TsDetect, with their descriptions and sources, in our replication package.

### _RQ2: Code Elements in a Context_

To investigate how different code elements in a context influence the generated unit tests, we first created _scenarios_ for each of the 160 CUTs collected in RQ1 from the multilingual HumanEval dataset [17] and the 194 CUTs from the 47 Java projects in the EvoSuite benchmark dataset [18, 38]. Next, we generated JUnit5 tests for each scenario. Lastly, we computed the same metrics as in RQ1 (Section III-A3). #### Iv-B1 Scenario Creation We created _three_ scenarios for the HumanEval dataset and _four_ for the EvoSuite benchmark. **HumanEval Scenarios**: Recall that each MUT in this dataset has a JavaDoc describing the method's expected behavior and examples of input/output pairs (see Listing 2). Thus, the three scenarios (**S1**-**S3**) are created as follows:

* **S1**: The CUT does not contain any JavaDoc (_e.g._, the JavaDoc from lines 5-11 within Listing 2 is removed from the CUT).
* **S2**: The JavaDoc does not include input/output examples, only the method's behavior description (_e.g._, Listing 2 will not have lines 9 and 10).
* **S3**: The MUT does not include its implementation, only its signature (_e.g._, Listing 2 will not have lines 13 and 14).

The first two scenarios demonstrate the effect of changing the JavaDoc elements. The last scenario is inspired by Test-Driven Development (TDD) [46], in which test cases are written before the code implementation. **SF110 Scenarios**: Unlike HumanEval, the CUTs from SF110 do not necessarily include input/output pairs. Thus, we generated scenarios slightly different from before:

* **S1**: It removes (i) any code within the CUT _before_ and _after_ the method under test as well as (ii) the MUT's JavaDoc.
* **S2**: It is the same as S1, but _includes_ the JavaDoc for the method under test.
* **S3**: It is the same as S2, except that there is no method implementation for the MUT (only its signature).
* **S4**: It mimics S3, but also includes all the fields and the signatures of the other methods/constructors in the CUT.

S1 and S2 demonstrate the effect of having or not having code documentation (JavaDoc). S3 aims to verify the usefulness of LLMs for TDD, whereas S4 is used to understand how other CUT elements help test generation. We followed the same models and steps outlined in Section III-A to generate the unit tests. That is, we generated unit tests for each MUT and scenario combination. Then, we used JUnit5, JaCoCo, and TsDetect to measure test coverage, correctness, and quality. Similar to RQ1, we also compared the results to EvoSuite [13].
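Before turning to the results, the prompt-and-context layout described in Section III-A can be sketched as follows. This is only an illustration of the text fed to the models (context followed by prompt), not an excerpt from the actual prompts: the package, class, and file names, the number of requested test cases, and the exact comment and JavaDoc wording are all assumed here.

```java
// ----- context: a comment with the CUT's file name, followed by the full CUT -----
// LargestDivisor.java
package com.example;

public class LargestDivisor {
    public static int largestDivisor(int n) {
        for (int d = n / 2; d >= 1; d--) {
            if (n % d == 0) { return d; }
        }
        return 1;
    }
}

// ----- prompt: expected test file name (suffix 0), package, imports, JavaDoc, class declaration -----
// LargestDivisorTest0.java
package com.example;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

/**
 * Test class of LargestDivisor.
 * It contains test cases for the method largestDivisor(int).
 */
class LargestDivisorTest0 {
// The prompt text ends after this declaration and a newline; the LLM is expected
// to complete the test methods and the closing brace of the class.
```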
## IV RQ1 Results: Unit Test Generation using LLMs We analyze the generated unit tests according to four dimensions: **(i)**_compilation status_; **(ii)**_correctness_; **(iii)**_coverage_; and **(iv)**_quality_ (in terms of test smells). ### _Compilation Status_ As shown in the second column of Table I, less than 50% of the generated unit tests for the classes in HumanEval are compilable across all the studied models. Only 23.8% of the unit tests generated by CodeGen are compilable. In contrast, Codex and ChatGPT-3.5 had higher compilation rates (\(\approx\) 1.5-1.8 times higher than CodeGen). Codex (4K) was the model with the highest compilation rate (44.4%). For the SF110 dataset, the compilation rates are even lower. Between 2.7% and 21% of the generated unit tests for the SF110 dataset are compilable across all the studied LLMs. CodeGen was the LLM that generated the highest amount of compilable tests (21%), whereas Codex had the lowest compilation rate (2.7% and 3.4% for 2K and 4K versions, respectively). After applying the heuristics described in Section III-A2, we observed that we were able to automatically fix several unit tests (as shown in Table I). For HumanEval tests, the heuristic-based fixes increased the compilation rates by over 55% for Codex, by 38% for ChatGPT, and by 6% for CodeGen for HumanEval. The compilation rates of Codex (2K and 4K) and ChatGPT-3.5 increased from 37.5%, 44.4%, 43.1% to 100%, 99.4%, 81.3%, respectively. In the case of CodeGen, however, not as many samples were repaired; the increase was from 23.8% to 33.1%. This is because CodeGen's unit tests had far more complex compilation errors, such as assigning arrays to collections (and vice-versa), missing identifiers (_i.e._, the compiler cannot find the symbol) _etc_. For the SF110-related tests, the compilation rates increased by over 70% for ChatGPT-3.5 and Codex and by 37.5% for CodeGen. Once again, CodeGen was the model with more complex compilation errors that could not be removed through heuristics. Whereas over 74% and 85% of the unit tests for ChatGPT3.5 and Codex are compilable, CodeGen only has a total of 58.5% compilable tests. The number of unit tests and test methods is shown in the last two columns of Table I. It can be observed that CodeGen has more test files than test methods. It is because some test files contain no test method but a functional test class. In fact, the unit tests that were not fixable through heuristics were those that contained _semantic_ errors that failed the compilation. We collected all the compilation errors and clustered them using K-means. We used the silhouette method to find the number of clusters K (\(K=48\)). Upon inspecting these 48 clusters and making manual adjustments to clusters to correct imprecise clustering, we observed that the top 3 compilation errors for HumanEval were caused by unknown symbols (_i.e._, the compiler cannot find the symbol), incompatible types, and no suitable method/constructor found for an invocation/instantiation. Among the errors caused by incorrect method invocations, 51% of them were invocations to an assertion method, _e.g._, assertEquals(). In the case of SF110 tests, the top 3 compilation errors were unknown symbols (_i.e._, the compiler cannot find the symbol), class is not abstract and does not override abstract method. and class is abstract; cannot be instantiated. This differs from what we observed in HumanEval; two reoccurring problems are related to inheritance/polymorphism. 
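To make these recurring error categories concrete, the snippets below are constructed illustrations of statements that fail to compile for the reasons listed above; they are not excerpts from the generated tests. The offending lines are commented out, each with the kind of compiler message it would produce.

```java
import java.util.ArrayList;
import java.util.List;

class CompilationErrorExamples {
    void examples() {
        // Unknown symbol: the referenced type does not exist on the default classpath.
        // Pair<Integer, Integer> p = new Pair<>(1, 2);   // error: cannot find symbol

        // Incompatible types: an array assigned where a collection is expected.
        // List<Integer> xs = new int[] {1, 2, 3};        // error: incompatible types

        // No suitable constructor found for the instantiation.
        // List<Integer> ys = new ArrayList<>(1, 2, 3);   // error: no suitable constructor found

        // Inheritance/polymorphism issues of the kind seen for SF110:
        // class BrokenTask implements Runnable { }       // error: not abstract and does not override run()
        // Number n = new Number();                       // error: Number is abstract; cannot be instantiated
    }
}
```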
Consequently, for HumanEval, we obtained a total **3,432** test methods (_i.e._, a method with an @Test annotation) scattered across **978** compilable Java test files. For SF110, we had **2,022** test methods and **600** compilable tests. The breakdown per model and dataset is shown in Table I. For comparison, we run Evosuite [13] (with default configuration parameters) to generate unit tests for each of the CUTs. Moreover, in the case of HumanEval, we manually created a JUnit5 test for each input/output pair provided in each prompt (one test method per input/output pair). It is worth highlighting that whereas Codex and ChatGPT generated _one_ unit test suggestion per prompt, CodeGen generated _ten_ suggestions per prompt. Thus, CodeGen is expected to produce more test files than the number of prompts. ### _Test Correctness_ We considered a unit test to be _correct_ if it had a success rate of 100% (_i.e._, _all_ of its test methods passed) whereas a _somewhat_ correct unit test is one that had _at least one_ passing test method. Both metrics are reported in Table II. #### Iv-B1 Results for HumanEval dataset Codex (2K) generated the highest amount of correct unit tests (\(\approx\)78%), whereas CodeGen generated the least amount of correct unit tests (24%). It is worth mentioning that although ChatGPT only produced 52% correct unit tests, it was the model that generated the highest amount of tests that have at _at least one_ passing test method (92.3%). From these results, we can infer that although all the models could not produce correct tests, they could still be useful in generating at least a few viable input/output pairs. We also found that increasing Codex's token size did not yield higher correctness rates. #### Iv-B2 Results for SF110 dataset The best-performing model for the SF110 dataset was Codex (2K) which produced 46.5% correct tests. Yet, the achieved correctness rates are rather low. Less than 50% of the produced tests are correct. CodeGen was the least performing model, producing only 6.9% correct tests. Even when considering the unit tests that produced at least one passing test case (somewhat correct), only up to 63% fulfill this criterion. Once again, Codex (2K) was the best-performing LLM, whereas CodeGen was the worst. ### _Test Coverage_ We measured the generated unit tests' line and branch coverage and compared them with the coverage for the tests generated by Evosuite [13]. For HumanEval, we also compared the coverage of the manually created tests. #### Iv-B1 Results for HumanEval dataset Table III shows the line and branch coverage for the HumanEval dataset, computed considering all the CUTs in the dataset. The results show that the LLMs achieved line coverage ranging from **58.2%** to **87.7%** and branch coverage ranging from **54.7%** to **92.8%**. Codex (4K) exhibited the highest line and branch coverage of 87.7% and 92.8%, respectively. However, the coverage of the unit tests generated by LLMs are below the coverage reported by the manual tests and those generated by Evosuite. In fact, Evesuite, which relies on an evolutionary algorithm to generate JUnit tests, has a higher line and branch coverage than the manually written tests. #### Iv-C2 Results for SF110 dataset The test coverage for SF110 is drastically worse when compared to HumanEval. In fact, ChatGPT and CodeGen both achieved a 0% branch coverage and less than 1% line coverage. Among the LLMs, Codex (2K) was the best performing one with 2.5% and 1.4% line and branch coverage. 
Yet, these coverages are \(\approx\)11-19\(\times\) lower than the coverage achieved by Evosuite's tests.

### _Test Smells_

#### Iv-D1 Results for HumanEval dataset

Table IV shows the distribution of test smells for the different LLMs6. The LLMs produced the following smells: Assertion Roulette (AR) [24], Conditional Logic Test (CLT) [26], Constructor Initialization (CI) [25], Empty Test (EM) [25], Exception Handling (EH) [25], Redundant Print (RP) [25], Redundant Assertion (RA) [25], Sensitive Equality (SE) [24], Sleepy Test (ST) [25], Eager Test (EA) [24], Lazy Test (LT) [24], Duplicate Assert (DA) [25], Unknown Test (UR) [25], Ignored Test (IT) [25], and Magic Number Test (MNT) [26]. We found that Magic Number Test (MNT) and Lazy Test (LT) are the two most frequently recurring test smell types across _all_ the approaches, _i.e._, in the unit tests generated by the LLMs and Evosuite as well as the ones created manually. The **MNT** smell occurs when the unit test hard-codes a value in an assertion without a comment explaining it, whereas the **LT** smell arises when multiple test methods invoke the same production code.

Footnote 6: We hide the _Default Test_, _General Fixture_, _Mystery Guest_, _Verbose Test_, _Resource Optimism_, and _Dependent Test_ smell types because they did not occur in any of the listed approaches.

Whereas Codex and ChatGPT-3.5 did not produce unit tests with the Exception Handling (EH) smell, this smell type was frequent in all manually created test cases and in the ones generated by Evosuite. The **EH** smell also occurred in 7.6% of the (compilable) unit tests generated by CodeGen. This smell occurs when the test method itself captures exceptions to pass/fail a test instead of using the expected attribute of the @Test annotation. Assertion Roulette (AR) is a common smell produced by the LLMs (frequencies between 23.8% and 61.3%) and also occurred in 15% of the tests generated by Evosuite. This smell occurs when the same test method invokes assert statements to check different input/output pairs without including an error message for each assert. Similarly, the LLMs and Evosuite also produced unit tests with the Eager Test smell (EA), in which a single test method invokes different methods of the production class, as well as the Duplicate Assert smell (DA), caused by multiple assertions for the same input/output pair. _2) Results for SF110 dataset:_ The smells detected for the SF110 tests are listed in Table V. Similar to HumanEval, the Magic Number Test, Assertion Roulette, and Eager Test smells frequently recur for both the LLMs and Evosuite. Unlike Evosuite, the LLMs produced more Empty Test (EM) (28.7%) and Constructor Initialization (CI) (9.3%) smells.

## V RQ2 Results: Code Elements in a Context

Similar to RQ1, we investigated how code elements in a context influence the generated unit tests with respect to their _compilation status_, _correctness_, _coverage_, and _quality_. ### _Compilation Status_ #### Iv-A1 HumanEval Results Fig. 1 shows the compilation rates for the HumanEval dataset across the different scenarios and LLMs. We observed that scenario 3 increased the original (S0) compilation rates for CodeGen and Codex (2K and 4K) from 23.8%, 37.5%, and 44.4% to **30.8%**, **53.8%** and **53.1%**, respectively. Although scenario 3 increased the original compilation rates (blue bars in Fig. 1), these tests have similar heuristic-based fix rates (except for CodeGen).
In the case of CodeGen, S3 achieved a final compilation rate after applying heuristics equal to **35.4%**, which is **2%** higher than fixing the outputs generated from the original prompt (**33.1%**). ChatGPT-3.5, on the other hand, experienced a sharp decrease from 43.1% to 2.5% for S3. Upon further inspection, we found that scenario 3 triggered ChatGPT-3.5 to include the original class under test in its entirety, followed by the unit test. This resulted in two package declarations in the produced output: one placed in the very first line (corresponding to the CUT's package) and the other placed after the CUT for the unit test's package. These duplicated package declarations lead to compilation errors. These issues were later fixed by applying heuristic **H3**. #### Iv-A2 SF110 Results S2 increased the original (S0) compilation rates for ChatGPT-3.5 and Codex (4K), as shown in Fig. 1. However, scenarios 3 and 1 were the best performers for CodeGen and Codex (2K), respectively. Hence, no scenario achieved a consistently best performance overall. ### _Test Correctness_ #### Iv-B1 HumanEval Results Fig. 2 depicts the percentage of unit tests generated by the models that are _correct_. Among all scenarios, scenario 3 had a correctness rate similar to that of the original prompt used in RQ1 for ChatGPT-3.5 and Codex (2K, 4K). In the case of CodeGen, it produced **11.7%** more correct tests (**35.9%**). It is important to highlight that although ChatGPT-3.5 only had 73.8% compilable tests in scenario 3 (compared to 81.3% for the original prompt), it still had a similar correctness rate. Yet, the original prompt is the one with the highest correctness rates. #### Iv-B2 SF110 Results As shown in Fig. 2, while the original prompt achieved the highest correctness rate for ChatGPT-3.5 (6.9%), the other LLMs showed a correctness increase when using the context from scenario 3. Codex (4K) experienced the highest increase (from 37.9% to 50.7%). ### _Test Coverage_ _1) HumanEval Results:_ The observed line and branch coverage for each scenario and LLM is shown in Fig. 3, which shows no clear trend in terms of consistent scenario performance. In the case of Codex and CodeGen, scenario 1 had the highest line coverage among the different scenarios for these models. ChatGPT-3.5, on the other hand, had scenario 2 as the one with the highest line coverage. With respect to _branch_ coverage, we found that scenario 3 was the best-performing one for Codex, scenario 1 was the best one for CodeGen, and scenario 2 was the best one for ChatGPT-3.5. None of the scenarios for Codex outperformed the line/branch coverage of the original prompts or the coverage achieved by the manual and Evosuite tests. _2) SF110 Results:_ Among all scenarios, scenario 2 had a slightly higher line coverage (a 0.4%-2.1% increase) compared to the original prompt used in RQ1 for all LLMs. In the case of branch coverage, S1 had slightly higher coverage for ChatGPT-3.5, whereas S2 was the best one for the remaining LLMs. However, these increases are still much lower than Evosuite's test coverage, which achieved \(\approx\) 27% line and branch coverage.

Fig. 1: Compilation rates for HumanEval and SF110 across different scenarios.

Fig. 2: Correctness rates across different datasets, scenarios, and LLMs.

Fig. 3: Line and Branch Coverage across different datasets, scenarios, and LLMs (EVS = Evosuite; MNL = Manual).
### _Test Smells_ _1) HumanEval Results:_ Table VI shows the distribution of smells for different scenarios and LLMs. The cells highlighted in green are those in which the percentage is lower than the original context, whereas those highlighted in red have a higher percentage than the original context. In terms of smell types, all scenarios have the same smell types that occurred in the original prompts (see Table IV). Moreover, we also observe that, overall, the scenarios tended to decrease the incidence of generated smells. When comparing each scenario to one another, there is no clear outperformer across all the LLMs. Yet, Scenario 3 for ChatGPT-3.5 and CodeGen had higher percentages than the original context, on average. Although the average increases are not significant (0.6% and 0.2% for these LLMs, respectively). _2) SF110 Results:_ As shown Table VII, there is not any scenario that consistently outperforms the other. However, we can observe that CodeGen and ChatGPT produce more test smells across the scenarios, as we can see from the cells highlighted in red. ## VI Results Summary and Implications In this section, we summarize the findings and provide the implications of each of them. **- LLMs vs. Evosuite**: Across all the studied dimensions, LLMs performed worse than Evosuite. One reason is that LLMs do not always produce compilable unit tests, as we showed in Table I. For example, while Evosuite produced one unit test for each of the 160 classes under test, ChatGPT3.5 only produced **130** compilable (_i.e._, executable) unit tests. Another reason is that LLMs do not seem to pay attention to the current MUT's implementation. A piece of evidence for this is that scenario 3 (which does not include the MUT's implementation) has better compilation rates than the rest. However, we also observed that ChatGPT generated test cases for "stress-testing", such as using Integer.MAX_VALUE and similar inputs in order to test for the MUT's behavior in the face of exceptionally large inputs. **- CodeGen and Codex are the least and best performing LLMs, respectively**. This can be explained by the fact that CodeGen is a smaller model with 350 million parameters trained with 341.1 Gb of data, compared to Codex and ChatGPT-3.5, which has 12 billion and 175 billion parameters, respectively and trained with over 40 terabytes of data. Moreover, besides being a more powerful model, Codex is an LLM fine-tuned for code-related tasks in contrast to ChatGPT, which is tailored to dialogues (natural language). **- LLMs often "hallucinate" inexistent types, methods, _etc._. For both datasets, the most common compilation error was due to missing symbols. For instance, Codex generated inputs whose type were Tuple, Pair, Triple, Quad, and Quint, which are non-existent in Java's default classpath. **- LLMs may not reason over path feasibility**. We observed 3 unit tests from CodeGen getting stuck in an infinite loop. We also found that when a method's static return type is a supertype (_e.g._, Set), but its actual implementation only returns two specific runtime types (_e.g._, HashSet and TreeSet), the LLMs were often unable to generate expected outputs whose types are only the actual feasible runtime types. Besides, we found that LLMs do not understand type erasure and Java Generics. LLMs attempted to assign arrays to lists (and vice-versa) or used incorrect types that violate the upper/lower bounds of the generic type. **- Synergy between LLMs and TDD**. 
Although LLMs were not capable of achieving coverages or compilation rates comparable to Evosuite, the LLMs can still be useful as a starting point for TDD. As we showed in our RQ2, LLMs can generate tests based on the MUT's JavaDoc. However, given the low correctness rates of LLMs, developers would still need to adjust the generated tests manually. Given these findings, we observe a need for future research to focus on helping LLMs in reason over data types and path feasibility, as well as exploring the combination of SBST and LLMs for TDD. Furthermore, a recent study [2] surveyed 2,000 developers and analyzed anonymous user data, showing that GitHub Copilot makes developers more productive because the generated code can automate repetitive tasks. Thus, our findings provide some initial evidence that _practitioners_ following a TDD approach could benefit from LLM-generated tests as a means to speed up their testing. Although further user studies would be needed to verify this hypothesis. ### _Threats to Validity_ Creating canonical solutions for the Java samples in the HumanEval dataset [17] introduced an internal validity threat. To mitigate it, we extensively vetted our solution with a test set provided by the dataset creator; they passed the test cases without any problem. Another validity threat relates to the use of SF110 benchmark [13], JaCoCo [42] for calculating coverage results and TsDetect [45] for finding test smells. In this case, our analyses depend on the representativeness of the SF110 dataset (construct validity threat) and accuracy of these tools. However, the SF110 dataset is commonly used to benchmark automated test generation tools [13, 47, 48] and the used tools are well-known among researchers and practitioners [49, 50]. ## VII Related Work Previous works have focused on creating source code that can do a specific task automatically (code generation). The deductive synthesis approach [51, 52], in which the task specification is transformed into constraints, and the program is extracted after demonstrating the satisfaction of the constraints, is one of the foundations of program synthesis [53]. Recurrent networks were used by Yin _et al._[54] to map text to abstract syntax trees, which were subsequently coded using attention. A variety of large language learning models have been made public to generate code (e.g., CodeBert [35], CodeGen [16] and CodeT5 [37]) after being refined on enormous code datasets. Later, GitHub Copilot developed an improved auto-complete mechanism using the upgraded version of Codex [14], which can help to solve fundamental algorithmic problems [4]. Recent works [55, 56, 57] focus on optimizing the process to create, fine-tune, and infer the Large Language Models-based code generation techniques. Using large language models for software test generation is not that common. However, they can be used for downstream tasks, for example, flaky test prediction [58]. However, recent work uses GPT-3 [31] for software graphical interface testing [59]. Our work focuses not on code generation but on how a publicly available code generation tool can be used for specialized tasks like unit test generation without fine-tuning (_i.e._,, zero-shot test generation). Shamshiri _et al._[11] proposed a search-based approach that automatically generates tests that can reveal functionality changes, given two program versions. 
On the other hand, Tufano _et al._[60] proposed an approach that aims to generate unit test cases by learning from real-world focal methods and developer-written test cases. Pacheco _et al._[61] presented a technique that improves random test generation by incorporating feedback obtained from executing test inputs as they are created for generating unit tests. Pecorelli _et al._[62] conducted an empirical study on software testing for Android applications about finding effectiveness, design, and bugs in the production code. In our work, we focus on zero-shot unit test generation using different contexts in order to measure the LLM's ability to generate compilable, correct and smell-free tests. Schafer _et al._[63] used Codex [14] to automatically generate unit tests using an adaptive approach. They used 25 npm packages to evaluate their tool, TESTPIGLOT. However, they evaluated their model only on statement coverage. They did not provide insight into the quality of the generated test cases and the choice of using a specific prompt structure. Lemieux _et al._[64] combined the Search-based software testing (SBST) technique with the LLM approach. It explored whether Codex can be used to help SBST's exploration. Nashid _et al._[65] aimed to devise an effective prompt to help large language models with different code-related tasks, _i.e._, program repair and test assertion generation. Their approach provided examples of the same task and asked the LLM to generate code for similar tasks. Bareiss _et al._[66] performed a systematic study to evaluate how a pre-trained language model of code, Codex, works with code mutation, test oracle generation from natural language documentation, and test case generation using few-shot prompting like Nashid _et al._[65]. However, the benchmark has only 32 classes, so the findings may not be generalized. This work provides direction toward using examples of usage or similar tasks as a context. However, in a real case, there may not be any example of using the method and class that can be used in the prompt, and creating an example of a similar task needs human involvement. Our work focused on different contexts taken from the code base. We evaluated the quality of the generated unit tests not only on coverage and correctness but also based on the presence of test smells. ## VIII Conclusion We investigated the capability of three code generation models for unit test generation. We conducted experiments with different contexts in the prompt and compared the result based on compilation rate, test correctness, coverage, and test smells. These models have a close performance with the state-of-the-art test generation tool for the HumanEval dataset, but their performance is poor for open-source projects from Evosuite based on coverage. Though our developed heuristics can improve the compilation rate, several generated tests were not compilable. Moreover, they heavily suffer from test smells like Assertion Roulette and Magic Number Test. In future work, we will explore how to enhance LLMs to understand language semantics better in order to increase test correctness and compilation rates.
2307.16403
Spin decoherence in VOPc@graphene nanoribbon complexes
Carbon nanoribbon or nanographene qubit arrays can facilitate quantum-to-quantum transduction between light, charge, and spin, making them an excellent testbed for fundamental science in quantum coherent systems and for the construction of higher-level qubit circuits. In this work, we study spin decoherence due to coupling with a surrounding nuclear spin bath of an electronic molecular spin of a vanadyl phthalocyanine (VOPc) molecule integrated on an armchair-edged graphene nanoribbon (GNR). Density functional theory (DFT) is used to obtain ground state atomic configurations. Decay of spin coherence in Hahn echo experiments is then simulated using the cluster correlation expansion method with a spin Hamiltonian involving hyperfine and electric field gradient tensors calculated from DFT. We find that the decoherence time $T_2$ is anisotropic with respect to magnetic field orientation and determined only by the hydrogen nuclear spins both on VOPc and GNR. Large electron spin echo envelope modulation (ESEEM) due to nitrogen and vanadium nuclear spins is present at specific field ranges and can be completely suppressed by tuning the magnetic field. The relation between these field ranges and the hyperfine interactions is analyzed. The effects of interactions with the nuclear quadrupole moments are also studied, validating the applicability and limitations of the spin Hamiltonian when they are disregarded.
Xiao Chen, James N. Fry, H. P. Cheng
2023-07-31T04:55:05Z
http://arxiv.org/abs/2307.16403v1
# Spin decoherence in VOPc@graphene nanoribbon complexes ###### Abstract Carbon nanoribbon or nanagraphene qubit arrays can facilitate quantum-to-quantum transduction between light, charge, and spin, making them an excellent testbed for fundamental science in quantum coherent systems and for the construction of higher-level qubit circuits. In this work, we study spin decoherence due to coupling with a surrounding nuclear spin bath of an electronic molecular spin of a vanadyl phthalocyanine (VOPc) molecule integrated on an armchair-edged graphene nanoribbon (GNR). Density functional theory (DFT) is used to obtain ground state atomic configurations. Decay of spin coherence in Hahn echo experiments is then simulated using the cluster correlation expansion method with a spin Hamiltonian involving hyperfine and electric field gradient tensors calculated from DFT. We find that the decoherence time \(T_{2}\) is anisotropic with respect to magnetic field orientation and determined only by the hydrogen nuclear spins both on VOPc and GNR. Large electron spin echo envelope modulation (ESEEM) due to nitrogen and vanadium nuclear spins is present at specific field ranges and can be completely suppressed by tuning the magnetic field. The relation between these field ranges and the hyperfine interactions is analyzed. The effects of interactions with the nuclear quadrupole moments are also studied, validating the applicability and limitations of the spin Hamiltonian when they are disregarded. ## I Introduction Synthesis of smooth-edge carbon nanoribbons (CNR) was first reported in 2008[1]. Soon after that, room temperature bottom-up fabrication techniques allowed ribbon growth with atomic precision[2]. Over more than a decade, much effort was made to engineer electronic properties of CNR by modifying the edge states[3; 4; 5; 6]. In 2016, Li _et al.[7]_ improvised an efficient bottom-up procedure to synthesize armchair CNRs from molecular precursors via a polymerization method. Later, Slota _et al.[8]_ studied coherence control using graphene ribbons with magnetic edges realized by stable spin-bearing radical groups. In their system, long range magnetic exchange coupling was observed, and spin coupling pathways were analysed from multi-frequency electron spin resonance. This work suggested that one might be able to attach magnetic molecules to nanoribbons and create a stable quasi-1D spin array. For qubit applications, one of the desirable features of a molecular qubit is to have spins localized in an individual molecule, from which well defined spin dimers (two qubits), trimers (three qubits), and chains of spins (qubit arrays) with coupling between the qubits can be constructed. Magnetic molecules such as vanadyl phthalocyanine (VOPc) are believed to be promising spin qubit candidates that can be used in quantum information sciences[9]. One of the competitive advantages of magnetic molecules compared to other solid-state spin qubits such as NV centers in diamond[10; 11] and phosphorus impurities in silicon[12] is that the properties of molecular spins can be flexibly engineered in synthetic chemistry by choosing various metal centers and modifying peripheral ligands. In addition, molecules are naturally monodisperse and molecular arrays allow qubits to follow a much more ordered lattice than defects in crystalline materials. 
To make use of magnetic molecular spins as qubits in quantum computing, one must find a way to create tunable and controllable couplings between different molecular spins[9; 13] in order to realize multi-qubit gates[14; 15]. Otherwise, universal quantum gates[14], a set of quantum gates capable of creating entanglement between qubits that can serve as the building blocks from which any quantum gate can be constructed, are not possible. As part of the effort to fulfill this requirement, a first step is to find a microscopic structure such as the molecular spin chain above that can stably hold these magnetic molecules. Recently it was proposed that VOPc molecules can be integrated onto graphene nanoribbons with a structure as shown in Fig. 1. A recent development in synthetic chemistry has made the synthesis of this architecture possible with atomically precise control of the total length of the repeated one dimensional structure and the spacing between VOPcs[16; 17]. Qubit arrays anchored on (we denote as @) nanagraphene[8; 18] facilitate quantum-to-quantum transduction between light, charge, and spin, making them an excellent testbed for fundamental science in quantum coherent systems and for the construction of higher-level qubit circuits. The coupling and coherence between the molecular qubits can be controlled by varying the nanagraphene size, length, and edge sites. Multiple-molecules@nanagraphene qubit arrays can then be integrated into scalable optical, microwave, and electrical architectures to construct functional qubit circuits. These qubit@nanagraphene arrays can also be left free in a solution or a gas, where they can be functionalized to bind to specific target sites for biological or chemical sensing[19]. Qubit (or spin, for spin qubits) decoherence[20], the loss of relative phase between a superposition of qubit states that causes loss of quantum information, is one of the main constraining factors for the realization of quantum computing, or in general, quantum information sciences. Therefore, properties of decoherence need to be studied when proposing any quantum computing architecture. For molecular spin qubits, the decoherence time \(T_{2}\) contributed by dynamical quantum noises from the environment is conventionally measured in Hahn-echo experiments [21; 22; 23]. Hahn echo is the simplest example of dynamical decoupling (DD). The key idea of the DD approach is to decouple a qubit from the source that causes decoherence by dynamically averaging out the noise from the environment via frequently flipping the spin [22]. DD is a powerful method for suppressing the spin decoherence and has been widely proposed for quantum computing [24; 25; 26; 27]. In order to compare with experiments and considering the promising application of DD, the study of qubit decoherence usually includes a scenario with DD present. [28; 29; 30; 31; 22; 22; 33; 24; 25; 26; 27; 28; 29; 30; 31] Inspired by promising advances in experiments and a lack of theoretical understanding for real-life material, as a first step, in this paper, we study qubit (spin) decoherence in the system of a single VOPc molecule integrated onto an infinitely long graphene nanoribbon (VOPc@GNR). We focus on the pure dephasing regime where spin decoherence is due to the coupling of the molecular electron spin qubit to nuclear spins in the environment, which are the main decoherence source at low temperature where spin-phonon relaxation is suppressed. 
We use a combination of density functional theory (DFT) and the reduced density matrix (RDM) method to optimize atomic configurations, calculate electronic structure and parameters needed for spin Hamiltonians, and simulate the spin decoherence in Hahn-echo experiments. We report our results from simulations, compare them with a simple analytical product rule model, and discuss underlying physics. The paper is organized as follows: in Sect. II, we describe the first-principles methods we use to obtain interactions for the spin Hamiltonian and the cluster correlation expansion method used to calculate the coherence function. In Sect. III, we report our results for the Hahn-echo coherence functions and discuss the implications of and reasons for certain physical behaviors. This section includes atomic and electronic structures, spin decoherence, two-spin model studies, and effects of nuclear quadrupole moments. Final conclusions are in section IV. ## II Method We use the Vienna Ab initio Simulation Package (VASP) [32; 33; 34] to perform Density Functional Theory (DFT) calculations of the atomic configuration and electronic structure of the VOPc@GNR system. The infinitely long graphene nanoribbon (GNR) we consider is the 6-AGNR as in Fig. 1, which has six rows of carbon atoms with armchair edges, a GNR recently synthesized with atomic precision [16; 18]. The rectangular simulation cell is chosen to have dimensions of \(51.66\times 35\times 20\,\)A along the \(x\)-, \(y\)- and \(z\)-directions, where the quasi-one-dimensional GNR is along \(x\). These dimensions ensure that the part of the GNR in the cell far from the attached VOPc molecule approaches the structure of free GNR and that vacuum in \(y\)- and \(z\)-directions between the VOPc@GNR system and its periodic images is adequately thick. Given the large cell size, first Brillouin zone integrations are done using a \(\Gamma\) point \(k\)-point grid. The energy cutoff of the plane-wave basis was set to \(600\,\)eV throughout all DFT calculations. The total energy tolerance for electronic self-consistency and the force tolerance for ionic relaxation are set to \(1\times 10^{-8}\,\)eV and \(0.01\,\)eV/A, respectively. During ionic relaxation, we adopt the Perdew-Burke-Ernzerhof (PBE) exchange correlation energy functional [35]. After ionic relaxation is complete we use the PBE+\(U\) method for electronic structure calculations to account for localization of electrons on the vanadium atom, with Hubbard \(U\) set to \(4.3\,\)eV, according to previous linear response calculations in the literature [36]. In the simulation of spin decoherence, the GNR in the relaxed atomic configuration in the DFT simulation cell is extended on both sides to infinity in the \(x\)-direction with the free structure of the GNR. The hyperfine interaction and electric field gradient (EFG), which are useful in constructing spin Hamiltonians, are calculated using built-in routines of VASP, where the hyperfine interaction tensor for a nuclear spin \(\mathbf{I}\) at position \(\mathbf{R}_{I}\) is the sum of Fermi contact and dipolar terms, which in Cartesian components are given by \[(A_{FC}^{I})_{ij}=\frac{2}{3}\frac{\mu_{0}\gamma_{e}\gamma_{I}}{\langle Sz \rangle}\delta_{ij}\int\delta_{T}(\mathbf{r})\rho_{s}(\mathbf{r}+\mathbf{R}_{I})\,d\mathbf{r} \tag{1}\] and \[(A_{D}^{I})_{ij}=\frac{\mu_{0}}{4\pi}\frac{\gamma_{e}\gamma_{I}}{\langle S_{z }\rangle}\int\frac{\rho_{s}(\mathbf{r}+\mathbf{R}_{I})}{r^{3}}\,\frac{3r_{i}r_{j}- \delta_{ij}r^{2}}{r^{2}}\,d\mathbf{r}. 
\tag{2}\] In these expressions, \(r_{i}\) are the components of the position vector \(\mathbf{r}\), with \(r=|\mathbf{r}|\), \(\rho_{s}\) is the spin density, \(\mu_{0}\) the magnetic susceptibility of free space, \(\gamma_{e}\) the electron gyromagnetic ratio, \(\gamma_{I}\) the nuclear gyromagnetic ratio of the nucleus, and \(\langle S_{z}\rangle\) the expectation value of the \(z\)-component of the electronic spin. \(\delta_{T}(\mathbf{r})\) is a smeared \(\delta\) Figure 1: VOPc integrated periodically onto graphene nanoribbons. function, as described in the Appendix of Ref. [37]. The core contribution [38] to the Fermi contact part of the hyperfine interaction has been included in computing \(A_{FC}\) in eq. (1). We use the hyperfine interaction tensor calculated from DFT only for the nuclear spins on the VOPc molecule, since the electron spin density is highly localized around the vanadium center on the VOPc molecule. The nuclear spins on the GNR are from hydrogen nuclei (C nuclei are purely spinless \({}^{12}\)C due to the 1.1% low natural abundance of spin-1/2 \({}^{13}\)C), at a greater distance from the molecular electronic spin than those in VOPc. For the hyperfine interaction of these hydrogen nuclei, the magnetic point dipole-dipole interaction was adopted. We note that spin-orbit coupling is not included in our DFT calculations unless explicitly mentioned. Calculations of coherence functions of the VOPc molecular spin in VOPc@GNR are conducted using the cluster correlation expansion (CCE) method [28; 29], as implemented in the PyCCE code [39]. The spin Hamiltonian describing the dynamics of the electronic spin-1/2 center, which we will call the central spin, interacting with a bath of nuclear spins in the presence of an external magnetic field takes the form \[H=H_{S}+H_{B}+H_{SB}, \tag{3}\] where the terms describing the central spin and its interactions with the surrounding bath nuclear spins are \[H_{S} = -\gamma_{e}\mathbf{B}\cdot\mathbf{\hat{S}},\] \[H_{SB} = \sum_{i}\mathbf{\hat{S}}\cdot\mathbf{A}_{i}\cdot\hat{\mathbf{I}}_{i}. \tag{4}\] The gyromagnetic ratio of the central spin \(\gamma_{e}\) is assumed to take the value for a free electron. The hyperfine interaction tensors \(\mathbf{A}_{i}\) are calculated as described above. The VOPc@GNR system we study includes nuclear spin operators \(\mathbf{I}_{i}\) for one spin-7/2 vanadium (V) nucleus and eight spin-1 nitrogen (N) nuclei as well as all nuclear spin-1/2 hydrogen (H). The Hamiltonian for the bath is given by \[H_{B}=-\sum_{i}\gamma_{i}\mathbf{B}\cdot\hat{\mathbf{I}}_{i}+\sum_{i}\hat{\mathbf{I}}_{i} \cdot\mathbf{P}_{i}\cdot\hat{\mathbf{I}}_{i}+\sum_{i<j}\hat{\mathbf{I}}_{i}\cdot \mathbf{J}_{ij}\cdot\hat{\mathbf{I}}_{j}. \tag{5}\] The first term is the Zeeman energy with \(\gamma_{i}\) the gyromagnetic ratio of nuclear spin \(i\). The second term is the nuclear quadrupole interaction (NQI), which is present only for nuclear spins with spin quantum number larger than one half, here the V and N nuclear spins in VOPc@GNR. The quadrupole interaction tensor \(\mathbf{P}_{i}\) is computed from the EFG tensor obtained from DFT. The third term is the magnetic point dipolar interaction between nuclear bath spins. The combined central spin and bath system is initially prepared in a product state of the form \[\hat{\rho}(0)=\hat{\rho}_{S}(0)\otimes\hat{\rho}_{B}(0), \tag{6}\] where \(\hat{\rho}_{S}(0)\) and \(\hat{\rho}_{B}(0)\) are reduced density operators at \(t=0\) of the central spin and the bath, respectively. 
\(\hat{\rho}_{S}(0)\) represents a pure state of an equal superposition of qubit states chosen as the two central spin eigenstates of \(H_{S}\), \[\hat{\rho}_{S}(0)=|\psi\rangle\langle\psi|,\qquad|\psi\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle), \tag{7}\] where \(|0\rangle\) and \(|1\rangle\) are the \(m_{s}=+\frac{1}{2}\) and \(m_{s}=-\frac{1}{2}\) eigenstates of the central spin, and \(\hat{\rho}_{B}(0)\) is a product state of reduced density operators of individual nuclear spins, \[\hat{\rho}_{B}(0)=\otimes_{i}\hat{\rho}_{i}, \tag{8}\] with each nuclear spin assumed to be in a fully random (maximally mixed) state: \(\hat{\rho}_{i}=\hat{I}_{0}/(2I+1)\), where \(I\) is its spin quantum number and \(\hat{I}_{0}\) the identity operator. This assumption is justified, as we will consider temperatures much larger than the nuclear spin Zeeman energies, which are on the order of \(10^{-5}\)-\(10^{-2}\,\mathrm{K}\). Spin decoherence is studied by simulating the time dependence of the normalized coherence function of the central spin, defined by the off-diagonal elements of its reduced density matrix, as in a Hahn-echo experiment, \[L(t=2\tau)=\frac{\langle 1|\hat{\rho}_{S}(t)|0\rangle}{\langle 1|\hat{\rho}_{S}(0)|0\rangle}, \tag{9}\] where \(\tau\) is the pulse delay time. The decay of the coherence function, or decoherence, is a loss of information on the relative phase between the two qubit states in superposition and a breakdown of the superposition itself. The lifetime of this decay in Hahn-echo experiments is commonly called the Hahn-echo \(T_{2}\), or simply \(T_{2}\), in the literature. For a large bath of a few hundred spins or more, the coherence function can be efficiently calculated with the CCE method. The key idea of the CCE is that the decoherence of a central electron spin due to interaction with a nuclear spin bath can be exactly expanded as a product of contributions from irreducible correlations of bath-spin clusters [28; 29; 39], \[L(t)=\tilde{L}_{\{\emptyset\}}(t)\prod_{\{i\}}\tilde{L}_{\{i\}}(t)\prod_{\{ij\}}\tilde{L}_{\{ij\}}(t)\prod_{\{ijk\}}\tilde{L}_{\{ijk\}}(t)\ldots, \tag{10}\] where \(\tilde{L}_{\{\emptyset\}}(t)\) is the phase factor of the free evolution of the central spin, \(\tilde{L}_{\{i\}}(t)\) is the contribution from the single bath spin \(i\), \(\tilde{L}_{\{ij\}}(t)\) is the contribution from the unordered spin pair \(\{ij\}\), and \(\tilde{L}_{\{ijk\}}(t)\) from a cluster of three different spins, _etc_. The irreducible correlation of a cluster is defined iteratively as [28; 29; 39] \[\tilde{L}_{C}=\frac{L_{C}}{\prod_{C^{\prime}\subset C}\tilde{L}_{C^{\prime}}}, \tag{11}\] where \(L_{C}\) is the coherence function of the central spin if only the terms in the spin Hamiltonian (3) containing the central spin \(\mathbf{\hat{S}}\) and bath spins \(\hat{\mathbf{I}}_{i}\) in cluster \(C\), but no other bath spins, are present. We label the sum of these terms as \(H_{C+S}\). To simulate a Hahn-echo experiment, \[L_{C}(t=2\tau)=\big\langle 0\big|\operatorname{Tr}_{C}[\hat{U}_{C+S}(t)\,\hat{\rho}_{C+S}\,\hat{U}_{C+S}^{\dagger}(t)]\big|1\big\rangle, \tag{12}\] \[\hat{U}_{C+S}(t)=e^{-i\hat{H}_{C+S}\tau}e^{-i\pi\hat{S}_{x}}e^{-i\hat{H}_{C+S}\tau}, \tag{13}\] where \(\hat{\rho}_{C+S}\) is the initial density matrix, a product state as in eqs. (7) and (8) for the subsystem of the central spin and the bath-spin cluster \(C\), \(\operatorname{Tr}_{C}\) is the partial trace over the state space of \(C\), and an ideal \(\pi\) pulse flips the central spin at the pulse delay time \(\tau\).
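To make the protocol of Eqs. (12) and (13) concrete, the following sketch (our own minimal illustration, not part of the workflow of Ref. [39]) propagates the smallest possible cluster, the central spin-1/2 coupled to a single spin-1/2 bath nucleus, and evaluates the Hahn-echo coherence by exact diagonalization; the hyperfine tensor and field strength are placeholder values, not the DFT-derived ones of this work, and \(\hbar=1\).

```python
import numpy as np

# Spin-1/2 operators (hbar = 1)
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

# Central-spin (S) and bath-spin (I) operators in the 4-dimensional product space
S = [np.kron(op, id2) for op in (sx, sy, sz)]
I = [np.kron(id2, op) for op in (sx, sy, sz)]

# Placeholder parameters (illustrative only, not the DFT values of this work)
gamma_e = -1.761e11                       # free-electron gyromagnetic ratio, rad s^-1 T^-1
gamma_n = 2.675e8                         # proton gyromagnetic ratio, rad s^-1 T^-1
B = 0.2                                   # magnetic field along z, tesla
A = 2 * np.pi * np.diag([5e3, 5e3, 2e4])  # hyperfine tensor, rad s^-1 (placeholder)

# H_{C+S} = -gamma_e B Sz - gamma_n B Iz + S.A.I, cf. Eqs. (3)-(5) with the field along z
H = -gamma_e * B * S[2] - gamma_n * B * I[2]
for i in range(3):
    for j in range(3):
        H = H + A[i, j] * S[i] @ I[j]
evals, evecs = np.linalg.eigh(H)

# Initial state: central spin in (|0>+|1>)/sqrt(2), nucleus maximally mixed [Eqs. (7), (8)]
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)
rho0 = np.kron(np.outer(psi, psi.conj()), id2 / 2.0)

# Ideal pi pulse on the central spin: exp(-i*pi*Sx) = -i*sigma_x
pi_pulse = np.kron(np.array([[0, -1j], [-1j, 0]], dtype=complex), id2)

def hahn_echo_L(two_tau):
    """Coherence of Eq. (12) under the Hahn-echo propagator of Eq. (13), normalized to t = 0."""
    tau = two_tau / 2.0
    U_free = (evecs * np.exp(-1j * evals * tau)) @ evecs.conj().T   # exact exp(-i H tau)
    U = U_free @ pi_pulse @ U_free                                  # Eq. (13)
    rho = U @ rho0 @ U.conj().T
    rho_S = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)         # partial trace over the nucleus
    rho_S0 = rho0.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    return rho_S[0, 1] / rho_S0[0, 1]

two_tau = np.linspace(0.0, 50e-6, 200)            # pulse delay 2*tau from 0 to 50 microseconds
L = np.array([hahn_echo_L(t) for t in two_tau])   # |L| is the quantity plotted in the figures
```

For larger clusters the same exact propagation applies, with operators built by repeated Kronecker products; this is essentially what each CCE factor \(L_{C}\) requires.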
\(\hbar\) has been set to \(1\). In the calculation of \(L_{C}(t=2\tau)\), the conventional scheme [39], which assumes no central spin flipping, is adopted. This is valid in the problem of electron spin decoherence in a nuclear spin bath, since the electron Zeeman energy is three to four orders of magnitude larger than that of the nuclear spins under the same field. Since contributions from subcluster correlations are divided out of \(L_{C}\), \(\tilde{L}_{C}\) represents the irreducible correlation between all spins in \(C\). If the expansion (10) converges rapidly, it is valid to truncate it in practice. The maximum number of spins in the clusters included in the expansion determines the order of the CCE approximation. For example, the CCE order 2 (CCE-2) expansion includes all irreducible cluster correlation contributions up to and including two-spin clusters. The CCE expansion provides the essentially exact result if it is truncated at an order where the coherence function is already convergent with respect to the CCE order. In the present work, we limit ourselves to CCE order 4 (CCE-4), as our convergence studies show there is essentially no change in \(L(t=2\tau)\) when going from CCE-4 to CCE-5. Convergence test results are presented in Appendix A. ## III Results and discussion In this section, we report our results for VOPc@GNR. The atomic configuration and electronic structure of the system are presented first, followed by the results for spin decoherence and related analyses. Lastly, we investigate the effect of NQI. ### Atomic configuration and electronic structure of VOPc@GNR We first report the results of DFT calculations of the atomic configuration and electronic structure of a single VOPc molecule and of VOPc@GNR. Figure 2 (Top) shows the difference between the spin-up and -down projected density of states (PDOS) summed over all V \(d\) orbitals (red curve) and for the V \(d_{xy}\) orbital (black curve) of an isolated single VOPc molecule. As anticipated, the spin density of a single VOPc molecule is contributed by an unpaired \(d_{xy}\) electron. Figure 2 (Bottom) shows the fat band analysis of the VOPc@GNR system, which we will return to later. For an isolated VOPc molecule, we also calculate the energy as a function of spin orientation with inclusion of spin-orbit coupling. The preferred spin direction is along the vanadium-oxygen (V-O) bond, and the energy for this spin direction is \(41\,\mu\)eV lower than when the spin is perpendicular to the V-O bond, or in the plane of the molecule. When a VOPc molecule is integrated onto an armchair-edged GNR with a width of two honeycomb units, three isomeric structures are possible, as shown in Fig. 3. The energies of these three configurations in Fig. 3(a), 3(b) and 3(c) are \(0\), \(106\,\mathrm{meV}\), and \(232\,\mathrm{meV}\), respectively, with the ground state energy set to zero. In all cases, the plane of the phthalocyanine (Pc) ligand deviates significantly from that of the GNR due to a strong repulsion between hydrogen atoms on the ligand near the GNR and those on the GNR near the ligand. The result shows that if the repulsive force pushes both horizontal isoindole units to one side of the GNR, the structures in 3(a) and 3(b) are realized, with the oxygen atom nearer to the plane of the GNR in 3(a). If the repulsion pushes the two horizontal isoindole units to different sides of the GNR, then we obtain the structure in 3(c), where the VOPc molecule is twisted with respect to the GNR.
In the rest of the paper, we focus on the ground state of the three configurations, shown in Fig. 3(a). We calculate the DFT band structure of VOPc@GNR and the \(k\)-resolved PDOS, _i.e._ we perform the fat band analysis [see Fig. 2 (Bottom) for the band structure and the \(k\)-resolved density of states projected onto the V \(d_{xy}\) orbital]. It is found that in this VOPc@GNR complex a localized molecular spin-\(1/2\) is still contributed by the unpaired \(d_{xy}\) electron on the V atom. ### Spin decoherence Spin decoherence in the VOPc@GNR system is contributed by the nuclear spins within a radius of roughly \(26\,\)Å from the central spin. This is determined by increasing the radius of a sphere centered at the position of the central spin at the V atom and, for each radius, including only the spins of the nuclei inside the sphere as the bath spins in the spin Hamiltonian. The decay of the Hahn-echo coherence function stops exhibiting any further change when this bath radius is increased to \(26\,\)Å [see Appendix A and Fig. 20(a)]. This test of tuning the bath radius also shows why the decoherence rate is much faster in VOPc@GNR [black curve in Fig. 20(b), for which \(T_{2}=17.4\,\mu\mathrm{s}\)] compared to that in a single VOPc molecule, which basically corresponds to setting the bath radius to only \(8\,\)Å in VOPc@GNR [green curve in Fig. 20(b), which does not show any sign of decay at least up to \(50\,\mu\mathrm{s}\)], indicating the important role played by the H nuclear spins on the GNR in central spin decoherence. After the convergence tests on the coherence function are completed (Appendix A), we start with investigations of the dependence of spin decoherence in the VOPc@GNR system on magnetic field direction and strength. Three mutually perpendicular directions were considered, as shown in Fig. 4. These are (i) parallel to the GNR direction (\(x\)-direction, blue arrow), (ii) in a direction perpendicular to both the GNR direction and the V-O bond (red arrow, approximately parallel to the isoindole unit oriented perpendicularly to the \(x\)-direction), and (iii) in the direction of the V-O bond (green arrow). Without loss of generality, we first show results excluding the NQI, _i.e._ the second term in Eq. 5, which is dropped for now; we will discuss the effect of including the NQI in the Hamiltonian in Sect. IIID. The Hahn-echo coherence functions for field directions (i), (ii) and (iii) are shown in Fig. 5(a), (b) and (c), respectively, as a function of the pulse delay time \(2\tau\), computed according to Eqs. (9)-(12). Here we note that all Hahn-echo coherence functions \(L\) calculated for VOPc@GNR have a negligible imaginary part, and we therefore show the magnitude of \(L\). We see that the spin decoherence depends on the relative orientation of the single VOPc molecule and the direction of the magnetic field, due to anisotropic hyperfine and inter-nuclear-spin interactions. One also observes that 1) the envelope of the Hahn-echo coherence function decay depends on the direction of the field but not its strength, and 2) the oscillation under the envelope, known as electron spin echo envelope modulation (ESEEM), does depend on the field strength. This envelope has the longest \(T_{2}\) of \(34.2\,\mu\)s when the field is along the V-O bond, direction (iii), followed by a field in direction (ii) with a \(T_{2}\) of \(28.3\,\mu\)s, and then by the field along the GNR with a \(T_{2}\) of \(17.4\,\mu\)s.
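These \(T_{2}\) values are extracted by fitting the decay envelope to a stretched exponential \(\exp[-(t/T_{2})^{n}]\), as stated below; a minimal fitting sketch (our own illustration, using synthetic placeholder data in place of the computed \(|L(2\tau)|\) envelope) is:

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, T2, n):
    """Envelope model exp[-(t/T2)^n] used to extract the Hahn-echo T2."""
    return np.exp(-(t / T2) ** n)

# Placeholder data: in practice, replace with the computed |L(2*tau)| envelope
t = np.linspace(0.5e-6, 60e-6, 120)                   # pulse delay 2*tau, seconds
L_env = np.exp(-(t / 17.4e-6) ** 2.3)                 # synthetic example decay
L_env += 0.01 * np.random.default_rng(0).normal(size=t.size)

(T2_fit, n_fit), _ = curve_fit(stretched_exp, t, L_env, p0=(20e-6, 2.0))
print(f"T2 = {T2_fit * 1e6:.1f} us, stretch exponent n = {n_fit:.2f}")
```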
The value of \(T_{2}\) is obtained by fitting the coherence function to a stretched exponential \(\exp[-(t/T_{2})^{n}]\). In our analysis later in the paper we show that the envelope is determined only by the H nuclear spins in the system. Interestingly, for both fields along the GNR and along direction (ii), at relatively small field strengths of around 0-2 T we observe a large ESEEM effect as represented by the red curves in Fig. 5(a) and (b). When we increase the field strength to 3-6 T, the ESEEM is suppressed, while further increasing the field to around 7-9 T a large ESEEM appears again, as shown by the green curves. In the third field direction, which is along the V-O bond, similar ESEEM effects at relatively small fields of around 0-2 T are observed, while those at large fields are only observed instead at around \(20-24\) T (Fig. 5(c)). In Hahn-echo experiments done on the VOPc molecules using pulsed electron paramagnetic resonance spectroscopy, large ESEEM is indeed found at small fields [40; 41]. The large ESEEM rapidly and significantly modifies the coherence function. Although it still represents a theoretically tractable coherent process between few nuclear spins and the central spin, as we will see soon below, it is desirable to avoid them in potential realizations of spin qubits. A realization of a spin qubit working with dynamical decoupling pulses such as the Hahn-echo pulse sequence while the ESEEM effect is present must incorporate accurately the frequencies of the ESEEM so that one can predict when coherence function becomes close to one and the superposition state of the qubit is recovered, but these frequencies depend on the hyperfine interactions and nuclear Larmor frequencies, which are sensitive to perturbations in the local environment. One can tune the magnetic field strength to suppress ESEEM as observed above, therefore it is helpful to understand at what field strength large ESEEM can occur. In the following analysis we try to understand what nuclear species give rise to the ESEEM observations and why only in the specific ranges of magnetic field strength found above. We take the field direction along GNR as an example but note that the qualitative results also hold for the other two directions. In order to determine what nuclear spin species give rise to the observed ESEEM effects, we compute the decoherence caused by each individual nuclear spin species separately, which is achieved by allowing the central spin coherence function to evolve with a spin Hamiltonian reduced from the full spin Hamiltonian by keeping only Figure 2: (Top) The difference between the spin-up and -down density of states of a single VOPc projected onto all V \(d\)-orbitals (red curve) and onto the V \(d_{xy}\) orbital (black curve). (Bottom) Fat band analysis of the VOPc@GNR complex where the radius of any circle in the bands is proportional to the value of the \(k\)-resolved PDOS for the V \(d_{xy}\) orbital at the energy \(E\) and momentum \(k\) of the position of the circle. The Fermi level is set to zero in both plots. one species of nuclear spin at a time while dropping all the terms in Eq. (3) containing nuclear spins \(\hat{\mathbf{I}}_{i}\) of other species. From the results as represented by the data shown in Fig. 6 and in Fig. 7, we learn that the ESEEM at a relatively small field strength of 0-2 T is contributed by the N nuclear spins and in the range of 7-9 T by the V nuclear spin. In each of Fig. 
6 and 7, the coherence function when only H, N or V nuclear spins are present are in subplots (a), (b) and (c), respectively. More interestingly, when we compute the product of the coherence functions due to all three nuclear spin species and compare the result with the coherence function as a result of the full spin Hamiltonian, we find a very good agreement with the product of the three, as can be seen from Figs. 6(d) and 7(d) [Examples of this product rule for directions (ii) and (iii) are shown in Appendix C.]. This product rule is exact in the limit of strong field and when the interactions between different groups of nuclear bath spins, here different species, are set to zero.[42; 43; 44] This has been observed previously in other systems and is due to suppression of spin flip-flop processes between different nuclear spin species because of a large discrepancy in nuclear Larmor frequencies.[45; 46] With this product rule, it is obvious that the coherence function due to H nuclei provides the envelope of the ESEEM from the full Hamiltonian and so determines \(T_{2}\). Note that for a system where the decoherence time is large enough such that a correlation between different nuclear spin species has enough time to develop before the coherence function vanishes, the product rule is no longer valid. A constructed example is shown in Fig. 8, where the same central spin, V and eight N nuclear spins with the same interactions as for VOPc@GNR are put in a sparse random H nuclear spin bath of number density 1/8 nm\({}^{3}\) which alone contributes to a \(T_{2}\) of order 1 ms. A substantial deviation of the Hahn-echo coherence function from the product of those from individual nuclear spin species occurs from 0.1 ms onward. Since this product rule holds for VOPc@GNR, to understand the ESEEM in the coherence function of the full system first at small fields, all we need do is to understand the ESEEM in the coherence function when only the eight N nuclear spins are present as the bath spins, which greatly simplifies the problem. Even in this small system with only one central spin and eight N nuclear spins, a similar product rule holds at the scale of 2\(\tau\) smaller than one millisecond. In Fig. 9(a), we show good agreement over a small range of 2\(\tau\) comparable to \(T_{2}\) in VOPc@GNR between the coherence function due to eight N nuclear spins coupled together and the product of eight coherence functions calculated by including different N nuclear spin one at a time as the only bath spin in the spin Hamiltonian [Examples for directions (ii) and (iii) are shown in Appendix C.]. Deviation between the two curves is not seen until 2\(\tau\) is on the order of one millisecond, as shown in Fig. 9(b) and 9(c). This means that the correlation between different N nuclear spins does not develop until a much larger time scale than \(T_{2}\) of the system we are interested in. Since for the study of decoherence in the VOPc@GNR system this product rule between individual N nuclear spins is valid, now the problem can be further reduced to studying ESEEM due to individual N nuclear spins. Modulation depth of the ESEEM in the central spin coherence function when the central spin is coupled to single N nuclear spins is presented as a function of mag Figure 4: The three mutually perpendicular directions of magnetic field considered are indicated by the arrows. (i) Blue arrow: the GNR direction (\(x\)-direction). (ii) Red arrow: the direction perpendicular to both the GNR direction and the V-O bond. 
(iii) Green arrow: the direction of the V-O bond. Figure 3: Isomeric structures of the VOPc@GNR system: (a) Oxygen near GNR plane; (b) Oxygen off GNR plane; (c) twisted axis configuration. Color code: red for vanadium, yellow for oxygen, white for nitrogen, brown for carbon, and blue for hydrogen. netic field strength in Fig. 10. Here, modulation depth is defined as the maximum value of \(1-L(2\tau)\) over all \(2\tau\) values, the largest distance away from the full coherence of 1 the coherence function can reach during an ESEEM oscillation. We label the eight N nuclear spins N1 to N8, as indicated in Fig. 10(a), which is a view of relative positions of the V and N nuclear spins through the direction connecting the oxygen to the V nucleus. The quasi-1D GNR is parallel to the N5-N1 direction and below (not shown) N6, 7, 8. Due to symmetry, nuclei N1 and N5 cause the same (de)coherence, as well as the pair N2 and N4 and the pair N6 and N8. N3 and N7 do not cause the same central spin coherence because the GNR is closer to N7. The magnetic field dependence of the ESEEM depth due to N1/N5, N2/N4, N3, N6/N8 and N7 are shown as the red curves in Fig. 10(b), (c), (d), (e) and (f). The data points sampled are marked on the curves as well. Blue curves in the same plots are results obtained by additionally including NQI in the Hamiltonian and will be discussed in Sect. IIID. These results tell us that the ESEEM depth due to a single N nuclear spin has a peak on the field-strength domain centered at a position that can be different for different N positions, between 0 and 2 T. This, following the product rules, gives rise to the significant ESEEM observed in the same field range in the coherence functions due to eight coupled N nuclear spins [Fig. 6(b)] and from the full spin Hamiltonian [Fig. 5(a)]. A two-spin model study to understand why the nuclear spin modulation depth for a single N peaks at a certain field strengths is presented in Sect. III(C). The ESEEM due to the V nuclear spin is also addressed there. ### Simple model study of ESEEM We consider a simple two-spin model from which we can obtain a closed form expression for the oscillations in the Hahn-echo coherence function. The system consists of one electron spin-1/2 and one N nuclear spin-1. The spin Hamiltonian, not including NQI, is \[\hat{H}=-\gamma_{e}\mathbf{B}\cdot\hat{\mathbf{S}}-\gamma_{N}\mathbf{B}\cdot\hat{\mathbf{I}}+ \hat{\mathbf{S}}\cdot\mathbf{A}\cdot\hat{\mathbf{I}}, \tag{14}\] following Eqs. (3)-(5). By simulating the electron spin Hahn-echo coherence function for this Hamiltonian with first-principles inputs for the hyperfine interaction tensor \(\mathbf{A}\), we obtained results on the ESEEM due to individual N nuclear spins reported in Sect. III(B). Without loss of generality, we will set the \(z\) direction in the model along the magnetic field. In order to obtain a closed form expression for the coherence function, a secular approximation is applied where only the terms in the hyperfine interaction containing \(\hat{S}_{z}\), _i.e._\(A_{zi}\hat{S}_{z}\hat{I}_{i}\) (\(i=x\), \(y\), \(z\)), are kept and other hyperfine terms are dropped. The secular approximation requires the electron spin Zeeman splitting to be much larger than all other terms in the spin Hamiltonian, which is the case for the spin interactions in VOPc-GNR at all the field strengths we consider. After a rotation of the \(x\) and \(y\) axes about \(z\), \(A_{zy}\) can be reduced to zero, further simplifying \(\hat{H}\). 
The Hamiltonian is now \[\hat{H}=\omega_{e}\hat{S}_{z}+\omega_{N}\hat{I}_{z}+A_{zx}\hat{S}_{z}\hat{I}_ {x}+A_{zz}\hat{S}_{z}\hat{I}_{z}, \tag{15}\] where \(\omega_{e}=-\gamma_{e}B\) and \(\omega_{N}=-\gamma_{N}B\). The third and fourth terms represent the pseudosecular and secular part of the hyperfine interaction, respectively. To simulate the Hahn-echo experiment, the initial density matrix following the first, \(\pi/2\) pulse in the Hahn-echo pulse sequence is again described by Eqs. (6)-(8). Its operator form can be written as \[\hat{\rho}(0^{+})=(\hat{S}_{x}+\tfrac{1}{2}\hat{S}_{0})\otimes(\tfrac{1}{3} \hat{I}_{0}), \tag{16}\] where \(\hat{S}_{0}\) and \(\hat{I}_{0}\) are identity operators in the state spaces of the electron and the nuclear spin, respectively. The goal is to find the Hahn-echo coherence function \(L(2\tau)\) of the electron spin from the system density matrix at \(t=2\tau\), with a \(\pi\)-pulse applied to the electron spin at \(t=\tau\). Following Eq. (9), \[L(2\tau)=2\operatorname{Tr}[\hat{\rho}(2\tau)(\hat{S}_{x0}-i\hat{S}_{y0})], \tag{17}\] \[\hat{\rho}(2\tau)=e^{-i\hat{H}\tau}e^{-i\pi\hat{S}_{x0}}e^{-i\hat{H}\tau}\hat {\rho}(0^{+})e^{i\hat{H}\tau}e^{i\pi\hat{S}_{x0}}e^{i\hat{H}\tau}, \tag{18}\] Figure 5: Dependence of the Hahn echo coherence function on the magnetic field direction and strength: (a) magnetic field is in the direction along the GNR; (b) magnetic field is in the direction (ii) in Fig. 4 and as described in the main text; (c) magnetic field is in the direction parallel to the V-O bond. where \(\hat{S}_{x0}=\hat{S}_{x}\hat{I}_{0}\) and \(\hat{S}_{y0}=\hat{S}_{y}\hat{I}_{0}\). This model was first considered by W. B. Mims in Ref. [42], in which he obtained an analytical expression for \(L(2\tau)\). We have reproduced the solution and apply it to study the ESEEM depth from N nuclear spins in VOPc@GNR. The closed form expression for \(L(2\tau)\) is rather long, and we present it in Eqs. (B1) and (B2) in Appendix B. The ESEEM of a single frequency is described by a cosine term in Eq. (B1), and its modulation depth is just the absolute value of the coefficient in front of it. The modulation depths of the ESEEM of all frequencies share a common factor \(C\), \[C=\frac{1}{3\big{[}A_{zx}^{4}+2(A_{zz}^{2}+4B^{2}\gamma_{N}^{2})A_{zx}^{2}+(A_ {zz}^{2}-4B^{2}\gamma_{N}^{2})^{2}\big{]}^{2}}, \tag{19}\] with the remaining factors in the coefficients for each frequency expressed as \(a\), \(b\), \(c\), \(\dots\), \(l\) in Eq. (B2). According to the behavior of \(C\), there are two different scenarios relevant to N nuclear spins in VOPc@GNR: The first is when \(|A_{zz}|\gg|A_{zx}|\), which is the case for the nearest N nuclear spins to the central spin, _i.e._ N1, N3, N5 and N7. For these spins, the isotropic Fermi contact part of the hyperfine interaction [Eq. (1)] is much larger than the anisotropic dipolar part [Eq. (2)], and therefore \(|A_{zz}|\gg|A_{zx}|\) is valid whatever the field direction and correspondingly the \(z\)-axis in the model. In this scenario, \(C(B)\) as a function of \(B\) strongly peaks at the value of \(B\) which satisfies \(|A_{zz}|=2B\gamma_{N}\), which we label \(B_{\rm peak}\). This is because when this condition is satisfied the second term in the square bracket in Eq. 
(19) dominates over the others, leading to the maximum of \(C(B)\), \(C_{max}\approx(1/48)A_{zz}^{-4}A_{zx}^{-4}\), while when \(|2B\gamma_{N}-|A_{zz}||\) is on the order of \(|A_{zz}|\), \(C\sim A_{zz}^{-8}\), and \(C\) is even smaller if \(2B\gamma_{N}\) further deviates from \(|A_{zz}|\). The width of the peak is measured by \(|A_{zx}|/2\gamma_{N}\), as \(C(B_{\rm peak}\pm|A_{zx}|/2\gamma_{N})\approx(1/192)A_{zz}^{-4}A_{zx}^{-4}=(1/4)C_{max}\). As an example, for N1 when the field is along the GNR, the corresponding hyperfine interactions, obtained from DFT, are \(|A_{zz}|=7242\,\)kHz and \(|A_{zx}|=286\,\)kHz. The corresponding \(C(B)\) is shown in Fig. 11, where the values of \(B_{\rm peak}\) and \(B_{\rm peak}\pm|A_{zx}|/2\gamma_{N}\) are also labelled by the vertical dashed lines. Figure 6: Product rule for the Hahn-echo coherence function among different elements in the VOPc@GNR system. A magnetic field of strength 0.2 T is applied in the direction along the GNR. The coherence function is contributed by the central spin coupling to (a) H nuclei, (b) N nuclei, and (c) the V nucleus. In (d), the coherence function due to all elements present in the spin Hamiltonian is seen to closely follow the product of (a), (b) and (c). Since the remaining factors in the modulation depth for each ESEEM frequency, \(a\), \(b\), \(c\), ..., \(l\), are functions of \(B\) that show only mild variation, the modulation depths \(|C\alpha|\) (\(\alpha=a\), \(b,\dots\), \(l\)) also have significant strength only in a narrow range of magnetic field around \(B_{\rm peak}\). For the example of N1 above, \(|C\alpha|\) are plotted in Fig. 12. Since the total ESEEM due to a single N nuclear spin is just the sum of the ESEEM of all frequencies [Eq. (B1)], its modulation depth also reaches its maximum at the field strength \(B_{\rm peak}\), which explains the location of the modulation depth peaks of N1/N5, N3 and N7 in the red curves in Fig. 10(b), (d) and (f), respectively. The value of \(B_{\rm peak}\) is indicated in Fig. 10(b), (d) and (f) as the vertical dashed line. The second scenario is when \(|A_{zz}|\) and \(|A_{zx}|\) are of the same order, which is the case for the N nuclear spins farther away from the central spin, N2, N4, N6 and N8, and for the three directions of the field, each of which defines \(z\) in the model, described in Sect. III(B). \(C(B)\) in this scenario takes relatively significant values for field strengths from zero up to a value of the order of \(|A_{zx}|/\gamma_{N}\), compared to fields beyond this range. If \(|A_{zz}|\leq|A_{zx}|\), \(C(B)\) has its maximum at zero field and is a monotonically decreasing function of \(B\). For the special case of \(|A_{zz}|=|A_{zx}|\), \(C(|A_{zx}|/2\gamma_{N})=(16/25)\,C(B=0)\). As an example, for N2 when the field is along the GNR, the corresponding hyperfine interactions, obtained from DFT, are \(|A_{zz}|=243\,\)kHz and \(|A_{zx}|=291\,\)kHz. The corresponding \(C(B)\) is plotted in Fig. 13, where the value of \(|A_{zx}|/2\gamma_{N}\) is labelled by the green dashed line. Similar to the first scenario, constrained by the range of fields where \(C(B)\) is prominent, the modulation depths \(|C\alpha|\) (\(\alpha=a,b,...,l\)) can have significant values only in a narrow range of low magnetic field. When approaching zero field, \(|C\alpha|\) vanish because \(a\), \(b\), ..., \(l\) [Eq. (B2)] are polynomials in \(B\) without a constant term. For the example of N2 above, \(|C\alpha|\) are plotted in Fig. 14.
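As a quick numerical check of the first scenario (our own illustration; the hyperfine values are the DFT numbers quoted above for N1, and \(\gamma_{N}/2\pi\approx 3.08\,\)MHz/T is the standard \({}^{14}\)N gyromagnetic ratio), evaluating Eq. (19) directly places the maximum of \(C(B)\) near \(1.2\,\)T, inside the 0-2 T window where the large ESEEM is observed:

```python
import numpy as np

# Secular and pseudosecular hyperfine components for N1, field along the GNR (values quoted above)
A_zz = 7242e3          # Hz
A_zx = 286e3           # Hz
gamma_N = 3.0777e6     # 14N gyromagnetic ratio / 2*pi, Hz per tesla

def C(B):
    """Common ESEEM-depth factor of Eq. (19); all couplings in ordinary frequency units (Hz)."""
    bracket = (A_zx**4
               + 2.0 * (A_zz**2 + 4.0 * (B * gamma_N)**2) * A_zx**2
               + (A_zz**2 - 4.0 * (B * gamma_N)**2)**2)
    return 1.0 / (3.0 * bracket**2)

B = np.linspace(1e-3, 3.0, 3000)                 # field strength, tesla
B_peak_numeric = B[np.argmax(C(B))]
B_peak_analytic = A_zz / (2.0 * gamma_N)         # peak condition |A_zz| = 2 B gamma_N
print(B_peak_numeric, B_peak_analytic)           # both close to 1.18 T
```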
Similar Figure 7: Product rule for the Hahn-echo coherence function among different elements in the VOPc@GNR system. A magnetic field of strength 8.0 T is applied in the direction along the GNR. The coherence function is contributed by the central spin coupling to (a) H nuclei, (b) N nuclei and (c) the V nucleus. In (d), the coherence function due to all elements present in the spin Hamiltonian is seen to follow the product of (a),(b) and (c). Inset is a zoom-in at short time scale showing the details of the oscillatory behavior and the close agreement between the two curves. to the first scenario, this explains why modulation depth becomes large at low fields for N2/N4, N6/N8 but vanish at zero field [red curves in Fig. 10(c), (e)]. Now we have an understanding of why the modulation depths of ESEEM due to individual N nuclear spins become large near certain field strengths within the range 0-2 T. For the V nuclear spin, an analytical study of modulation depth in a \(S=1/2\), \(I=7/2\) model is much more complicated than the \(S=1/2\), \(I=1\) model above for N, and a closed form expression may even be impossible. Here we simply state that for the V nuclear spin in VOPc@GNR and the three directions we consider in Sect. III(B), \(|A_{zz}|\gg|A_{zx}|\) from DFT and the qualitative result of the first scenario in the \(S=1/2\), \(I=1\) model above, _i.e._ ESEEM depth caused by the nuclear spin reaches maximum at \(B_{\rm peak}=|A_{zz}|/2\gamma_{n}\), \(\gamma_{n}\) being the nuclear gyromagnetic ratio, is also valid. The central spin ESEEM depth in VOPc@GNR due to the V nuclear spin when the field is along the GNR is plotted as a function of field strength by the red curve in Fig. 10(g). The value of \(B_{\rm peak}=|A_{zz}|/2\gamma_{\rm V}\) is labelled by the vertical dashed line. ### Effect of nuclear quadrupole interaction In this section, we describe the effect of the nuclear quadrupole interaction (NQI) on the Hahn-echo coherence functions. The spin Hamiltonian now includes the NQI term \(\sum_{i}\hat{\mathbf{I}}_{i}\cdot\mathbf{P}_{i}\cdot\hat{\mathbf{I}}_{i}\) [cf. eq. (5)], where the quadrupole interaction tensor \(\mathbf{P}_{i}\) of nuclear spin \(i\) is proportional to the EFG tensor at its position, \[\mathbf{P}_{i}=\frac{eQ_{i}}{2I_{i}(2I_{i}-1)h}\mathbf{V}_{i}, \tag{20}\] where \(e\) is the elementary charge, \(Q_{i}\) the nuclear electric quadrupole moment, \(I_{i}\) the nuclear spin quantum number, and \(h\) the Planck constant. The EFG tensor \(\mathbf{V}_{i}\), the second order derivative of the electrostatic potential at the position of nuclear spin \(i\) due to all charges external to the nucleus with components \((\mathbf{V}_{i})_{\alpha\beta}=\partial^{2}V(\mathbf{R}_{i})/\partial\alpha\, \partial\beta\), \(\alpha\), \(\beta=x\), \(y\), \(z\), is obtained from DFT calculations. Our simulations show that the inclusion of the NQI for V and N nuclear spins does not alter the following results: (1) The general ranges of magnetic field strength where large ESEEM appears are not significantly changed. For magnetic fields along the GNR and along direction (ii) as in Sect. III(B), significant ESEEM is still present both at the relatively small fields of 0-2 T and large fields of 7-9 T, while it is suppressed in the intermediate field range of 3-6 T [Fig. 15(a) and 15(b)]. For the case of the field parallel to the V-O bond, ESEEM is still present only in the field ranges of 0-2 T and 20-24 T [Fig. 15(c)]. 
(2) The product rule of Hahn-echo coherence functions between different nuclear spin species, as in Figs. 6 and 7, still holds (Figs. 16 and 17). Therefore the coherence time \(T_{2}\) of the envelopes of the coherence functions remain unchanged, since the envelopes are contributed by H nuclear spins, which are not affected by the NQI. (3) The product rule for the Hahn-echo coherence function between different nitrogen nuclear spins still holds (Fig. 18); therefore the variation of the modulation depth of the ESEEM due to all nitrogen nuclear spins can again be understood by those due to individual spins (blue curves in Fig. 10). (4) For the ESEEM due to individual nuclear spins, the modulation depth (blue curves in Fig. 10) still reaches a maximum at approximately the same magnetic field as the case without the NQI (red curves in Fig. 10). Inclusion of the NQI introduces two major changes. One is a change in ESEEM frequencies, as shown in the magnitude of the ESEEM Fourier transforms in Fig. 19(Top) where we compare the frequencies of the ESEEM in Fig. 16(b) which is due to N nuclear spins including the NQI with that without the NQI at the same field. The same comparison at 8 T between the frequencies of the ESEEM due to the V nuclear spin with and without NQI, as in Fig. 7(c) and 17(c), is shown in Fig. 19(Middle). A zoom-in of the first positive-frequency peak structure in Fig. 19(Middle) is displayed in 19(Bottom), showing a change of the detailed satellite structure due to the NQI. The positions of the peak structures in Fig. 19(Middle) are periodic with a frequency spacing of around 89 MHz, which approximately corresponds to the Zeeman splitting of the V nuclear spin at this field, 89.7 MHz. The second major change is that for the ESEEM due to individual nuclear spins, although the modulation depth still reaches maximum at approximately the same magnetic field as the case without NQI, NQI can change the height and width of the peaks in the modulation depth as a function of magnetic field [Fig. 10(c)-(g)] and even introduces additional peaks on both sides [Fig. 10(b)]. When NQI is included, The modulation depth can also be nonzero when the field approaches zero, in contrast to the zero modulation depth in the same limit when NQI is absent [Fig. 10(c)-(f)]. Figure 8: Hahn-echo coherence function simulated with the same central spin, V and eight N nuclear spins as in VOPc@GNR immersed in a random sparse H nuclear bath. ## IV Conclusion In this work, we have performed first-principles calculations of central spin decoherence in a nuclear spin bath for the system of VOPc@GNR. Low energy isomeric atomic configurations of the ground state as well as the corresponding electronic structures are calculated by DFT, which shows that after the integration onto the GNR, the molecular electronic spin of the VOPc molecule remains spin-1/2 and still contributed by an unpaired \(d_{xy}\) electron on the V atom. In the study of spin decoherence, using the CCE method with a spin Hamiltonian in which the hyperfine and NQI tensors are calculated from DFT, the time evolution of the coherence function as the off-diagonal element of the central spin RDM is examined in simulation of Hahn-echo experiments. The central spin decoherence is found to be mainly contributed by nuclear spins within a distance of around 26 A to the central spin. A comparison between the spin decoherence in this VOPc@GNR system to that in a single VOPc molecule also shows a strong decrease of \(T_{2}\) due to the protons on the GNR. 
Three mutually perpendicular directions for the magnetic field are considered and an anisotropy in \(T_{2}\) is observed, with the value of \(T_{2}\) when the field is along the V-O bond almost twice as large as that when the field is along the GNR. Large ESEEM appears in certain ranges of magnetic field while being suppressed outside these ranges. A detailed investigation of the coherence functions due to individual nuclear spin species reveals a product rule, that spin coherence function due to the full spin Hamiltonian agrees with the product of central spin coherence functions due to individual nuclear spin species. This product rule, valid at small time scales, allows us to identify that the envelope of the coherence function, and therefore \(T_{2}\) in VOPc@GNR, is contributed only by the H nuclear spins and that large ESEEM at relatively small fields is due to N nuclear spins while that at large fields is due to V nuclear spins. In the study of the ESEEM due to N nuclei, a similar product rule is found valid for the collective result for all nuclei and product of results from individual spins, reducing the problem to computing the ESEEM due to individual N nuclei. By investigating the closed-form expressions of the Hahn-echo coherence function and the ESEEM depth in an \(S=1/2\), \(I=1\) two-spin model, we find a relation between, on one side, the range/value of the field strength where ESEEM depth due to an individual N nuclear spin becomes significant/reaches its maximum and, on the other side, the secular \(|A_{zz}|\) and the pseudosecular \(|A_{zx}|\) parts of the hyperfine interaction. This relation explains why ESEEM due to N nuclei is present at relatively low fields. The qualitative result of the scenario for \(|A_{zz}|\gg|A_{zx}|\) in the model, which states that the modulation depth reaches maximum at a field when the secular hyperfine interaction is equal to double the nuclear Zeeman splitting, is found to also apply to the V nuclear spin. Finally, we include the nuclear quadrupole interaction calculated for the bare VOPc@GNR structure, which is not negligible for N and V nuclei, in the spin Hamiltonian and analyzed its effects. Simulation shows that while the NQI does not change the product rules, the decoherence time \(T_{2}\) or the value of the magnetic field where modulation depth of ESEEM due to an individual nuclear spin reaches maximum, it modifies frequencies of ESEEM oscillations, can change the width of the peak in the modulation depth as a function of magnetic field, can introduces additional peaks around the central one and can make the modulation depth nonzero in the limit of zero field. We thus have identified the applicability and limitation of the spin Hamiltonian without NQI that represents an incomplete description of physics in VOPc@GNR systems. Our work provides information on the atomic configuration and the electronic structure of VOPc@GNR and can guide further experiments on this system in identifying the optimal magnetic field direction and strengths where the coherence time \(T_{2}\) due to central electronic Figure 9: Comparison between the coherence functions due to eight N nuclear spins together and the product of coherence functions due to each individual N (a) over a small range of \(2\tau\) comparable to \(T_{2}\) in VOPc@GNR, (b) over a large range of \(2\tau\) on the order of milliseconds, (c) over an intermediate range of \(2\tau\) where deviation between the two curves is just seen. 
In this example, the magnetic field of strength 0.2 T is along the GNR. spin coupling to nuclear bath spins is maximized and the ESEEM is suppressed. The coherence function product rules show that this \(T_{2}\) is constrained only by H nuclear spins, even if V and N nuclei are closer to the electron spin and have hyperfine interactions orders of magnitude larger, and confirm that the major source of spin decoherence in hydrogen-rich magnetic molecular systems is the H nuclear spins.[30; 47; 48] The finding of a relation between the secular/pseudoscalar hyperfine interaction and the ESEEM depth provides insight for future design of magnetic molecular spin qubits to reduce ESEEM effects. In general, this work shows the capability of combining the DFT and the CCE methods in predicting all the details of the spin decoherence in molecular spin qubit architecture due to central electronic spin-nuclear spin coupling and provides useful insights for future designs of molecular spin-qubit architectures. **Acknowledgements.** The authors are grateful for Figure 10: (a) Relative positions of the V and N nuclear spins viewed through the direction from the oxygen to the V nucleus. The labelling of the eight N nuclear spins N1–N8 is as shown and will be referred to in the following. The GNR (not shown) is parallel to the N5-N1 direction and below the spins in this graph. (b) The modulation depth of ESEEM due to N1/N5 as a function of magnetic field strength. In this example, the field is along the GNR. The red curve and circles showing data sampling are for the case without the NQI. The blue curve includes the NQI in the spin Hamiltonian. (c) Same as (b) but for N2/N4. (d) Same as (b) but for N3. Inset is a zoom-in that shows details of the red peak. (e) Same as (b) but for N6/N8. (f) Same as (b) but for N7. Note that the peak of the modulation depth for the case without the NQI is tiny, as shown in the inset. (g) Same as (b) but for the V nuclear spin. Vertical dashed lines in (b), (d),(f) and (g) indicate the value of \(B_{\text{peak}}=|A_{zz}|/2\gamma_{n}\). useful conversations with Silas Hoffman, Shuanglong Liu, Haechan Park, Steve Hill. This work is supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award No. DE-SC0022089. Computations were done using the utilities of the National Energy Research Scientific Computing Center and University of Florida Research Computing. Figure 16: Product rule of the Hahn-echo coherence function among different elements in the VOPc@GNR system using the spin Hamiltonian including the NQI. A magnetic field of strength 1 T is applied in the direction along the GNR as an example. The coherence functions are calculated from the central spin coupling to (a) only the H nuclei present as bath spins, (b) only the N nuclei present as bath spins, and (c) only the V nucleus present as the bath spin. In (d), the coherence function due to all elements present in the spin Hamiltonian (blue) is seen to closely follow the product (red) of (a), (b) and (c). Figure 15: Dependence of the Hahn echo coherence functions on magnetic field strength for field directions as in Fig. 5 using the spin Hamiltonian including the NQI: (a) magnetic field in the direction along the GNR; (b) magnetic field in direction (ii) described in Sect. IIIB; (c) magnetic field in the direction parallel to the V-O bond. Figure 17: Product rule of the Hahn-echo coherence function among different elements in the VOPc@GNR system using the spin Hamiltonian including the NQI. 
A magnetic field of strength 8 T is applied in the direction along the GNR as an example. The coherence functions are calculated from the central spin coupling to (a) only the H nuclei present as bath spins, (b) only the N nuclei present as bath spins, and (c) only the V nucleus present as the bath spin. In (d), the coherence function due to all elements present in the spin Hamiltonian (blue) is seen to closely follow the product (red) of (a), (b) and (c). Figure 19: Comparison between the magnitude of the Fourier transform of the ESEEM due to the spin Hamiltonian including (blue curves) or not including (red curves) the NQI. Here as an example the magnetic field is along the GNR direction. (Top) Field strength \(B=1\,\mathrm{T}\). The blue curve is the magnitude of the Fourier transform of Fig. 16(b) rather than Fig. 16(d) in order to better resolve frequencies related to nitrogen nuclear spins, which are responsible for the ESEEM at this field strength. The red curve is the same quantity in the absence of NQI. (Middle) Field strength \(B=8\,\mathrm{T}\). Blue and red curves are Fourier transforms of Fig. 17(c) and 7(c), respectively. Peak structures appear with a period of around \(89\,\mathrm{MHz}\). (Bottom) A zoom-in of the peak structure in the range of \(84\)–\(94\,\mathrm{MHz}\) in the middle panel showing a change in the detailed satellite structure of the frequency peaks once the NQI is included. Figure 18: Similar to Fig. 9(a), an agreement is found between the coherence functions due to eight N nuclear spins coupled together (blue) and the product of coherence functions due to each individual N (red). Here the NQI is included. A magnetic field of \(1\,\mathrm{T}\) in this example is applied along the GNR.
2309.08391
Fractional Advection Diffusion Asymmetry Equation, derivation, solution and application
The non-Markovian continuous-time random walk model, featuring fat-tailed waiting times and narrow distributed displacements with a non-zero mean, is a well studied model for anomalous diffusion. Using an analytical approach, we recently demonstrated how a fractional space advection diffusion asymmetry equation, usually associated with Markovian L\'evy flights, describes the spreading of a packet of particles. Since we use Gaussian statistics for jump lengths though fat-tailed distribution of waiting times, the appearance of fractional space derivatives in the kinetic equation demands explanations provided in this manuscript. As applications we analyse the spreading of tracers in two dimensions, breakthrough curves investigated in the field of contamination spreading in hydrology and first passage time statistics. We present a subordination scheme valid for the case when the mean waiting time is finite and the variance diverges, which is related to L\'evy statistics for the number of renewals in the process.
Wanli Wang, Eli Barkai
2023-09-15T13:37:27Z
http://arxiv.org/abs/2309.08391v1
# Fractional Advection Diffusion Asymmetry Equation, derivation, solution and application ###### Abstract The non-Markovian continuous-time random walk model, featuring fat-tailed waiting times and narrow distributed displacements with a non-zero mean, is a well studied model for anomalous diffusion. Using an analytical approach, we recently demonstrated how a fractional space advection diffusion asymmetry equation, usually associated with Markovian Levy flights, describes the spreading of a packet of particles. Since we use Gaussian statistics for jump lengths though fat-tailed distribution of waiting times, the appearance of fractional space derivatives in the kinetic equation demands explanations provided in this manuscript. As applications we analyse the spreading of tracers in two dimensions, breakthrough curves investigated in the field of contamination spreading in hydrology and first passage time statistics. We present a subordination scheme valid for the case when the mean waiting time is finite and the variance diverges, which is related to Levy statistics for the number of renewals in the process. ## 1 Introduction Continuous-time random walk [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] (CTRW) is a stochastic jump process for a random walk that jumps instantaneously from one site to another, following a sojourn period on a site. See also recent works in [12, 13, 14, 15, 16, 17, 18, 19, 20, 21]. For example, it was used to describe dispersive transport in time of flight experiment of charge carriers in disordered systems [22]. The non-Markovian characteristic of CTRW becomes apparent, especially when the disorder is introduced into the system, as observed in the sense of ensemble average [10]. Fractional kinetic equations [23, 24, 25, 26, 27, 28, 29] are a popular tool used to describe various anomalous phenomena [30, 31]. When the bias of the system is a constant, the fractional time diffusion equation follows [32, 33] \[\frac{\partial}{\partial t}\mathcal{P}(x,t)=\,_{0}\mathcal{D}_{t}^{1-\alpha} \left[\frac{\partial^{2}}{\partial x^{2}}-\frac{\partial}{\partial x}\right] \mathcal{P}(x,t), \tag{1}\] describing anomalous dynamics with \(0<\alpha<1\). Here we make the assumption that both the generalized mobility and the generalized diffusion coefficient are set to one. The time Riemann-Liouville operator \(\,{}_{0}\mathcal{D}_{t}^{1-\alpha}\) introduces a convolution integral involving a power-law kernel that decays slowly over time, which is related to memory effects of this nonequilibrium system. In addition, \(\,{}_{0}\mathcal{D}_{t}^{1-\alpha}\) is directly governed by heavy-tailed power law waiting times distributions \[\phi(\tau)\propto\tau^{-\alpha-1},\quad\tau\to\infty, \tag{2}\] in the context of the underlying picture, the CTRW model. Due to heavy-tailed waiting times, characterized by an infinite mean, Eq. (1) illustrates sub-diffusion behaviors. However, our focus is on the super-diffusion scenario. Our recent findings demonstrated that the positional distribution, \(\mathcal{P}(x,t)\), of the mentioned CTRW model follows the fractional advection diffusion asymmetry equation (FADAE) \[\frac{\partial}{\partial t}\mathcal{P}=D\frac{\partial^{2}}{\partial x^{2}} \mathcal{P}-V\frac{\partial}{\partial x}\mathcal{P}+S\,\frac{\partial^{ \alpha}}{\partial(-x)^{\alpha}}\mathcal{P} \tag{3}\] for \(1<\alpha<2\). 
Here \(D\), \(V\), and \(S\) are transport coefficients to be discussed, and the operator \(\frac{\partial^{\alpha}}{\partial(-x)^{\alpha}}\) is the right Riemann-Liouville derivative [34, 35] in regard to space. Thus the fat tailed distribution of sojourn times Eq. (2), which leads to the fractional time Fokker-Planck equation (1), when \(0<\alpha<1\), can lead as shown here to the fractional space differential equation (3) if \(1<\alpha<2\). There is a transition in the basic kinetic description of the transport. When the mean waiting time diverges, we get the mentioned fractional time Fokker-Planck equation. However, if the mean sojourn time is finite but the variance diverges, we obtain a fractional space description. Our first goal is to clarify this point. Figure 1: A random walker moves on a one-dimensional lattice with fat-tailed displacements that describe statistics of Lévy flights [see subplot (a)]. While, subplot (b) displays a trajectory of a CTRW walker with thin tailed displacements, e.g. Gaussian, but the waiting time probability density functions (PDFs) are fat-tailed. Under certain conditions, the packets of spreading particles can appear the same, despite the significant differences of the underlying paths. Fractional space transport equations [4, 30, 34, 36, 37, 38, 39, 40] are usually associated with Levy flights [35, 41, 42, 43, 44, 45, 46, 47, 48]. See subplot (a) in Fig. 1. The term Levy flights, also referred to as Levy motion, was coined by Benoit Mandelbrot in honour of the French mathematician Paul Levy. The individual jump has a length that is distributed with the PDF decaying at large positions as a fat-tailed jump length distribution with diverging variance of the size of the jumps. Importantly, the Levy flight process is a Markovian process, hence the appearance of the fractional framework in the context of a model with fat tailed distributions of waiting times is of interest. In particular, any fitting of data using the fractional space kinetic equation, cannot be used as strong evidence for Levy flights, or for a basic Markovian underlying process. Subordination methods, based on an integral transformation, present a way of solving fractional kinetic equations [49, 50, 51, 52, 53, 54, 55, 56, 57]. To be more exact, when the mean waiting time diverges [33], subordination methods map a classical Fokker-Planck equation onto a fractional diffusion one with the fractional time derivative. For this case, the term "inverse Levy transform" is sometimes used due to the number of renewals that follows the inverse one-sided Levy distribution. This well-known framework works when \(0<\alpha<1\). We will propose a new subordination method to solve a wide range of problems valid when the mean waiting time is finite, but the variance is diverging, namely the case \(1<\alpha<2\). The paper is organized as follows. We start by introducing the CTRW model and give the corresponding concepts in Sec. 2. We derive the corresponding FADAE and give our explanations in Sec. 3. In Sec. 4, the applications and extensions of Eq. (3) are considered, ranging from the FADAE in two dimensions, the positional distribution with the time-dependent bias and variance, breakthrough curves, and the first passage time obtained from subordination methods. Three limiting laws of FADAE, describing the large asymmetric parameter \(S\), the large diffusion term \(D\), and the general typical fluctuations, are discussed in Sec. 5. To conclude, we summarize our findings in Sec. 6. 
## 2 CTRW model Now we define the CTRW model [4, 10, 58] discussed in this manuscript. Let us consider the process starting at \(t=0\) with the initial position \(x=0\). A walker is trapped on the origin for time \(\tau_{1}\), then makes a jump and the displacement is \(\chi_{1}\); the walker is further trapped on \(\chi_{1}\) for time \(\tau_{2}\), and then jumps to a new position; this process is then repeated. Thus, the process is characterized by a set of waiting times \(\{\tau_{1},\tau_{2},\cdots,\tau_{N},B_{t}\}\) and the displacements \(\{\chi_{1},\chi_{2},\cdots,\chi_{N}\}\), where \(B_{t}\) is the backward recurrence time [59, 60] and the time dependent \(N\) is the number of renewals from the observation time zero to \(t\). Specifically, \(\sum_{i=1}^{N}\tau_{i}+B_{t}=t\). These random variables, i.e., \(\{\tau_{1},\tau_{2},\cdots,\tau_{N}\}\) and \(\{\chi_{1},\chi_{2},\cdots,\chi_{N}\}\) are mutually independent and identically distributed (IID) random variables with common PDFs \(\phi(\tau)\) and \(f(\chi)\), respectively. Consider the broad distribution characterized by a fat tail \[\phi(\tau)=\left\{\begin{array}{ll}0,&\tau<\tau_{0};\\ \alpha\frac{\tau_{0}^{\alpha}}{\tau_{1}^{\alpha\alpha}},&\tau\geq\tau_{0}, \end{array}\right. \tag{4}\] where \(\tau_{0}\) is a time scale and the power-law index \(1<\alpha<2\). This indicates that the mean \(\langle\tau\rangle=\int_{0}^{\infty}\tau\phi(\tau)d\tau=\alpha\tau_{0}/(\alpha-1)\) is finite, but not the variance. The Laplace transform will be used in solving our problems. It is defined by \(\widehat{g}(s)=\int_{0}^{+\infty}\exp(-s\tau)g(\tau)d\tau\) for a well behaved function \(g(\tau)\). The Laplace transform of \(\phi(\tau)\), \(\tau\to s\), is [4, 59] \[\widehat{\phi}(s)\sim 1-\langle\tau\rangle s+b_{\alpha}s^{\alpha} \tag{5}\] with \(b_{\alpha}=(\tau_{0})^{\alpha}|\Gamma(1-\alpha)|\). For \(s\to 0\), we can check that \(\widehat{\phi}(0)=0\), which indicates that \(\phi(\tau)\) in Eq. (4) is normalized. For the displacement PDF \(f(\chi)\), we assume \(\chi\) has a finite mean \(a>0\) and the variance \(\sigma^{2}\). In this manuscript, we consider a Gaussian distribution \[f(\chi)=\frac{1}{\sqrt{2\sigma^{2}\pi}}\exp\left[-\frac{(\chi-a)^{2}}{2\sigma ^{2}}\right]. \tag{6}\] In Fourier space, \(\widehat{f}(k)=\exp(-\sigma^{2}k^{2}/2-ika)\) with \(\widehat{f}(k)=\int_{-\infty}^{\infty}\exp(-ikx)f(x)dx\). Thus, the small \(k\) expansion of \(\widehat{f}(k)\) reads \[\widetilde{f}(k)\sim 1-ika-\frac{1}{2}(\sigma^{2}+a^{2})k^{2}. \tag{7}\] When the sojourn time PDF has a fat tail and the variance of the displacement is finite, a wide range of anomalous behaviors emerge [3, 61]. As mentioned, here we will focus on the widely applicable case \(1<\alpha<2\)[62, 63, 64, 65, 66, 67, 68, 69]. ## 3 Fractional-Space Advection Diffusion Asymmetry Equation ### Calculation of the Positional distribution We now consider the CTRW model and note that the PDF of the position of a walker at time \(t\) is \[P(x,t)_{\rm CTRW}=\sum_{N=0}^{\infty}Q_{t}(N)\chi_{N}(x)\rightarrow\int_{0}^{ \infty}Q_{t}(N)\chi_{N}(x)\mathrm{d}N, \tag{8}\] where \(Q_{t}(N)\) denotes the probability of the number of events during the time interval \((0,t)\) and \(\chi_{N}(x)\) is the probability that the walker is on \(x\) conditioned that it made \(N\) steps. Equation 8 is known as the subordination of the spatial process \(x\) by the temporal process for \(N\)[3, 33, 54, 70]. 
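The CTRW process just defined is also straightforward to sample directly, which is useful for checking the analytical results that follow; the sketch below (our own illustrative Monte Carlo, not code from this work) draws Pareto waiting times according to Eq. (4) and Gaussian displacements according to Eq. (6), and records the walker positions at a fixed observation time, whose histogram approximates \(P(x,t)_{\rm CTRW}\). The default parameters (\(\alpha=1.5\), \(\tau_{0}=0.1\), hence \(\langle\tau\rangle=0.3\)) echo those used in Fig. 2 below.

```python
import numpy as np

rng = np.random.default_rng(1)

def ctrw_positions(n_walkers, t_obs, alpha=1.5, tau0=0.1, a=1.0, sigma=1.0):
    """Sample x(t_obs) for CTRW walkers with Pareto waiting times, Eq. (4),
    and Gaussian jumps of mean a and variance sigma**2, Eq. (6)."""
    x = np.zeros(n_walkers)
    t = np.zeros(n_walkers)
    active = np.ones(n_walkers, dtype=bool)
    while active.any():
        idx = np.flatnonzero(active)
        # inverse-CDF sampling of Eq. (4): tau = tau0 * U**(-1/alpha)
        tau = tau0 * rng.random(idx.size) ** (-1.0 / alpha)
        t_new = t[idx] + tau
        done = t_new > t_obs             # the waiting period outlives the observation time
        # walkers whose waiting time ends before t_obs perform a Gaussian jump
        jump_idx = idx[~done]
        x[jump_idx] += rng.normal(a, sigma, jump_idx.size)
        t[idx] = t_new
        active[idx[done]] = False
    return x

x_t = ctrw_positions(n_walkers=20_000, t_obs=1000.0)
hist, edges = np.histogram(x_t, bins=200, density=True)   # approximates P(x, t)_CTRW
```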
The technique was used in computer simulations in the context of fractional Fokker-Planck dynamics [71], random diffusivity [56], population heterogeneity [72] and the one-dimensional Brownian search problem [55]. As mentioned before, the time intervals and displacements of the walkers under investigation are IID with common PDFs \(\phi(\tau)\) and \(f(\chi)\), respectively. Note the relation between \(\chi_{N-1}(x)\) and \(\chi_{N}(x)\), i.e., \(\chi_{N}(x)=\int_{-\infty}^{\infty}\chi_{N-1}(y)f(x-y)\mathrm{d}y\), and that the density of the particle's displacement after the first step is given exactly by \(f(x)\), i.e., \(\chi_{1}(x)=f(x)\). Then, using the convolution theorem of the Fourier transform, the well-known result follows \[\chi_{N}(x)=\frac{1}{\sqrt{2\pi\sigma^{2}N}}\exp\left(-\frac{(x-aN)^{2}}{2\sigma^{2}N}\right). \tag{9}\] Based on the renewal process [59], in Laplace space \(\widehat{Q}_{s}(N)\) follows \(\widehat{Q}_{s}(N)=\widehat{\phi}(s)^{N}(1-\widehat{\phi}(s))/s\). Considering the random variable \(\epsilon=N-t/\langle\tau\rangle\) and taking the inverse transform, the typical fluctuations follow [59, 60, 73] \[Q_{t}(N)\sim\frac{1}{(t/\overline{t})^{1/\alpha}}\mathcal{L}_{\alpha}\left(\frac{N-t/\langle\tau\rangle}{(t/\overline{t})^{1/\alpha}}\right) \tag{10}\] with \[\overline{t}=\langle\tau\rangle^{1+\alpha}/((\tau_{0})^{\alpha}|\Gamma(1-\alpha)|). \tag{11}\] Eq. (10) is valid in the limit of \(N-t/\langle\tau\rangle\propto(t/\overline{t})^{1/\alpha}\). Rare fluctuations, describing the scaling \(N-t/\langle\tau\rangle\propto t/\langle\tau\rangle\), were investigated in Ref. [60]. Here \(\mathcal{L}_{\alpha}(x)\) is the Levy distribution [34, 35], defined by \[\mathcal{L}_{\alpha}(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\exp(ikx)\exp[(-ik)^{\alpha}]\mathrm{d}k. \tag{12}\] Utilizing Eqs. (8), (9) and (10), we have \[P(x,t)_{\mathrm{CTRW}}\sim\frac{1}{(t/\overline{t})^{1/\alpha}}\int_{0}^{\infty}\mathcal{L}_{\alpha}\left(\frac{N-t/\langle\tau\rangle}{(t/\overline{t})^{1/\alpha}}\right)\frac{\exp\left(-\frac{(x-aN)^{2}}{2\sigma^{2}N}\right)}{\sqrt{2\pi\sigma^{2}N}}\mathrm{d}N. \tag{13}\] Note that Eq. (13) can be extended to more general situations, e.g. if the jump size distribution is fat-tailed. Starting from Eq. (13) and changing the variable to \(y=(N-t/\langle\tau\rangle)/(t/\overline{t})^{1/\alpha}\), we get \[P(x,t)_{\mathrm{CTRW}}\sim\int_{-\frac{t}{\langle\tau\rangle}\left(\frac{\overline{t}}{t}\right)^{1/\alpha}}^{\infty}\mathcal{L}_{\alpha}(y)\frac{\exp\left(-\frac{\left(x-a\frac{t}{\langle\tau\rangle}-ay\left(\frac{t}{\overline{t}}\right)^{1/\alpha}\right)^{2}}{2\sigma^{2}\left(\frac{t}{\langle\tau\rangle}+y\left(\frac{t}{\overline{t}}\right)^{1/\alpha}\right)}\right)}{\sqrt{2\sigma^{2}\pi\left(\frac{t}{\langle\tau\rangle}+y\left(\frac{t}{\overline{t}}\right)^{1/\alpha}\right)}}\mathrm{d}y, \tag{14}\] describing the scaling when \(x-at/\langle\tau\rangle\propto a(t/\overline{t})^{1/\alpha}\). Note that in the long time limit, the lower limit of the integral in Eq. (14) can be replaced with \(-\infty\); see Eq. (21) below. The reason is as follows: the dimensionless parameter \[\frac{t}{\langle\tau\rangle}\left(\frac{\overline{t}}{t}\right)^{1/\alpha}=\left(\frac{t}{\tau_{0}}\right)^{1-1/\alpha}\left(\frac{\alpha}{\Gamma(2-\alpha)}\right)^{1/\alpha}\rightarrow\infty \tag{15}\] for \(1<\alpha<2\) when \(t\rightarrow\infty\).
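When evaluating Eq. (13) or Eq. (14) numerically, one needs \(\mathcal{L}_{\alpha}\) itself; since the integrand of Eq. (12) at \(-k\) is the complex conjugate of that at \(+k\), a simple real-part quadrature suffices. The sketch below is our own illustration, with arbitrarily chosen grid sizes:

```python
import numpy as np

def levy_L(x, alpha=1.5, k_max=60.0, n_k=20001):
    """Evaluate Eq. (12): L_alpha(x) = (1/pi) Re int_0^inf exp(i k x + (-i k)^alpha) dk.
    For k > 0, (-i k)^alpha = k**alpha * (cos(pi*alpha/2) - 1j*sin(pi*alpha/2)),
    whose real part is negative for 1 < alpha < 2, so the integrand decays in k."""
    k = np.linspace(1e-9, k_max, n_k)
    damp = np.exp(k**alpha * (np.cos(np.pi * alpha / 2) - 1j * np.sin(np.pi * alpha / 2)))
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.array([np.trapz((np.exp(1j * k * xx) * damp).real, k) / np.pi for xx in x])

xi = np.linspace(-8.0, 8.0, 321)
L_vals = levy_L(xi)
print(np.trapz(L_vals, xi))   # close to 1; a small fraction of the weight sits in the fat tail
```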
The scaling form of the position reads \[P_{\mathrm{CTRW}}(\xi,t)\sim\int_{-\frac{t}{\langle\tau\rangle}(\overline{t}/t)^{\frac{1}{\alpha}}}^{\infty}\mathcal{L}_{\alpha}(y)\frac{\exp(-\frac{(\xi-y)^{2}}{2gg(y,t)})}{\sqrt{2\pi gg(y,t)}}\mathrm{d}y \tag{16}\] with \[\xi=(x-at/\langle\tau\rangle)/(a(t/\overline{t})^{1/\alpha}) \tag{17}\] and \[gg(y,t)=\frac{\sigma^{2}(t/\langle\tau\rangle+y(t/\overline{t})^{1/\alpha})} {a^{2}(t/\overline{t})^{2/\alpha}}. \tag{18}\] In the long time limit, Eq. (16) reduces to the well-known result [73, 74] \[P_{\mathrm{CTRW}}(\xi)\sim\mathcal{L}_{\alpha}(\xi), \tag{19}\] where we used the fact that \(\exp[-z^{2}/(2\rho^{2})]/\sqrt{2\pi\rho^{2}}\) tends to \(\delta(z)\) when \(\rho\to 0^{+}\) in Eq. (16). More precisely, Eq. (19) is valid for fixed \(\sigma\) and \(a\) and a long observation time \(t\). At the same time, we will consider a second dimensionless parameter \(\overline{\sigma}^{2}\), \[gg(y,t)\sim\frac{\sigma^{2}}{a^{2}}\left(\frac{t}{\tau_{0}}\right)^{1-\frac{2}{\alpha}}\left(\frac{\alpha}{\alpha-1}\right)^{\frac{2+\alpha}{\alpha}}|\Gamma(1-\alpha)|^{-\frac{2}{\alpha}}=\overline{\sigma}^{2}, \tag{20}\] which quantifies the convergence to the Levy distribution. We will then discuss three limits. If \(\overline{\sigma}^{2}\to 0\), Eq. (16) gives Eq. (19). However, consider a situation where the bias is weak, meaning that \(\sigma^{2}/a^{2}\) is very large; this is important in many applications that we consider, and corresponds to the linear response regime in which the external driving force, determined by \(a\), is small. The last limit is that of a long observation time but a finite \(\bar{\sigma}^{2}\). In this case, Eq. (14) is valid. Refer to Appendix A for more detailed discussions on \(\overline{\sigma}^{2}\). ### Derivation of the Fractional-Space Advection Diffusion Asymmetry Equation Rewriting Eq. (14) with a lower limit \(-\infty\), in the long time limit we get \[\mathcal{P}(x,t)=\int_{-\infty}^{\infty}\mathcal{L}_{\alpha}(y)\frac{\exp\left(- \frac{(x-a\frac{t}{\langle\tau\rangle}-ay(\frac{t}{\overline{t}})^{\frac{1}{\alpha}})^{2}}{2\sigma^{2}t/\langle\tau\rangle}\right)}{\sqrt{2\sigma^{2}\pi t/\langle\tau\rangle}}\mathrm{d}y; \tag{21}\] see Fig. 2. Figure 2: PDF of \(\xi\) for various \(\sigma\) listed in the figure. Here the first moment of waiting times, \(\langle\tau\rangle=0.3\), implies that \(t/\langle\tau\rangle\approx 3333.3\gg 1\). The simulation results, represented by the blue symbols, were obtained by averaging \(5\times 10^{5}\) particles with \(\alpha=1.5\) and \(t=1000\). The corresponding theoretical prediction, shown by the solid red lines, was derived from Eq. (21), i.e., the exact solution of Eq. (3), and it exhibits a perfect match with no fitting. Notice that what the limit theorem Eq. (19) predicts here is the top dashed black line, while our results, valid for a variety of \(a\) and \(\sigma\), show the deviations from the Lévy stable distribution \(\mathcal{L}_{\alpha}(\xi)\) when \(\sigma=2/\sqrt{2},4/\sqrt{2},6/\sqrt{2}\), and \(16/\sqrt{2}\). See further discussion in Appendix A. Here we use the expression \(\mathcal{P}(x,t)\) to denote the positional distribution \(P_{\mathrm{CTRW}}(x,t)\) in the long time limit. The behavior of \(\mathcal{P}(x,t)\) at short times \(t\) is now discussed. For this case, Eq. (21) reduces to \[\mathcal{P}(x,t)\simeq\frac{\exp(-\frac{x^{2}}{2\sigma^{2}t/\langle\tau\rangle})}{ \sqrt{2\pi\sigma^{2}t/\langle\tau\rangle}}, \tag{22}\] where we ignored the terms \(at/\langle\tau\rangle\) and \(ay(t/\overline{t})^{1/\alpha}\) in Eq. (21). 
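The deviations from the Lévy limit seen in Fig. 2 are controlled by the dimensionless parameter \(\overline{\sigma}^{2}\) of Eq. (20). As a quick numerical check (an illustrative snippet of ours, taking \(a=1\) as implied by Appendix A), for \(t=1000\), \(\alpha=1.5\), and \(\tau_{0}=0.1\) one indeed finds \(\overline{\sigma}^{2}\simeq 0.11\,\sigma^{2}/a^{2}\), in agreement with the values quoted in Appendix A.

```python
import numpy as np
from scipy.special import gamma

def sigma_bar_sq(sigma, a, t, alpha=1.5, tau0=0.1):
    """Dimensionless parameter of Eq. (20) quantifying convergence to the Levy law."""
    return (sigma**2 / a**2) * (t / tau0)**(1.0 - 2.0 / alpha) \
        * (alpha / (alpha - 1.0))**((2.0 + alpha) / alpha) \
        * abs(gamma(1.0 - alpha))**(-2.0 / alpha)

t = 1000.0
print(sigma_bar_sq(1.0, 1.0, t))                      # ~0.111, i.e. sigma_bar^2 ~ 0.11 sigma^2/a^2
for s in [2 / np.sqrt(2), 4 / np.sqrt(2), 6 / np.sqrt(2), 16 / np.sqrt(2)]:
    print(s, sigma_bar_sq(s, 1.0, t))                 # the last value is ~14.3, far from the Levy limit
```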
Equation (22) implies that for small \(t\) a Gaussian distribution is found. Besides, using \(\delta(x)=\lim_{\rho\to 0^{+}}\exp(-x^{2}/(2\rho^{2}))/\sqrt{2\pi\rho^{2}}\), we can see that the initial position of the process is \(\mathcal{P}(x,t\to 0)=\delta(x)\). Let us proceed with the discussion of the FADAE. Taking the Fourier transform of Eq. (21) with respect to \(x\) gives \[\begin{array}{ll}\widetilde{\mathcal{P}}(k,t)&=\int_{-\infty}^{\infty} \mathcal{L}_{\alpha}(y)\exp\left(-i\left(a\frac{t}{\langle\tau\rangle}+ay\left( \frac{t}{\overline{t}}\right)^{1/\alpha}\right)k-k^{2}\frac{\sigma^{2}t}{2\langle\tau \rangle}\right)dy\\ &=\exp\left(-\frac{t\sigma^{2}}{2\langle\tau\rangle}k^{2}-iak\frac{t}{ \langle\tau\rangle}+(-ik)^{\alpha}a^{\alpha}\frac{t}{\overline{t}}\right).\end{array} \tag{23}\] Setting \(t\to 0\), Eq. (23) results in \(\widetilde{\mathcal{P}}(k,t\to 0)=1\), illustrating the fact that \(\mathcal{P}(x,t\to 0)=\delta(x)\). By application of the differential operator \(\partial/\partial t\), we find the FADAE in Fourier space \[\frac{\partial}{\partial t}\widetilde{\mathcal{P}}(k,t)=-\frac{\sigma^{2}}{2 \langle\tau\rangle}k^{2}\widetilde{\mathcal{P}}(k,t)-ik\frac{a}{\langle\tau \rangle}\widetilde{\mathcal{P}}(k,t)+(-ik)^{\alpha}\frac{a^{\alpha}}{\overline {t}}\widetilde{\mathcal{P}}(k,t). \tag{24}\] Taking the inverse Fourier transform, in \((x,t)\) space, we get Eq. (3) given in the introduction [53]. The transport constants of Eq. (3) are as follows \[D=\frac{\sigma^{2}}{2\langle\tau\rangle},\quad V=\frac{a}{\langle\tau\rangle},\quad S=\frac{a^{\alpha}}{\overline{t}}. \tag{25}\] Here \(D\), carrying the dimension \([\sigma^{2}/\langle\tau\rangle]=\mathrm{cm^{2}s^{-1}}\), is the diffusion constant. The drift of the process is \(V\). The last constant \(S\) on the right-hand side of Eq. (3) describes the asymmetric property of the process. The operator \(\partial^{\alpha}/\partial(-x)^{\alpha}\) is the right Riemann-Liouville derivative [34, 35] with respect to space, whose Fourier transform is \(\mathcal{F}[\frac{\mathrm{d}^{\alpha}}{\mathrm{d}(-x)^{\alpha}}g(x)]=(-ik)^{ \alpha}\widetilde{g}(k)\). In real space \[\frac{\mathrm{d}^{\alpha}g(x)}{\mathrm{d}(-x)^{\alpha}}=\frac{(-1)^{n}}{ \Gamma(n-\alpha)}\frac{\mathrm{d}^{n}}{\mathrm{d}x^{n}}\int_{x}^{\infty}(y-x) ^{n-\alpha-1}g(y)\mathrm{d}y, \tag{26}\] where \(n\) is the smallest integer larger than \(\alpha\). We have used \(a>0\) in Eq. (3), but as discussed below, when \(a<0\), the left Riemann-Liouville derivative comes into play. We note in passing that mathematically the left Riemann-Liouville derivative is defined as [34, 35] \[\frac{\mathrm{d}^{\alpha}}{\mathrm{d}x^{\alpha}}g(x)=\frac{1}{\Gamma(n-\alpha )}\frac{\mathrm{d}^{n}}{\mathrm{d}x^{n}}\int_{-\infty}^{x}(x-y)^{n-\alpha-1}g (y)\mathrm{d}y \tag{27}\] with \(\mathcal{F}[\frac{\mathrm{d}^{\alpha}}{\mathrm{d}x^{\alpha}}g(x)]=(ik)^{ \alpha}\widetilde{g}(k)\). Here we present a summary of key insights extracted from Eq. (3): 1. The first two constants, i.e., \(D\) and \(V\), are the standard relations describing the classical advection-diffusion equation. If \(a\to 0\), both the asymmetry parameter \(S\) and the fractional space derivative become negligible, see Eq. (25). The same holds for the drift term. 2. Note that the solution \(\mathcal{P}(x,t)\) of Eq. (3) is normalized for any observation time \(t\) since \(\widetilde{\mathcal{P}}(k=0,t)=1\) according to Eq. (23). 3. We can check that the exact solution of Eq. (3) is Eq. (21). More precisely, using Eq. 
(23), we find that the solution is the convolution of a Levy distribution and a Gaussian distribution, namely \[\mathcal{P}(x,t)=\frac{1}{(St)^{1/\alpha}}\mathcal{L}_{\alpha}\left(\frac{x}{(St)^{1/\alpha}}\right)\bigotimes\frac{\exp\left(-\frac{(x-Vt)^{2}}{4Dt} \right)}{\sqrt{4\pi Dt}},\] (28) where \(\bigotimes\) denotes convolution with respect to \(x\). 4. When particles are only allowed to move in one direction, for example, \(f(\chi)=\delta(\chi-a)\). For this case, \(\chi_{N}(x)\) behaves as \(\chi_{N}(x)=\delta(x-aN)\). Utilizing Eqs. (8) and (10), we have \[P_{\mathrm{CTRW}}(x,t)\sim\frac{1}{a(t/\overline{t})^{1/\alpha}}\mathcal{L}_{ \alpha}\left(\frac{\frac{x}{a}-\frac{t}{\langle\tau\rangle}}{(t/\overline{t})^{1/\alpha}}\right).\] This is the same as Eq. (19) after changing variables. Here \(\chi_{N}(x)\) can be treated as a delta function when the variance of the displacement is finite. The fractional equation related to this case is given by Eq. (3), where the diffusion term is absent, indicating a strong bias scenario. 5. Under the influence of the bias, the fractional derivative operator \(\frac{\mathrm{d}^{\alpha}}{\mathrm{d}(-x)^{\alpha}}\) is with respect to space, even though the distribution of the step length is Gaussian. In that sense, a fat-tailed displacement is not a basic requirement for the fractional space derivative operator. 6. The order of taking the limits of \(t\) approaching infinity and \(a\) going to zero cannot be interchanged. As the bias of the system approaches zero, the solution of Eq. (3) becomes a Gaussian distribution for any observation time \(t\), indicating normal diffusion. However, in the presence of any disturbance, we observe super-diffusion in the long-time limit. 7. According to Eq. (3) or (23), we get the asymptotic behavior of the mean of the position \(\langle x(t)\rangle\sim at/\langle\tau\rangle\), growing linearly with time \(t\). As expected, Eq. (3) gives an infinite MSD, which is certainly not physical. In Ref. [68] we find that the MSD is sensitive to rare events, while Eq. (3) deals with typical fluctuations describing the central part of the positional distribution. A detailed discussion about the mean squared displacement (MSD) and fractional moments will be given in Appendix B. 8. According to Eqs. (23) and (25), we have \[\mathcal{P}(k,t)=\exp\left[-Dk^{2}t-ikVt+S(-ik)^{\alpha}t\right].\] (29) Instead of considering \(\mathcal{P}(x,t)\), we investigate \[\mathcal{Q}(k,t)=\exp(ikVt)\mathcal{P}(k,t)=\exp\left[-Dk^{2}t+S(-ik)^{\alpha }t\right].\] (30) Performing the Laplace transform with respect to \(t\), we find \[\mathcal{Q}(k,s)=\mathcal{P}\left(k,s-iVk\right)=\frac{1}{s+\frac{\sigma^{2}}{ 2\langle\tau\rangle}k^{2}-\frac{a^{\alpha}}{\overline{t}}(-ik)^{\alpha}}.\] (31) The last term of Eq. (31) demonstrates the competition between normal diffusion and super-diffusion. It can be seen that when \(\sigma^{2}k^{2}/(2\langle\tau\rangle)\gg|(-ik)^{\alpha}|a^{\alpha}/\bar{t}\), i.e., \(|k|\gg(a^{\alpha}2\langle\tau\rangle/(\bar{t}\sigma^{2}))^{1/(2-\alpha)}\), as mentioned we get the Gaussian distribution. For the opposite limit, the Levy stable law is derived. * The Levy stable law and Gaussian distribution, which are the limit laws for solving Eq. (3), are controlled by the dimensionless parameter \(\overline{\sigma}^{2}\). More details on this can be found in Sec. 5 in terms of the CTRW. * We further investigate the situation when the bias is negative, i.e., \(a<0\). Substituting Eq. (25) into Eq. 
(29), we have \[\widetilde{\mathcal{P}}(k,t)=\exp\left(-\frac{\sigma^{2}}{2\langle\tau\rangle} k^{2}t-ik\frac{a}{\langle\tau\rangle}t+\frac{a^{\alpha}}{\bar{t}}(-ik)^{ \alpha}t\right).\] Arranging the above equation, we find \[\widetilde{\mathcal{P}}(k,t)=\exp\left(-\frac{\sigma^{2}k^{2}t}{2\langle\tau \rangle}-ik\frac{at}{\langle\tau\rangle}+\frac{(-a)^{\alpha}}{\bar{t}}(ik)^{ \alpha}t\right).\] (32) Taking derivative with respect to \(t\) and then performing the inverse Fourier transform, Eq. (32) leads to \[\frac{\partial}{\partial t}\mathcal{P}=\frac{\sigma^{2}}{2\langle\tau\rangle }\frac{\partial^{2}}{\partial x^{2}}\mathcal{P}-\frac{a}{\langle\tau\rangle} \frac{\partial}{\partial x}\mathcal{P}+\frac{(-a)^{\alpha}}{\bar{t}}\frac{ \partial^{\alpha}}{\partial x^{\alpha}}\mathcal{P}.\] (33) In the case of a positive bias, we have the right Riemann-Liouville derivative in Eq. (3). While, if the bias is negative, the fractional operator in Eq. (3) is replaced by the left Riemann-Liouville derivative [see Eq. (27)], controlling the right fat tail of the positional distribution. * When waiting times have an infinite mean, i.e., Eq. (4) with \(0<\alpha<1\), and the displacement is still drawn from Gaussian distribution Eq. (6), the corresponding fractional advection diffusion equation [4, 30, 32, 33, 75] is \[\frac{\partial}{\partial t}\mathcal{P}(x,t)={}_{0}\mathcal{D}_{t}^{1-\alpha} \Big{[}D\frac{\partial^{2}}{\partial x^{2}}-V\frac{\partial}{\partial x} \Big{]}\mathcal{P}(x,t),\] (34) where \(D=\sigma^{2}/b_{\alpha}\) and \(V=a/b_{\alpha}\) with \(b_{\alpha}=\tau_{0}^{\alpha}\Gamma(1-\alpha)\). In Eq. (34), the fractional operator \({}_{0}\mathcal{D}_{t}^{1-\alpha}\) is in reference to time rather than the space. The fractional time operator \({}_{0}\mathcal{D}_{t}^{1-\alpha}\) called the Riemann-Liouville time derivative is as follows [37, 76] \[{}_{0}\mathcal{D}_{t}^{1-\alpha}g(t)=\frac{1}{\Gamma(\alpha)}\frac{\partial}{ \partial t}\int_{0}^{t}\frac{g(\tau)}{(t-\tau)^{1-\alpha}}d\tau\] (35) with \(0<\alpha<1\). Further, when \(0<\alpha<1\), the positional distribution has a right fat tail. While, in the context of \(1<\alpha<2\), the positional distribution has a left one. It can be seen that the forms of fractional equations with different \(\alpha\) show great differences. * Subordination discussed here is vastly different from the case for \(0<\alpha<1\). When \(1<\alpha<2\), the number of renewals follows the two-sided Levy stable law with an infinite variance, instead of the one-sided Levy distribution (also called the Mittag-Leffler distribution). 3. Solutions we discussed are valid for free boundary conditions. One can use the subordination scheme Eq. (13) to consider other cases. The key idea is that the number of steps in the long time limit is distributed according to the Levy law given in Eq. (10). Then a normal process is considered, for example diffusion in a finite medium, with the operational time \(N\). The Levy central limit theorem is then used to transform the operational time \(N\) (also the number of steps) to the laboratory time \(t\). 4. Benson et al. proposed an equation called fractional advection dispersion equation to describe contaminant source [44, 46] \[\frac{\partial}{\partial t}\mathcal{P}=-V\frac{\partial}{\partial x}\mathcal{ P}+K\left[p\frac{\partial^{\alpha}}{\partial x^{\alpha}}\mathcal{P}+q\frac{ \partial^{\alpha}}{\partial(-x)^{\alpha}}\mathcal{P}\right],\] (36) which is obtained from Levy flights; see Fig. 1 (a). If \(D=0\) and \(p=0\), Eq. 
(3) is consistent with Eq. (36). In Ref. [77], the authors use \(V=0.8\)m/h, \(D=0\), \(K=2.8\)m\({}^{1.51}\)/h, \(q=1\), and \(p=0\) to fit experimental contaminant data in terms of the breakthrough curves. According to this, one may interpret the data as coming from Levy flights. On the other hand, here we want to stress that these data may stem from the CTRW model with a strong bias in the long time limit, i.e., these experimental data are consistent with a CTRW with a fat-tailed waiting time distribution. In that sense, it indicates that both Levy flights and the CTRW can be used as tools to describe contaminants in disordered systems. This holds as long as we have information on the spreading packets and no physical insight into the underlying trajectories. ## 4 Applications As mentioned before, fractional diffusion equations are widely used as a tool to describe the dispersion of contaminants and other phenomena observed in non-equilibrium systems. For that, we explore several applications of the FADAE (3). These applications encompass calculating the FADAE in two dimensions, determining the positional distribution with a time-dependent bias and variance, analyzing breakthrough curves [66, 78], estimating the first passage time, and comparing them to CTRW dynamics. ### FADAE in two dimensions Here we consider the two-dimensional FADAE, which is not only of theoretical significance, but also of potential application value [66, 79, 80, 81]. For simplicity, the bias of the system is only in the \(x\) direction and the equation reads \[\frac{\partial}{\partial t}\mathcal{P}(x,y,t)=D_{x}\frac{\partial^{2}}{ \partial x^{2}}\mathcal{P}(x,y,t)-V_{x}\frac{\partial}{\partial x}\mathcal{P }(x,y,t)+S_{x}\frac{\partial^{\alpha}}{\partial(-x)^{\alpha}}\mathcal{P}(x,y, t)+D_{y}\frac{\partial^{2}}{\partial y^{2}}\mathcal{P}(x,y,t). \tag{37}\] Here \(D_{x}\), \(V_{x}\), \(S_{x}\), and \(D_{y}\) are constants. Taking double Fourier transforms, \(x\to k_{x}\) and \(y\to k_{y}\), we get \[\widetilde{\mathcal{P}}(k_{x},k_{y},t)=\exp\left(-D_{x}(k_{x})^{2}t-iV_{x}k_{x }t+(-ik_{x})^{\alpha}tS_{x}-D_{y}(k_{y})^{2}t\right). \tag{38}\] Using the convolution properties of the Fourier transform, Eq. (38) yields \[\mathcal{P}(x,y,t)=\int_{-\infty}^{\infty}\mathcal{L}_{\alpha}(z)\frac{\exp \left(-\frac{(x-V_{x}t-(S_{x}t)^{1/\alpha}z)^{2}}{4D_{x}t}\right)}{\sqrt{4\pi D_{x}t}} \frac{\exp\left(-\frac{y^{2}}{4D_{y}t}\right)}{\sqrt{4\pi D_{y}t}}dz, \tag{39}\] where \(\mathcal{L}_{\alpha}(z)\) is the non-symmetric Levy stable distribution Eq. (12). Below we obtain the four transport coefficients given in Eq. (37). In the language of the CTRW model, the displacement follows \[f(x,y)=\frac{1}{\sqrt{2\pi(\sigma_{x})^{2}}}\exp\left(-\frac{(x-a_{x})^{2}}{2( \sigma_{x})^{2}}\right)\times\frac{1}{\sqrt{2\pi(\sigma_{y})^{2}}}\exp\left(- \frac{y^{2}}{2(\sigma_{y})^{2}}\right), \tag{40}\] where \(a_{x}\), \(\sigma_{x}\neq 0\), and \(\sigma_{y}\neq 0\) are constants. In double Fourier space, \(x\to k_{x}\) and \(y\to k_{y}\), we get \[\widetilde{f}(k_{x},k_{y})=\exp\left(-ik_{x}a_{x}-\frac{1}{2}(\sigma_{x})^{2} (k_{x})^{2}-\frac{1}{2}(\sigma_{y})^{2}(k_{y})^{2}\right). \tag{41}\] For the waiting time, we continue to utilize the fat-tailed power law distribution Eq. (4). Using Eq. (25) and the same arguments as before, we find \[D_{x}=\frac{\sigma_{x}^{2}}{2\langle\tau\rangle},\ \ \ V_{x}=\frac{a_{x}}{ \langle\tau\rangle},\ \ \ S_{x}=\frac{(a_{x})^{\alpha}}{\overline{t}},\ \ \ D_{y}=\frac{\sigma_{y}^{2}}{2\langle\tau\rangle}. 
\tag{42}\] It can be seen that when the external force is only in the \(x\) direction, the fat tail of the packet of spreading particle \(\mathcal{P}(x,y,t)\) is with respect to \(x\). Note that the exact solution of Eq. (37) is Eq. (39), i.e., \(P_{\mathrm{CTRW}}(x,y,t)\) in the long time limit. What we want to stress is that when the bias is also in the \(y\) direction, the fractional space operator with respect to \(y\) should be added. See the solution of Eq. (37) in Fig. 3 and the marginal distribution in Fig. 4. ### Propagator with the time-dependent bias and variance Motivated by [66], we consider the time-dependent bias and variance determined by four states. In some sense, assumptions are more closely aligned with the real-world spread Figure 3: The dynamic of \(\mathcal{P}(x,y,t)/\mathrm{max}\mathcal{P}(x,y,t)\) obtained from Eq. (37) for the fractional advection-diffusion process in two dimensions. However, in the inset, the results are projected. The parameters of CTRW are \(\tau_{0}=0.1\), \(\alpha=1.5\), \(a_{x}=2\), \(t=1000\) and \(\sigma_{x}=\sigma_{y}=5\). of pollutants. This idea yields some interesting results, for example complex structure of breakthrough curves; see below. We suppose that the rapid injection of particles is done immediately after starting observing the process. In other words, the initial positions of all particles are \(\mathcal{P}(x,0)=\delta(x)\). Here we simulate the particles consisting of four states: (i) after the injection of the particles at time \(t=0\), they undergo CTRW processes with a constant bias determined by \(a\) in time interval \((0,t_{1})\) (here \(a\) is the mean of displacements), (ii) during the time interval \(t_{1}<t<t_{2}\) we increase the force, namely replace the bias \(a\) with \(4a\mu/(\mu+1)\) for \(\mu\geq 1/3\), (iii) we further decrease the bias to \(a/(2/3+\mu)\) from time \(t_{2}\) to \(t_{3}\), and (iv) finally we return to the state (i) from time \(t_{3}\); see Fig. 5 for the statistics of the bias versus time. Here \(\mu\) is a constant that controls the strength of the force. Let \(\sigma=5\) for states (i), (ii), and (iv), i.e., \(\sigma_{1}=\sigma_{2}=\sigma_{4}=5\). While, for state (iii), we use \(\sigma_{3}=0.1\). In particular, when \(\mu=1/3\), the bias of all states mentioned above is the same but the variance is still time-dependent. Figure 4: Plot of marginal distribution Eq. (37) compared with CTRW simulations. Here we denote \(\mathcal{P}_{2}(x,t)\) and \(\mathcal{P}_{2}(y,t)\) as the marginal distribution of \(\mathcal{P}(x,y,t)\) with respect to \(x\) and \(y\), respectively. It can be seen that \(\mathcal{P}_{2}(x,t)\) is the same as the solution of Eq. (3) and \(\mathcal{P}_{2}(y,t)\) is a Gaussian distribution. The parameters are the same as in Fig. 4. Figure 5: Illustrations of the time-dependent bias, e.g., \(a(t)\), when the process begins at time \(t=0\) and ends at time \(t\). In our simulations, the mean displacements of four states, i.e., bias, are determined by \(\{a,4a\mu/(\mu+1),a/(2/3+\mu),a\}\). Note that the elapsed time of each state should be a bit long, otherwise one can find that the difference between them is not very large. In Fourier space, the initial condition satisfies \(\widetilde{\mathcal{P}}(k,0)=1\). 
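Before turning to the analytic solution in Fourier space, note that the four-state protocol can also be simulated directly at the trajectory level; the sketch below is our own illustration (the function name, and the convention that a jump uses the state active at the instant it occurs, are our choices, and the baseline bias \(a=1\) is an assumed value). The state boundaries \(t_{1},t_{2},t_{3}\) and the factor \(\mu\) are as defined above.

```python
import numpy as np

def four_state_ctrw(t, t1, t2, t3, a=1.0, mu=6.0, alpha=1.5, tau0=0.1, rng=None):
    """Biased CTRW whose jump mean and standard deviation switch with laboratory time.

    States (cf. Fig. 5): (i) mean a, (ii) mean 4*a*mu/(mu+1), (iii) mean a/(2/3+mu),
    (iv) mean a; jump standard deviations 5, 5, 0.1, 5 as in the text.
    """
    rng = np.random.default_rng() if rng is None else rng
    means = [a, 4 * a * mu / (mu + 1), a / (2.0 / 3.0 + mu), a]
    stds = [5.0, 5.0, 0.1, 5.0]
    edges = [t1, t2, t3, np.inf]
    x, elapsed = 0.0, 0.0
    while True:
        tau = tau0 * rng.random() ** (-1.0 / alpha)   # fat-tailed waiting time, Eq. (4)
        if elapsed + tau > t:
            break
        elapsed += tau
        state = next(i for i, e in enumerate(edges) if elapsed <= e)
        x += rng.normal(means[state], stds[state])
    return x

# Final positions for the state durations 100, 100, 100, 700 used in Fig. 6
xs = [four_state_ctrw(1000.0, 100.0, 200.0, 300.0) for _ in range(1000)]
```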
In view of the special expression of \(\widetilde{\mathcal{P}}(k,t),\widetilde{\mathcal{P}}(k,t)\) for different states is as follows \[\widetilde{\mathcal{P}}(k,t)=\left\{\begin{array}{ll}\exp(-c_{11}k^{2}-ic_{ 12}k+c_{13}(-ik)^{\alpha}),&0<t\leq t_{1};\\ \exp(-c_{21}k^{2}-ic_{22}k+c_{22}(-ik)^{\alpha}),&t_{1}<t\leq t_{2};\\ \exp(-c_{31}k^{2}-ic_{32}k+c_{33}(-ik)^{\alpha}),&t_{2}<t\leq t_{3};\\ \exp(-c_{41}k^{2}-ic_{42}k+c_{43}(-ik)^{\alpha}),&t_{3}<t,\end{array}\right. \tag{43}\] with \[c_{m1}=\left\{\begin{array}{ll}t\frac{\sigma_{1}^{2}}{2(\gamma)},&m=1;\\ t\frac{\sigma_{2}^{2}}{2(\gamma)}+\frac{t_{1}(\sigma_{1}^{2}-\sigma_{2}^{2})} {2(\gamma)},&m=2;\\ t\frac{\sigma_{3}^{2}}{2(\gamma)}+\frac{t_{1}(\sigma_{1}^{2}-\sigma_{2}^{2})} {2(\gamma)}+\frac{t_{2}(\sigma_{2}^{2}-\sigma_{3}^{2})}{2(\gamma)},&m=3;\\ t\frac{\sigma_{4}^{2}}{2(\gamma)}+\frac{t_{1}(\sigma_{1}^{2}-\sigma_{2}^{2})} {2(\gamma)}+\frac{t_{2}(\sigma_{2}^{2}-\sigma_{3}^{2})}{2(\gamma)}+\frac{t_{ 3}(\sigma_{1}^{2}-\sigma_{4}^{2})}{2(\gamma)},&m=4,\end{array}\right. \tag{44}\] \[c_{m2}=\left\{\begin{array}{ll}a_{1}\frac{t}{(\gamma)},&m=1;\\ \left[a_{1}t_{1}+a_{2}(t-t_{1})\right]\frac{1}{\langle\gamma\rangle},&m=2;\\ \left[a_{1}t_{1}+a_{2}(t_{2}-t_{1})+a_{3}(t-t_{2})\right]\frac{1}{\langle \gamma\rangle},&m=3;\\ \left[a_{1}t_{1}+a_{2}(t_{2}-t_{1})+a_{3}(t_{3}-t_{2})+a_{4}(t-t_{3})\right] \frac{1}{\langle\gamma\rangle},&m=4,\end{array}\right. \tag{45}\] and \[c_{m3}=\left\{\begin{array}{ll}a_{1}^{\alpha}t\frac{1}{\bar{t}},&m=1;\\ \left[a_{1}^{\alpha}t_{1}+a_{2}^{\alpha}(t-t_{1})\right]\frac{1}{\bar{t}},&m=2; \\ \left[a_{1}^{\alpha}t+a_{2}^{\alpha}(t_{2}-t_{1})+a_{3}^{\alpha}(t-t_{2}) \right]\frac{1}{\bar{t}},&m=3;\\ \left[a_{1}^{\alpha}t_{1}+a_{2}^{\alpha}(t_{2}-t_{1})+a_{3}^{\alpha}(t_{3}-t_{ 2})+a_{4}^{\alpha}(t-t_{3})\right]\frac{1}{\bar{t}},&m=4.\end{array}\right. \tag{46}\] Here \(m=1,2,3,4\) is related to the number of states. Recall that \(\bar{t}=\langle\tau\rangle^{1+\alpha}/[(\tau_{0})^{\alpha}|\Gamma(1-\alpha)|]\). In calculations, the main idea is that the final position of each state is treated as the initial position of the next stage. The inverse Fourier transform of Eq. (43) yields \[\mathcal{P}(x,t)=\int_{-\infty}^{\infty}\frac{1}{\sqrt{4\pi c_{m1}}}\exp\left[ -\frac{(x-y-c_{m2})^{2}}{4c_{m1}}\right]\frac{1}{(c_{m3})^{1/\alpha}}\mathcal{ L}_{\alpha}\left[\frac{y}{(c_{m3})^{1/\alpha}}\right]dy, \tag{47}\] describing the positional distribution of the mentioned four states. Note that when \(0<t<t_{1}\), the solution Eq. (47) is the same as Eq. (21) or that of Eq. (3). Below, we use Eq. (47) to predict breakthrough curves [82]. ### Breakthrough curves Contamination spreading, as one of the most crucial problems ranging from environment to agriculture, has attracted a lot of attention [66, 80, 81, 83, 84]. In real systems, how to quantity the contaminant transport is quite a challenge for researchers since the processes are so complex. In [66], Nissan, Dror and Berkowitz considered the spreading particle with changing conditions to illustrate the positive and negative effects of the ambient environment. It is no wonder that the changing force fields are quite common and vital in the real world. Theoretical predictions about breakthrough curves, which can be directly measured in experiments, were made using the CTRW particle tracking approach in [66]. While, here we choose Eq. (47) as a tool to describe breakthrough curves. 
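Numerically, this piecewise solution can be evaluated without writing out the coefficients explicitly: the exponent of \(\widetilde{\mathcal{P}}(k,t)\) is accumulated state by state (the end of one state serving as the initial condition of the next) and the Fourier transform is inverted by quadrature. The sketch below is our own illustration of this composition idea; it assumes that each state contributes the transport constants of Eq. (25) built from its own \(a_{m}\) and \(\sigma_{m}\), the baseline bias \(a=1\) is an assumed value, and the \(k\)-grid is tuned to the Fig. 6 parameters.

```python
import numpy as np
from scipy.special import gamma

def propagator_piecewise(x, t, switch_times, a_list, sigma_list, alpha=1.5, tau0=0.1):
    """P(x,t) for a piecewise-constant bias/variance protocol, via Fourier inversion."""
    mean_tau = alpha * tau0 / (alpha - 1.0)
    t_bar = mean_tau ** (1.0 + alpha) / (tau0 ** alpha * abs(gamma(1.0 - alpha)))
    edges = [0.0] + list(switch_times) + [np.inf]
    k = np.linspace(1e-9, 0.5, 200001)                 # P~(-k) = conj(P~(k)), so k > 0 suffices
    log_pk = np.zeros_like(k, dtype=complex)
    for m, (a_m, s_m) in enumerate(zip(a_list, sigma_list)):
        dt = min(t, edges[m + 1]) - edges[m]           # time spent in state m up to time t
        if dt <= 0:
            break
        D, V, S = s_m ** 2 / (2 * mean_tau), a_m / mean_tau, a_m ** alpha / t_bar
        log_pk += (-D * k ** 2 - 1j * V * k + S * (-1j * k) ** alpha) * dt
    integrand = np.real(np.exp(1j * k * x + log_pk))
    return integrand.sum() * (k[1] - k[0]) / np.pi

# Probability density at the detection point x_b = 1600 at one observation time
a1, mu = 1.0, 6.0
a_states = [a1, 4 * a1 * mu / (mu + 1), a1 / (2.0 / 3.0 + mu), a1]
print(propagator_piecewise(1600.0, 400.0, [100.0, 200.0, 300.0], a_states, [5.0, 5.0, 0.1, 5.0]))
```

Scanning the observation time \(t\) while keeping \(x_{b}\) fixed yields a breakthrough curve of the type shown in Fig. 6.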
The breakthrough curves are measured in the sense of the distribution, indicating the probability of particles being at \(x_{b}\) at a specific time \(t\). The differences and similarities between the constant and the time-dependent force are quite interesting. When particles experience small disturbances or the total time of the first three states is short, the response of time-dependent force, calculated by the density of the position, is weak except for a'shift'. However, when we increase the total time of the first three states, the mentioned two cases exhibit great differences; see Fig. 6. Note that in our discussions the detection point is \(x_{b}=1600\). Choosing the suitable detection point, i.e., \(x_{b}\), is a critical factor in observing the distinctive structure of the particle packet. For a short time \(t\), such as \(t=10\), it can be seen that \(\mathcal{P}(x_{b},t)\to 0\) since all the particles are on the way to the site \(x_{b}\) pushed by the force and need more time to visit the position \(x_{b}\). As particles enter the second state, they are moving faster than the state (i) for \(\mu=6\), resulting in a rapid increase in the probability of the position. During the time interval \((t_{2},t_{3})\), the disturbance and the variance are weak, leading to a more or less flat breakthrough curve. At the end of this state, there are still numerous particles being located at the left side of \(x_{b}\). In the last state, \(\mathcal{P}(x_{b},t)\) slowly tends to zero. ### First passage time So far we considered problems with free boundary conditions. The boundary conditions for fractional space diffusion equations are generally non-trivial. This is because fractional Figure 6: Theoretical predictions of breakthrough curves together with the simulations plotted by the symbols. The solid lines are the theories based on Eq. (47). Time intervals of the four states are \(100,100,100,700,\) respectively. Packets of particles are asymmetric in the long time limit, which are the same for both constant and time-dependent cases. Semi-log scales are represented to show asymmetric properties and the heavy tail; see the inset. Breakthrough curves are observed at the position \(x_{b}=1600\). Here the parameters are \(\mu=1/3\) (red line), \(\mu=6\) (black line), \(\tau_{0}=0.1,\alpha=1.5,\)\(\sigma_{1}=\sigma_{2}=\sigma_{4}=5\), and \(\sigma_{3}=0.1\). operators are non-local in space. Our approach to the problem is to use the subordination method, namely rely on Eqs. (8,10). The idea is as follows: We showed already that fluctuations of the number of renewals \(N\) is given by Levy statistics. We may think about the whole problem as a normal process, where \(N\) is an operational time. Then this normal process is transformed from \(N\) to the laboratory time \(t\). By normal process, we mean, for example, Brownian diffusion with advection in a finite domain, with reflecting or absorbing boundary conditions. Here we consider as an example the case of the first passage time problem [85, 86, 87, 88, 89]. The first passage time serves as a significant tool for describing the time required by a particle to reach a specific point or an absorbing barrier. For standard diffusion this means that we use an absorbing boundary condition (see below). We will later show how to use subordination to obtain the solution for the anomalous process. Before doing so we will briefly delve into the method of images. It was shown previously, that for Levy flights the image method fails [90, 91, 92, 93]. 
However, in our case, we do not have any Levy flight as an underlying process. Instead, we use fat-tailed distributions for waiting times. We will first show that the method of images generally fails in our case, however when the bias is weak it is a reasonable approximation. We will then turn to the more general solution using subordination. #### 4.4.1 Recap for the image method and it's failure Let us determine the first passage probability based on Eq. (3) for a diffusing particle that starts at \(x_{0}=0\). As mentioned, we use the image method and show that while it is generally not a good approximation, it works well when the bias is small. Here we assume that the absorbing boundary condition is \(x_{ab}\) and use \(C(x,t)\) to denote the concentration. Thus, when \(x\geq x_{ab}\), we have \(C(x,t)=0\). The most appealing approach, dealing with the absorbing boundary condition, is the method of images, which stems from electrostatics. As mentioned we will soon show, following Eq. (3), that this method does not work when the bias is not terribly small. In real space, the image method leads to [85, 33], \[C(x,t)=\mathcal{P}(x,t)-W\mathcal{P}(x-2x_{ab},t),\ -\infty<x\leq x_{ab}, \tag{48}\] where the unknown parameter \(W\) is defined below. Here \(\mathcal{P}(x,t)\) denotes the positional distribution without absorbing condition, for example, below we use the solution of Eq. (3) to check the validity of the image method. It can be seen that the concentration is the difference between \(\mathcal{P}(x,t)\) and \(\mathcal{P}(x-2x_{ab},t)\) with a weight \(W\) determined by \(x_{ab}\). Recall that \(C(x,t)\) vanish on \(x_{ab}\), i.e., \(C(x,t)=0\). Then \(W\) obeys \[\mathcal{P}(x_{ab},t)-W\mathcal{P}(-x_{ab},t)=0.\] Thus, we get with the image method \[W=\frac{\mathcal{P}(x_{ab},t)}{\mathcal{P}(-x_{ab},t)}. \tag{49}\] Consider the well-studied case of a particle undergoing normal diffusion (Eq. (3) with \(S=0\)) with an absorbing condition at \(x=x_{ab}\). Thus, \(\mathcal{P}(x,t)\) in the absence of the absorbing boundary condition follows Gaussian distribution \[\mathcal{P}(x,t)=\frac{1}{\sqrt{4\pi Dt}}\exp\Big{(}-\frac{(x-Vt)^{2}}{4Dt} \Big{)}, \tag{50}\] which leads to \[W=\exp\Big{(}x_{ab}\frac{V}{D}\Big{)} \tag{51}\] according to Eq. (49). Note that in the general case \(W\) is determined by \(x_{ab}\), \(D\), \(V\) and \(S\). Based on Eqs. (48) and (49), mathematically, the concentration is as follows \[C(x,t)=\mathcal{P}(x,t)-\frac{\mathcal{P}(x_{ab},t)}{\mathcal{P}(-x_{ab},t)} \mathcal{P}(x-2x_{ab},t) \tag{52}\] with \(-\infty<x\leq x_{ab}\). Equation (52) is demonstrated in Fig. 15 in C for different absorbing conditions. We see here that for weak fields the image theory works since the process is almost Gaussian. #### 4.4.2 Subordinating the first passage problem To solve this problem, we use a subordination technique that involves utilizing the method of images on the \(\chi_{N}(x)\). Based on Eq. (8), the discrete form of \(C(x,t)\) follows \[C(x,t)\sim\sum_{N=0}^{\infty}Q_{t}(N)\chi_{N}^{*}(x), \tag{53}\] describing the subordinated process \(x\) as a function of time \(N\). Here \(Q_{t}(N)\) is the PDF of the number of the renewals given in Eq. 
(10) and \(\chi_{N}^{*}(x)\) is the solution of the ordinary Fokker-Planck equation \[\frac{\partial}{\partial N}P(x,N)=-a\frac{\partial}{\partial x}P(x,N)+\frac{ \sigma^{2}}{2}\frac{\partial^{2}}{\partial x^{2}}P(x,N) \tag{54}\] with the initial condition \(P(x,N=0)=\delta(x)\) and different boundary conditions, specifically, an absorbing boundary condition at \(x=x_{ab}\). The same approach can be used for other types of boundary conditions. Setting the absorbing condition at \(x=x_{ab}\) in Eq. (54) and using the method of images for the solution of Eq. (54) yield \[\chi_{N}^{*}(x)=\frac{\exp\Big{(}-\frac{(x-aN)^{2}}{2\sigma^{2}N}\Big{)}}{ \sqrt{2\pi\sigma^{2}N}}-\exp\left(\frac{2ax_{ab}}{\sigma^{2}}\right)\frac{\exp \Big{(}-\frac{(x-2x_{ab}-aN)^{2}}{2\sigma^{2}N}\Big{)}}{\sqrt{2\pi\sigma^{2}N }}. \tag{55}\] In the long time limit, the continuous form of Eq. (53) follows \[C(x,t)=\left(\frac{\overline{t}}{t}\right)^{\frac{1}{\alpha}}\int_{0}^{\infty}\mathcal{L}_{ \alpha}\left(\frac{N-t/\langle\tau\rangle}{(t/\overline{t})^{1/\alpha}}\right)\chi_{N}^{* }(x)\mathrm{d}N. \tag{56}\] Equation (56) is verified in Fig. 7, showing a perfect match. We now compare the solution Eq. (56) with that of Eq. (3) with free boundary conditions. For that, the solution of Eq. (3) without the absorbing condition is plotted by dashed black lines. As expected, when \(x\) is much smaller than \(x_{ab}\), \(C(x,t)\) is consistent with Eq. (3), i.e., the random particles are not affected by the absorbing condition, or most of the particles have not yet reached the position near \(x_{ab}\). In addition, when the bias is strong, Eq. (3) agrees with Eq. (56) for \(x<x_{ab}\), at least to the naked eye. Roughly speaking, this is because when the bias is strong, no particles are coming back. Thus, the absorbing condition under study loses its role. However, if \(x\) approaches \(x_{ab}\) and the bias is weak, Eq. (3) and Eq. (56) illustrate two different behaviors. See (d) in Fig. 7. We further consider the first passage time using the survival probability. The survival probability \(\mathcal{S}(t)\), describing the probability that the particles do not arrive at the position \(x_{ab}\) until the time \(t\), reads \[\mathcal{S}(t)=\int_{-\infty}^{x_{ab}}C(x,t)dx. \tag{57}\] Let \(t_{f}\) be the time to visit the position \(x_{ab}\) for the first time. Utilizing Eq. (57), the PDF of \(t_{f}\) reads \[\varphi(t_{f})=-\frac{d\mathcal{S}(t_{f})}{dt_{f}}. \tag{58}\] From Eqs. (53) and (58), we have \[\begin{array}{rl}\varphi(t_{f})&\sim-\sum_{N=1}^{\infty}\frac{d}{dt_{f}}Q_{t _{f}}(N)\int_{-\infty}^{x_{ab}}\chi_{N}^{*}(x)dx\\ &=\frac{1}{2}\sum_{N=1}^{\infty}\frac{d}{dt_{f}}Q_{t_{f}}(N)\Big{(}-\mathrm{ erfc}\left(\frac{x_{ab}-aN}{\sqrt{2}\sigma\sqrt{N}}\right)+e^{\frac{2ax_{ab}}{ \sigma^{2}}}\mathrm{erfc}\left(\frac{aN+x_{ab}}{\sqrt{2}\sigma\sqrt{N}}\right) \Big{)}\end{array} \tag{59}\] with \[\frac{d}{dt_{f}}Q_{t_{f}}(N)=-\mathcal{L}_{\alpha}\left(\frac{N-\frac{t_{f}}{ \langle\tau\rangle}}{(\frac{t_{f}}{\bar{t}})^{1/\alpha}}\right)\left(\frac{ \bar{t}}{t_{f}}\right)^{1/\alpha}\frac{1}{\alpha t_{f}}+\left(\frac{\bar{t}}{ t_{f}}\right)^{1/\alpha}\frac{d}{dt_{f}}\mathcal{L}_{\alpha}\left(\frac{N- \frac{t_{f}}{\langle\tau\rangle}}{(\frac{t_{f}}{\bar{t}})^{1/\alpha}}\right).\] Here \(\mathrm{erfc}(z)\) denotes the complementary error function, i.e., \(\mathrm{erfc}(z)=1-\mathrm{erf}(z)\) with the error function \(\mathrm{erf}(z)=\frac{2}{\sqrt{\pi}}\int_{0}^{z}\exp(-\tau^{2})d\tau\). Eq. (59) is confirmed in Fig. 8, displaying a perfect match. 
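The analytical prediction can be cross-checked by a direct Monte Carlo estimate of the first passage time: in the CTRW the position changes only at renewal instants, so it suffices to record the first jump time at which the walker reaches or crosses \(x_{ab}\). The self-contained sketch below is our own code; \(a\) and \(x_{ab}\) follow the caption of Fig. 7, while \(\sigma=5\) is an assumed value.

```python
import numpy as np

def first_passage_time(x_ab, alpha=1.5, tau0=0.1, a=2.0, sigma=5.0, t_max=1e5, rng=None):
    """First renewal time at which a biased CTRW reaches x >= x_ab (np.inf if never before t_max)."""
    rng = np.random.default_rng() if rng is None else rng
    x, t = 0.0, 0.0
    while t < t_max:
        t += tau0 * rng.random() ** (-1.0 / alpha)   # trapping time, Eq. (4)
        x += rng.normal(a, sigma)                    # Gaussian jump, Eq. (6)
        if x >= x_ab:
            return t
    return np.inf

rng = np.random.default_rng(0)
samples = np.array([first_passage_time(7300.0, a=2.0, rng=rng) for _ in range(200)])
print(np.median(samples))   # of the order of x_ab / (a / <tau>) ~ 1100 for these parameters
```

A histogram of such samples can be compared directly with the theoretical PDF of Eq. (59), as in Fig. 8.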
## 5 Three limiting laws of the positional distribution Using Eq. (3), we derive three limiting laws that characterize the CTRW model under different conditions: when \(S\) tends towards infinity, when \(D\) approaches infinity, and when both \(S\) and \(D\) are constants. More exactly, these laws include the Levy stable distribution showing the statistics when \(x-Vt\propto(St)^{1/\alpha}\), Gaussian distribution in the context of \(x-Vt\propto\sqrt{Dt}\), and a general expression that encompasses all values of \(D\), \(V\) and \(S\). Taking Laplace transform on Eq. (29) with respect to \(t\), we have \[\mathcal{P}(k,s)=\frac{1}{s+Dk^{2}+ikV-S(-ik)^{\alpha}}. \tag{60}\] We are interested in the statistics of \(x\) in the long time limit, which means that both \(s\) and \(k\) go to zero in Eq. (60). Based on the above equation, the mentioned three laws will be discussed. ### Levy stable distribution when \(S\to\infty\) Let \(S\to\infty\), Eq. (60) yields \[\mathcal{P}(k,s)\sim\frac{1}{s+iVk-S(-ik)^{\alpha}}, \tag{61}\] where we dropped the term \(Dk^{2}\) due \(Dk^{2}\ll iVk,\ S(-ik)^{\alpha}\). The inverse Laplace-Fourier transforms of Eq. (61) lead to \[\mathcal{P}(x,t)\sim\frac{1}{(St)^{1/\alpha}}\mathcal{L}_{\alpha}\left(\frac{ x-Vt}{(St)^{1/\alpha}}\right). \tag{62}\] Figure 8: Behaviors of the first passage time PDF generated by trajectories of particles for various \(a\). The parameters are the same as in Fig. 7. The different lines describe the theoretical result obtained from Eq. (59). Figure 7: Plot of concentration \(C(x,t)\) using Eq. (56) for different biases and locations of absorbing boundary conditions. Here we choose \(t=1000\), \(\tau_{0}=0.1\), \(\alpha=3/2\), and \(10^{7}\) realizations. We use \(a=2,x_{ab}=7300\); \(a=1,x_{ab}=3700\); \(a=1/2,x_{ab}=1850\); and \(a=3/10,x_{ab}=1100\) for subplot (a) to (d), respectively. The Solution of Eq. (3) without absorbing condition is plotted by the dashed lines. Integrating Eq. (62) from negative infinity to positive infinity implies that \(\mathcal{P}(x,t)\) is a normalized density. See also the equivalent expression given in Eq. (19). This is illustrated in Fig. 9, where the PDF of \(\xi=(x-Vt)/(St)^{1/\alpha}\) with \(\alpha=1.5\) is plotted. Obviously, the PDF of \(\xi\) is asymmetric with respect to \(\xi\), whose two tails show two different dynamic behaviors, namely the right-hand side of the tail tends to zero rapidly but the other one decays slowly like a power law; see the inset in Fig. 9. More precisely, based on Eq. (12), we get \(\mathcal{L}(\xi)\sim(-\xi)^{-1-\alpha}/\Gamma(-\alpha)\), being the same as the tail of the symmetric Levy stable distribution, for \(\xi\to-\infty\). Thus, when \(q>\alpha\), the integral \(\int_{-\infty}^{\infty}|\xi|^{q}\mathcal{L}_{\alpha}(\xi)\mathrm{d}\xi\) diverges. This means that Eq. (62) does not give any information to the MSD, which will be discussed in B. ### Gaussian distribution when \(D\to\infty\) Note that Eq. (62) is independent of \(D\), and when \(D\) is large, we expect this approximation to fail for a large but finite \(t\). Here we focus on the case when \(s+ikV\propto k^{2}\). As previously discussed, it is related to the linear response regime. In this limit, according to Eq. (60) \[\mathcal{P}(k,u)\sim\frac{1}{s+Dk^{2}+ikV}. \tag{63}\] By inversion, using the shifting property of the inverse Fourier transform, yields the limiting distribution of \(x\). 
The scaling form of \(\varsigma=(x-Vt)/\sqrt{2Dt}\) gives the Gaussian distribution with mean zero and variance one \[\mathcal{P}(\varsigma)\sim\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{\varsigma^{2 }}{2}\right); \tag{64}\] Figure 9: PDF \(\mathcal{L}_{\alpha}(\xi)\) versus \(\xi=(x-Vt)/(a(St)^{1/\alpha})\) for a biased CTRW model with the waiting time distribution Eq. (4) and Gaussian distribution of the step length Eq. (6). The red line is the theoretical prediction Eq. (62), showing the limit of typical fluctuations when \(x-Vt\) is of the order of \(t^{1/\alpha}\). The corresponding simulation result, represented by symbols ‘\(\circ\)’, is obtained by averaging \(5\times 10^{6}\) trajectories. Here the parameters are \(t=1000\), \(a=2\), \(\sigma=0.07\), and \(\tau_{0}=0.1\). We will show how the limiting law Eq. (62) fails for a finite time \(t\) or a large \(D\) and give a general law, together with a slow convergence; see Fig. 2 and below discussions. see Fig. 10. As expected, the simulations are consistent with the theoretical result Eq. (64) for a large \(D\). Recall that in Figs. 9 and 10, we used the same observation time \(t\), i.e., \(t=1000\), but propagators are totally different (one is the Levy stable law and the other is Gaussian distribution). It indicates that these laws are determined by the relationship between \(D\) and \(S\). The current challenge is to determine a universal form that can represent both of the mentioned scalings. ### Typical fluctuations We have discussed that under certain conditions the positional distribution follows Levy stable distribution Eq. (62) or Gaussian distribution Eq. (64), determined by the relation between \(D\) and \(S\). Therefore, it would be beneficial to acquire a universal law that applies to the aforementioned scenarios. Based on Eq. (60), we have \[\mathcal{P}(x,t)=\frac{1}{\sqrt{4\pi DS}^{1/\alpha}t^{1/\alpha+1/2}}\int_{- \infty}^{\infty}\mathcal{L}_{\alpha}\left(\frac{y}{(St)^{1/\alpha}}\right) \exp\left(-\frac{(x-Vt-y)^{2}}{4Dt}\right)dy. \tag{65}\] We denote Eq. (65), i.e., the solution of Eq. (3), as typical fluctuations describing the central of the positional distribution. As mentioned before, in the long time limit, Eq. (65) leads to the Levy stable distribution Eq. (62). Figure 11 demonstrates the validity of Eq. (65) for various values of \(t\). For comparison, we also plot the solution Eq. (16) in Fig. 12. Eq. (16) agrees with Eq. (65) for large \(x\), it rapidly approaches zero as \(x\) approaches zero, whereas Eq. (65) exhibits a cutoff at the tail. Therefore, the primary difference between the two equations lies in small values of \(x\). To summarize, typical fluctuations Eq. (65), giving the information when \(x-Vt\propto t^{1/\alpha}\), are valid for numerous \(D\), \(V\), \(S\) and \(t\). One may wonder "why Eq. (65) is more useful than Figure 10: The dynamic of \(\mathcal{P}(\varsigma)\) with scaling \(\varsigma=(x-Vt)/\sqrt{2Dt}\) for different observation times \(t\). Here we choose \(\alpha=1.5\), \(a=1\), \(\sigma=11\), \(\tau_{0}=0.1\), and \(\langle\tau\rangle=0.3\). These parameters suggest that the process can be approximated as nearly Gaussian, with \(D\) being approximately 201.7 and \(S\) being approximately 2.3. The full red line depicts the Gaussian distribution Eq. (64), which represents the behaviors when \(x-Vt\) is of the order of \(\sqrt{t}\). On the other hand, the symbols correspond to simulation results obtained by averaging \(10^{6}\) trajectories. Eq. (19)?". 
The reason is as follows: Mathematically, Eq. (19) is valid under the condition that \(gg(y,t,\sigma,a,\alpha)\to 0\). This implies that the observation time \(t\) should be especially long, for example when \(a\) is small and \(\sigma\rightarrow\infty\). While, Eq. (65) avoids this, this means that the condition reduces to \(\langle N\rangle\propto t/\langle\tau\rangle\). ## 6 Conclusion and discussion In the context of the CTRW model, if the displacement follows a narrow distribution with a non-zero mean and the waiting time has an infinite mean, the fractional operator of the Figure 11: Graph of densities \(\mathcal{P}(\xi,t)\) with \(\xi=(x-at/\langle\tau\rangle)/(a(t/\bar{t})^{1/\alpha})\) for different times \(t\). The red solid lines represent the analytical scaling result obtained from Eq. (65) and the corresponding blue symbols are simulations of \(5\times 10^{6}\) trajectories. The dashed black line is the limiting law \(\mathcal{L}_{\alpha}(\xi)\) predicted by Eq. (62). Just as the figure shows, with the increase of the observation time \(t\), the PDF \(\mathcal{P}(\xi,t)\) tends to \(\mathcal{L}_{\alpha}(\xi)\) slowly. The parameters are \(\alpha=1.5\), \(\tau_{0}=0.1\), \(a=1\), \(\langle\tau\rangle=0.3\) and \(\sigma=1/\sqrt{2}\). See also discussions in Appendix A. Figure 12: Comparison between Eq. (16) and Eq. (65) for different forces. The symbols are the plot of Eq. (16) and the solid lines describe the solution of Eq. (3) or Eq. (65). Here we choose \(t=1000\), \(t_{0}=0.1\), \(\alpha=1.5\), and \(\delta=1\). diffusion equation is linked to the Riemann-Liouville time derivative, as discussed in Refs. [4, 33]. However, nearly two decades later, the fractional-space advection-diffusion equation was developed for situations in which waiting times possess a finite mean and an infinite variance, together with the mentioned displacements [53]. As an extension of work [53], here we demonstrated further evidence of Eq. (3), gave more applications, and showed limiting laws of the kinetic equation, through our in-depth explanations. At the same time, theoretical predictions are checked by simulations of the CTRW model. An issue here is that for Levy flights, with a fat-tailed distribution of jump lengths, the method of images was shown to lead to wrong results, due to the non-local behavior [90, 91, 92]. One may wonder, whether or not the method of images will work for our process which does not include any fat-tailed jump length distribution. We show that generally, it does not work, unless for the case when the bias is small, then the image method is a valid approximation. For that, a subordination method was used. This method holds in general beyond the first passage problem, and it is also very different if compared to the subordination method used for \(0<\alpha<1\). In practice we use the image method on the conditional probability \(\chi_{N}(x)\), instead of the positional distribution \(\mathcal{P}(x,t)\). In particular, for a free particle, we assume Eq. (54) has a free boundary condition. In that sense, \(\chi_{N}^{*}(x)\) Eq. (55) reduces to \(\chi_{N}(x)\) given in Eq. (9). The mentioned strategy can also be extended to a much more general case, for example, the diffusion of particles on a finite domain. For applications, we also discussed the two-dimensional diffusion equation and breakthrough through curves with time-dependent bias and variance. We demonstrated that the solution of Eq. 
(3) is obtained by convolving the Gaussian distribution with the Levy stable distribution, see Eq. (65). Here we denote the solution of Eq. (3) as typical fluctuations of CTRW describing a wide range of \(D\), \(V\), and \(S\). In the language of the biased CTRW, the Levy stable distribution stems from the PDF of the number of renewals, and the Gaussian distribution comes from the conditional distribution \(\chi_{N}(x)\). When the bias is strong, i.e., \(S\to\infty\), the diffusion term in Eq. (3) can be ignored and the solution is an asymmetric Levy stable distribution Eq. (62). For the CTRW model, this limit is achieved when the ratio \(\sigma^{2}/a^{2}\) is finite and \(t\to\infty\) or \(\sigma^{2}/a^{2}\to 0\) with a finite \(t\). On the contrary, when \(S\to 0\), the spreading packet follows Gaussian distribution. It indicates that when \(\sigma^{2}/a^{2}\gg 1\), the well-known Eq. (62) is not useful unless \(t\) is extremely long. Furthermore, we provide a parameter, denoted as \(\overline{\sigma}^{2}\), in Eq. (20) to characterize the convergence. There are still unanswered questions that require addressing. For example, in the simulation of particle trajectories, generally, we first generate waiting times and displacements. Then, upon the completion of the total observation time, denoted as \(t\), we obtain positional statistics. However, when the observation time is long and the number of realizations is large, such as \(10^{7}\), running codes may require several days to complete. The problem now is whether we can generate particle statistics based on Eq. (21). More precisely, we first generate a variable drawn from the Levy stable distribution \(\mathcal{L}(y)\), and then use it to generate the position at time \(t\) according to a Gaussian distribution. This method seems fine if we are only interested in the positional distribution, but it fails as expected when we focus on the MSD. Solving this problem, i.e. formulating Langevin paths, is a matter that will be considered in the future. W.W. is supported by the National Natural Science Foundation of China under Grant No. 12105243 and the Zhejiang Province Natural Science Foundation LQ22A050002. E.B. acknowledges the Israel Science Foundations grant 1614/21. ## Appendix A Additional discussions on Fig. 2 From simulations of typical fluctuations in Fig. 2, we can see that we have \(\overline{\sigma}^{2}\simeq 0.1115\sigma^{2}/a^{2}\) defined by Eq. (20) with \(t=1000\), \(\alpha=1.5\), and \(\tau_{0}=0.1\). In the particular case \(\sigma=0.1/\sqrt{2}\), from Eq. (20) we get a small \(\overline{\sigma}^{2}\simeq 5.57\times 10^{-4}\), this is the reason why Eq. (16) tends to the limit theorem Eq. (19) or (62) as shown by the black dashed line in Fig. 2. However, when \(\sigma=16/\sqrt{2}\), \(\sigma^{2}/a^{2}=128\) and then \(\overline{\sigma}^{2}=14.27\), which is certainly not a small number if compared with \(5.57\times 10^{-4}\). Thus even though the average of renewals \(\langle N(t)\rangle\sim t/\langle\tau\rangle=1000/0.3\approx 3333\), we cannot say that the limit of \(t\to\infty\) is reached, and indeed, just as the bottom line in Fig. 2 shows, we see for this case nearly Gaussian behavior. For asymmetry breaking properties, one choice is to consider \(\overline{\sigma}^{2}\). This means that when \(\overline{\sigma}^{2}\to 0\), the corresponding density is asymmetric. While, for a large \(\overline{\sigma}^{2}\), the opposite situation emerges. ## Appendix B Fractional moments As mentioned before, Eq. 
(3) fails to estimate MSD of the CTRW model. Recently, we give a way to compute the MSD using infinite densities [68]. Here we show that infinite density frameworks can also used to calculate the fractional moments in an asymptotic sense. For the sake of completeness, first let us concentrate on the variance of the walk's displacement, called MSD, which is a measure of the deviation of the position of the particle with respect to the position over time. Suppose that \(x=0\) is the initial position of particles and \(N\) is the number of renewals between \(0\) and \(t\). Thus, the first and the second order moment of the position are given by \[\langle x\rangle=\left\langle\sum_{j=1}^{N}\chi_{i}\right\rangle \tag{19}\] and \[\langle x^{2}\rangle=\left\langle\left(\sum_{j=1}^{N}\chi_{i}\right)^{2} \right\rangle, \tag{20}\] respectively. Here \(\chi_{i}\) denotes the displacement of the particle in the \(i\)-th jump and \(\langle\cdot\rangle\) describes the average over \(\chi_{1},\chi_{2},\ldots,\chi_{N}\). Therefore, the variance is \[\mathit{Var}(x)=\langle x^{2}\rangle-\langle x\rangle^{2}=\langle N\rangle( \langle\triangle x^{2}\rangle-\langle\triangle x\rangle^{2})+(\langle N^{2} \rangle-\langle N\rangle^{2})\langle\triangle x\rangle^{2}, \tag{14}\] where we assumed that \(\chi_{i}\) are IID random variables, and \(\langle\triangle x\rangle\) and \(\langle\triangle x^{2}\rangle\) correspond to the first and the second order moments of \(\chi_{i}\), respectively. Let us take the step length distribution as Eq. (6), thus we have \(\langle\triangle x\rangle=a\) and \(\langle\triangle x^{2}\rangle=a^{2}+\sigma^{2}\). To obtain \(\mathit{Var}(x)\), we need to calculate the first and the second moments of \(N\). Using the previous results given in Ref. [59], we have \[\langle\widehat{N}(s)\rangle=\frac{\widehat{\phi}(s)}{s(1-\widehat{\phi}(s))} \tag{15}\] and \[\langle\widehat{N}^{2}(s)\rangle=\frac{\widehat{\phi}(s)(1+\widehat{\phi}(s) )}{s(1-\widehat{\phi}(s))^{2}}. \tag{16}\] Utilizing Eqs. (15) and (16), Eq. (14) gives \[\mathit{Var}(x)\sim\frac{2a^{2}\tau_{0}^{\alpha}}{(2-\alpha)(3-\alpha) \langle\tau\rangle^{3}}t^{3-\alpha}+\sigma^{2}\frac{t}{\langle\tau\rangle}, \tag{17}\] which is plotted in Fig. 11. The term \(t^{3-\alpha}\) becomes the leading term of \(\mathit{Var}(x)\) as we increase of the observation time \(t\); See also discussions in Ref. [68]. When the observation time is not very large, the linear term \(\sigma^{2}t/\langle\tau\rangle\) wins since the spreading packet nearly follows Gaussian distribution; see the dash-dotted line in Fig. 11. For a fixed observation time \(t\), we can find an interesting phenomenon depicting the competition between anomalous diffusion and normal diffusion, which is determined by \(a\), \(\sigma\), and \(t\). According to Eq. (17), the transition point is \[t^{*}\sim\left((2-\alpha)(3-\alpha)\frac{\sigma^{2}\langle\tau\rangle^{2}}{2 a^{2}\tau_{0}^{\alpha}}\right)^{\frac{1}{2-\alpha}}; \tag{18}\] see the magenta vertical lines in Fig. 11. When \(|a|\to 0\), \(t^{*}\) goes to infinity. It indicates that when the bias is weak, the process needs a long observation time \(t\) to exhibit anomalous behaviors. On the other hand, when \(a\) approaches infinity, the value of \(t^{*}\) tends to zero, causing the transition from normal diffusion to anomalous diffusion rapidly. 
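To make the competition between the two contributions concrete, the crossover time introduced above can be evaluated directly; the short sketch below (our own illustration, with hypothetical parameter values) prints \(t^{*}\) and both terms of \(\mathit{Var}(x)\) on either side of it.

```python
def var_terms(t, a=1.0, sigma=5.0, alpha=1.5, tau0=0.1):
    """Anomalous (t^(3-alpha)) and normal (linear) contributions to Var(x)."""
    mean_tau = alpha * tau0 / (alpha - 1.0)
    anomalous = 2 * a**2 * tau0**alpha / ((2 - alpha) * (3 - alpha) * mean_tau**3) * t**(3 - alpha)
    normal = sigma**2 * t / mean_tau
    return anomalous, normal

def t_star(a=1.0, sigma=5.0, alpha=1.5, tau0=0.1):
    """Crossover time at which the two contributions to Var(x) are equal."""
    mean_tau = alpha * tau0 / (alpha - 1.0)
    return ((2 - alpha) * (3 - alpha) * sigma**2 * mean_tau**2
            / (2 * a**2 * tau0**alpha)) ** (1.0 / (2.0 - alpha))

ts = t_star()
print(ts)
for t in (ts / 10, ts, 10 * ts):
    print(t, var_terms(t))   # the anomalous term takes over for t >> t*
```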
When \(q=4\), the corresponding moment is \[\langle(x-\langle x\rangle)^{4}\rangle=\langle x^{4}\rangle+6\langle x\rangle ^{2}\langle x^{2}\rangle-4\langle x\rangle\langle x^{3}\rangle-3\langle x \rangle^{4} \tag{19}\] Using Eq. (8), the integer order moments are easy to get. Then substituting them into Eq. (19) gives \[\langle(x-\langle x\rangle)^{4}\rangle\sim a^{4}\langle(N-\langle N\rangle)^{ 4}\rangle+6a^{2}\sigma^{2}(\langle N^{3}\rangle+\langle N\rangle^{3}-2 \langle N\rangle\langle N^{2}\rangle) \tag{20}\] Now the calculation of \(\langle(x-\langle x\rangle)^{4}\rangle\) is transformed into the moments of the number of renewals. After some simple arithmetics, we have \[\begin{array}{lll}\langle(x-\langle x\rangle)^{4}\rangle&\sim\frac{4\alpha^{4}b_{o }}{\|\Gamma(1-\alpha)(\tau)^{3}(5-\alpha)(4-\alpha)}t^{5-\alpha}+\frac{6\alpha^ {2}\sigma^{2}b_{o}}{\langle\tau\rangle^{4}}\Big{(}\frac{18}{\Gamma(5-\alpha)} -\frac{8}{\Gamma(4-\alpha)}+\frac{1}{\Gamma(3-\alpha)}\Big{)}t^{4-\alpha}\\ &+\frac{12\alpha^{4}b_{o}}{\langle\tau\rangle^{4}}\Big{(}\frac{1}{\Gamma(5- \alpha)}-\frac{4}{\Gamma(4-\alpha)}+\frac{1}{\Gamma(3-\alpha)}\Big{)}t^{4- \alpha}.\end{array} \tag{10}\] Our next step is to explore \(q\) order absolute moments of \(\epsilon=x-at/\langle\tau\rangle\), defined by \[\langle|\epsilon|^{q}\rangle=\int_{-\infty}^{\infty}|\epsilon|^{q}P(\epsilon, t)\mathrm{d}\epsilon \tag{11}\] with \(q>0\). Let us start from the calculation of low order moments, i.e., \(q<\alpha\). The low order moments can be obtained by the limit of typical fluctuations Eq. (19), which reads \[\langle|\epsilon|^{q}\rangle\sim a^{q}(t/\tilde{t})^{q/\alpha}\int_{-\infty}^ {\infty}y^{q}\mathcal{L}_{\alpha}(y)\mathrm{d}y. \tag{12}\] When \(q<\alpha\), the integral \(\int_{-\infty}^{\infty}y^{q}\mathcal{L}_{\alpha}(y)\mathrm{d}y\) is a finite number which can be evaluated by the asymptotic behaviors of Levy distribution \(\mathcal{L}_{\alpha}(y)\). We can check that for \(0<q<\alpha\), \(\langle|\epsilon|^{q}\rangle\sim t^{q/\alpha}\) is always sublinear in \(t\). Here we would like to consider two special cases: for \(q\to 0\), the zeroth moment of the variable \(\epsilon\) is, evidently, one; the other case is the Gaussian limit, the linear time dependence of the MSD is recovered, namely \(\lim_{\alpha,q\to 2}\langle|\epsilon|^{q}\rangle\sim t\). For \(q>\alpha\), Eq. (12) diverges. It indicates that the normalized density Eq. (19) can not give a valid prediction for high order moments; see Eq. (13). Now we consider high-order moments. As mentioned before, for this case, typical fluctuations does not work. In [68], we demonstrate the existence of an additional limiting law when the second time scale is taken into account. This law, denoted as Eq. (13), characterizes the scaling behavior when \(x-at/\langle\tau\rangle\) is of the order of \(t/\langle\tau\rangle\). Namely, the density of \(\eta=(x-at/\langle\tau\rangle)/(at/\langle\tau\rangle)\) is \[P(\eta,t)\sim\frac{b_{\alpha}t^{1-\alpha}}{\Gamma(-\alpha)\langle\tau\rangle}(- \eta)^{-\alpha-1}\Big{(}1-\frac{1-\alpha}{\alpha}\eta\Big{)} \tag{121}\] with \(-1<\eta<0\). See numerical simulations and discussions in Ref. [68]. As mentioned, this situation for \(q=2\) is especially important in physical and biological experiments [94, 4]. After rescaling Eq. (121) by \(\epsilon=x-at/\langle\tau\rangle\), it can be seen that \(|\epsilon|^{q}\) is integrable with respect to this non-normalized state. 
So this can be used to get high order moments, i.e., \[\langle|\epsilon|^{q}\rangle\sim\frac{a^{q}q\tau_{0}^{\alpha}}{\langle\tau \rangle^{q+1}(q-\alpha)(q-\alpha+1)}t^{q+1-\alpha}. \tag{122}\] This demonstrates that, in the long time regime, the asymptotic distribution of \(\epsilon\) is broad with a slowly decaying tail; see Eq. (121). Note that when \(t\) is not sufficiently large, a correction term needs to be added, since the packet of particles is Gaussian; see the left top panel in Fig. 13. To summarize, the limiting law Eq. (19) does describe the low order moments. The rare fluctuations, predicted by Eq. (121), give the information on the events when \(\epsilon\) is of the order of \(t\). In the long time limit, the results for the \(q\)th order moments can be generalized to \[\langle|\epsilon|^{q}\rangle\sim t^{\varrho(q)}, \tag{123}\] where \(\varrho(q)=q/\alpha\) if \(q<\alpha\), and \(\varrho(q)=q+1-\alpha\) if \(q>\alpha\). ## Appendix C Plot of concentration with a weak bias We plot Eq. (52), obtained from the image method, for a weak bias; see Fig. 14. For this case, the asymmetric term \(S\) loses its role and the process is nearly normal. As mentioned in the main text, if the bias is strong, Eq. (52) breaks down.
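For completeness, the weak-bias case in which the image construction does apply is easy to evaluate explicitly; the snippet below (our own sketch, with hypothetical values of \(D\), \(V\), \(x_{ab}\) and \(t\)) implements Eq. (52) with the Gaussian propagator Eq. (50) and the weight Eq. (51), and confirms that the concentration vanishes at the absorbing point.

```python
import numpy as np

def concentration_gaussian(x, t, x_ab, D=1.0, V=0.1):
    """Image-method concentration for pure advection-diffusion (Eqs. (50)-(52)), x <= x_ab."""
    def p(xx):
        return np.exp(-(xx - V * t) ** 2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)
    W = np.exp(x_ab * V / D)            # Eq. (51)
    return p(x) - W * p(x - 2 * x_ab)   # Eq. (52)

print(concentration_gaussian(50.0, 1000.0, 50.0))   # ~0 at the absorbing boundary
print(concentration_gaussian(0.0, 1000.0, 50.0))    # positive inside the allowed region
```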
2309.14960
Do bilayer metasurfaces behave as a stack of decoupled single-layer metasurfaces?
Flat optics or metasurfaces have opened new frontiers in wavefront shaping and its applications. Polarization optics is one prominent area which has greatly benefited from the shape-birefringence of metasurfaces. However, flat optics comprising a single layer of meta-atoms can only perform a subset of polarization transformations, constrained by a symmetric Jones matrix. This limitation can be tackled using metasurfaces composed of bilayer meta-atoms but exhausting all possible combinations of geometries to build a bilayer metasurface library is a very daunting task. Consequently, bilayer metasurfaces have been widely treated as a cascade (product) of two decoupled single-layer metasurfaces. Here, we test the validity of this assumption by considering a metasurface made of TiO2 on fused silica substrate at a design wavelength of 532 nm. We explore regions in the design space where the coupling between the top and bottom layers can be neglected, i.e., producing a far-field response which approximates that of two decoupled single-layer metasurfaces. We complement this picture with the near-field analysis to explore the underlying physics in regions where both layers are strongly coupled. Our analysis is general and it allows the designer to efficiently build a multi-layer metasurface, either in transmission or reflection, by only running one full-wave simulation for a single-layer metasurface.
Alfonso Palmieri, Ahmed H. Dorrah, Jun Yang, Jaewon Oh, Paulo Dainese, Federico Capasso
2023-09-26T14:27:29Z
http://arxiv.org/abs/2309.14960v2
# Do dielectric bilayer metasurfaces behave as a stack of decoupled single-layer metasurfaces?

###### Abstract

Flat optics or metasurfaces have opened new frontiers in wavefront shaping and its applications. Polarization optics is one prominent area which has greatly benefited from the shape-birefringence of metasurfaces. However, flat optics comprising a single layer of meta-atoms can only perform a subset of polarization transformations, constrained by a symmetric Jones matrix. This limitation can be tackled using metasurfaces composed of bilayer meta-atoms but exhausting all possible combinations of geometries to build a bilayer metasurface library is a very daunting task. Consequently, bilayer metasurfaces have been widely treated as a cascade (product) of two decoupled single-layer metasurfaces. Here, we test the validity of this assumption for dielectric metasurfaces by considering a metasurface made of titanium dioxide on fused silica substrate at a design wavelength of 532 nm. We explore regions in the design space where the coupling between the top and bottom layers can be neglected, i.e., producing a far-field response which approximates that of two decoupled single-layer metasurfaces. We complement this picture with the near-field analysis to explore the underlying physics in regions where both layers are strongly coupled. We also show the generality of our analysis by applying it to silicon metasurfaces at telecom wavelengths. Our unified approach allows the designer to efficiently build a multi-layer dielectric metasurface, either in transmission or reflection, by only running one full-wave simulation for a single-layer metasurface.

\({}^{1}\)Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138, USA
\({}^{2}\)Corning Inc., Painted Post, New York 14870, United States
\({}^{\dagger}\)These authors contributed equally
\({}^{\ast}\)[email protected]; [email protected]
The Jones matrix that can be physically realized with a single-layer dielectric metasurface is subject to being unitary and symmetric [15, 25]. This sets a fundamental constraint on the retardance and di-attenuation values (i.e., possible polarization transformations) which can be imparted on incoming light. Retardance and di-attenuation respectively refer to the relative phase and amplitude imparted on two input orthogonal polarizations. Notably, matrix symmetry is a fundamental constraint that cannot be surmounted by design. It originates from the linear shape-birefringence of dielectric metasurfaces which fails to realize circular or elliptical birefringence. Hence, a single-layer metasurface cannot be used to build a circular polarizer/retarder -- its eigen polarizations must be linear. This limitation exists in any single-layer meta-atom with vertical sidewalls (regardless of its geometry) as long as it is reciprocal [26]. Surmounting this constraint requires breaking the in-plane symmetry; either using slanted or a bilayer stack of meta-atoms. Unitarity, on the other hand, is a less fundamental constraint that stems from the lossless nature of dielectric meta-atoms and the typical choice to operate off-resonance to realize higher efficiencies. Nevertheless, a unitary metasurface can still modulate both the amplitude and phase of incoming light by dumping light onto the diffraction orders (which behave as loss channels) as is standard in holography [27, 28].
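The symmetry constraint can be made concrete with a few lines of linear algebra. A single-layer element combining propagation and geometric phase (discussed next) has the Jones matrix \(J=R(-\theta)\,\mathrm{diag}(e^{i\phi_x},e^{i\phi_y})\,R(\theta)\), which is unitary and symmetric, and its eigenvectors are rotated linear states. The minimal numerical check below uses illustrative values only and is not taken from the paper.

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def single_layer_jones(phi_x, phi_y, theta):
    """Propagation phase (phi_x, phi_y) combined with geometric phase (rotation theta)."""
    return rot(-theta) @ np.diag([np.exp(1j * phi_x), np.exp(1j * phi_y)]) @ rot(theta)

J = single_layer_jones(phi_x=0.3, phi_y=1.9, theta=np.deg2rad(20))  # illustrative values

print(np.allclose(J, J.T))                       # symmetric -> True
print(np.allclose(J @ J.conj().T, np.eye(2)))    # unitary   -> True

# Eigenvectors are real up to a global phase, i.e. linear polarization states.
_, vecs = np.linalg.eig(J)
for v in vecs.T:
    v = v * np.exp(-1j * np.angle(v[np.argmax(np.abs(v))]))  # strip the global phase
    print(np.allclose(v.imag, 0.0, atol=1e-9))               # True -> linear eigenstate
```

No choice of \(\phi_x\), \(\phi_y\), or \(\theta\) makes the two off-diagonal elements differ, which is why circular or elliptical eigenpolarizations require breaking the in-plane symmetry as described above.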
By combining the propagation phase (which arises from varying the dimensions of the nanofins/meta-atoms) and geometrical phase (related to the rotation angle of the nanofin about its axis), it is possible to implement a symmetric Jones matrix with 3 independent DOFs--namely, two different phase terms on the diagonal and two identical off-diagonal phase terms. With this functionality, a single metasurface can impart two independent phase [11, 29] or amplitude [30] profiles on any two input orthogonal polarizations (linear or elliptical), with the caveat that input elliptical polarizations will flip their handedness at the output. Using clever arrangements of metasurface unit cells, a Jones matrix with more DOFs can be realized. For instance, super-cell based single-layer metasurfaces can offer six DOFs for the Jones matrix, enabling complex amplitude modulation on two orthogonal polarization bases in the far field [27, 28]. Another variation of super-cell metasurfaces has recently been used to generate multiple polarization-sensitive holograms (exceeding 10 channels) by exploiting higher diffraction orders as energy loss channels [31]. In all these works, however, one cannot freely decouple the input and output polarization states, hindered by matrix symmetry. To tackle this limitation, and access all 8 DOFs of the Jones matrix, the in-plane symmetry must again be broken by constructing a multi-layer system [32]. In principle, a bilayer metasurface can impart arbitrary and independent amplitude and phase control on any set of two orthogonal polarizations, while completely decoupling the input and output polarization states [33]. Multi-layer metasurfaces also suggest new ways for achieving lossless polarization transformations [34] as well as non-intuitive ways for wavefront shaping by utilizing more general configurations of the Pancharatnam-Berry phase [35]. With recent advances in nanofabrication, multi-layer metasurfaces have now become feasible. For instance, by cascading two single-layer metasurfaces made of silicon at a design wavelength of 808 nm, each with six DOFs in their Jones matrix, a spatially varying Jones matrix with all eight DOFs has been realized [36]. The latter has been utilized in polarization-selective holography with 16 different channels. In addition, compound meta-optics made of \(\alpha\)-silicon in the near infrared (in combination with inverse design) have been utilized in spatial mode multiplexing, optical mode conversion, and vectorial holography with very high diffraction efficiencies [37, 38]. On the other hand, the design of compound metasurfaces (made of either bilayer or cascaded meta-atoms) is computationally intensive compared to single-layer devices. For example, building an exhaustive bilayer library is not a straightforward process due to the huge size of the parameter space, which requires varying the nanofin dimensions (\(D_{x}\) and \(D_{y}\), assuming a rectangular geometry) for both the top and bottom layers while performing the simulation at different angular rotations between the two. Furthermore, assuming a library is built, mapping a target 2-by-2 Jones matrix profile, pixel by pixel, to this massive library is a challenging and time-consuming task. Accordingly, bilayer dielectric metasurfaces have been widely treated as a stack of two decoupled metasurfaces. A rigorous validation of this general assumption, however, has not been presented in the literature.
To address this gap, here we use full-wave simulations to build a bilayer metasurface library and we study the coupling between the top and bottom nanofins by varying the full design space and analyzing the far-field and near-field responses. Our aim is to provide a systematic recipe that allows the designer to narrow down regions in the parameter space where the top and bottom nanofins are effectively decoupled while avoiding the geometries that suffer from strong bilayer coupling. This viewpoint simplifies the construction of a bilayer metasurface as it only requires one full-wave simulation for a single-layer nanofin in transmission. Using simple Jones calculus, the output response of the former can then be cascaded (e.g., using matrix multiplication) to build a multi-layer structure. We perform our analysis for bilayer metasurfaces operating in transmission and reflection. Although we studied TiO\({}_{2}\) in the visible wavelength range because of the limited literature on bilayer metasurfaces in that regime, our analysis provides a holistic guideline to the metasurface community which can also be applied to various wavelengths and material platforms as will be shown.

## 2 Analysis

### Operating in transmission

We start by creating a library for a single-layer metasurface using the finite-difference time-domain (FDTD) method and represent its response in terms of a Jones matrix [11]. A model of the simulated structure is shown in Figure 1(a). It is made of a titanium dioxide (TiO\({}_{2}\)) rectangular nanofin on top of a fused silica substrate and can impart two different phase delays along its major and minor axes; hence the shape-birefringence. The boundary condition applied at the edges of the simulation box is the Periodic Boundary Condition (PBC) which emulates an infinitely periodic array of the same rectangular nanofins. By sweeping the nanofin dimensions and recording the phase and transmission response in the far-field, a single-layer metasurface library can be built (as depicted in Supplementary Figure 1). The response of each nanofin to \(x\)- and \(y\)-polarized light can then be mathematically cast in a 2-by-2 Jones matrix. As a next step, we extend our analysis to the case of a bilayer metasurface. We explore the possibility of computing the Jones matrix of the bilayer as the product of two decoupled (single-layer) Jones matrices. This requires that the coupling between the two nanofins is negligible regardless of the dimensions of the two nanofins and their relative rotation. To test this assumption, we compare the results of the FDTD simulations of the bilayer with the "analytical" results obtained from the product of two single-layer metasurfaces. The model of the simulated structure is shown in Fig. 1(b). Initially, we fixed the dimension of the bottom nanofin at 134 nm \(\times\) 202 nm, an arbitrarily chosen geometry which emulates a quarter-wave plate. The Jones matrix of this geometry was then extracted from the single-layer library reported in Supplementary Figure 1. Afterwards, we performed a parameter sweep for the dimensions of the top nanofin without introducing a rotation angle between the nanofins; i.e., keeping the angular orientation of both nanofins at 0\({}^{\circ}\). This simulation helps us verify if the two layers can be considered decoupled for all the geometries.
If that is true, then the Jones matrix of the bilayer could be expressed as: \[J_{\text{bilayer}}=J_{\text{top}}\cdot J_{\text{bottom}}=\begin{bmatrix}e^{i\phi_{x,1}}&0\\ 0&e^{i\phi_{y,1}}\end{bmatrix}\cdot\begin{bmatrix}e^{i\phi_{x,2}}&0\\ 0&e^{i\phi_{y,2}}\end{bmatrix}. \tag{1}\]

The transmission and phase responses for this bilayer structure under \(x\)-polarized illumination are depicted in Figs. 1(c-d), respectively. From these plots, one can observe transmission dips which can be attributed to resonances. Here, we refer to all geometries whose transmission is lower than 80% as being in the regime of resonance. In Fig. 1(e), we plot the error between the phase response (from the full-wave simulation above) and the analytical phase response calculated using Eq. (1). Since the rotation angle between the two nanofins is zero (\(\theta=0\)), the Jones matrix that describes the structure is diagonal. Hence, in the plots we only show the results related to the element \(J_{11}\). The error plots for element \(J_{22}\) (under incident y-polarization) are similar due to the rectangular geometry and are thus not included. The average absolute phase error here is less than 3\({}^{\circ}\). For most of the geometries, the simulation results coincide with the analytical ones. The few geometries that exhibit a larger error (only 2% of the geometries have an error higher than 10\({}^{\circ}\)) are the same ones that correspond to the resonance lines in the top right of Fig. 1(d). Therefore, for these geometries and others with large phase error, a full-wave simulation of the bilayer structure is needed to accurately capture its Jones matrix. However, given that full 0-2\(\pi\) phase coverage can be achieved with enough geometries away from resonance, these resonant elements can simply be filtered out from the library.

Figure 1: Bilayer dielectric metasurface with fixed bottom nanofin, operating in transmission. **(a)** Model of the unit cell of a transmissive single-layer metasurface. **(b)** Model of the unit cell of the simulated bilayer. The dimensions of the bottom nanofin are fixed and set to be 134 nm \(\times\) 202 nm. The dimensions of the top nanofin are swept from 50 to 250 nm. The power transmission and phase response of the bilayer metasurface are shown in **(c)** and **(d)**, respectively. **(e)** Absolute error in phase shift between the simulation results shown in (d) and the results analytically obtained from the matrix product of two cascaded single-layer metasurfaces.

In the next section, we will provide a more in-depth investigation of these resonances (and their types) and demonstrate cases in which the coupling effects in a bilayer meta-atom due to resonance can be neglected. To study the case of a non-diagonal Jones matrix, we introduce a relative rotation between the top and bottom nanofins. In Supplementary Note 2, we considered two cases of bilayer structures made of identical top and bottom nanofins and we rotated the top nanofin at increments of 15\({}^{\circ}\) to study the effect of angular orientation on coupling. The two bilayer structures are made of identical nanofins with dimensions 134 nm \(\times\) 202 nm and 114 nm \(\times\) 154 nm, respectively. For each of the two structures, we tabulated the amplitude and phase error for each element in their corresponding Jones matrices. Our analysis suggests an average phase error -- between FDTD simulations and analytical calculation of Eq. (1) -- on the order of \(5^{\circ}\).
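In practice, Eq. (1) (and its rotated generalization used for the cases above) amounts to a product of 2-by-2 matrices looked up from the single-layer library. The sketch below illustrates this composition together with the 80% transmission criterion used to flag resonant geometries; the library entries and helper names are placeholders, not simulated data.

```python
import numpy as np

# Hypothetical single-layer library: (Dx, Dy) in nm -> (t_x, phi_x, t_y, phi_y).
# The numbers below are placeholders, not FDTD results.
single_layer = {
    (134, 202): (0.97, 1.10, 0.95, 2.60),
    (114, 154): (0.98, 0.40, 0.97, 1.30),
}

def jones(entry):
    t_x, phi_x, t_y, phi_y = entry
    return np.diag([t_x * np.exp(1j * phi_x), t_y * np.exp(1j * phi_y)])

def bilayer_jones(top, bottom):
    """Decoupled-bilayer approximation of Eq. (1): J_bilayer = J_top @ J_bottom."""
    return jones(single_layer[top]) @ jones(single_layer[bottom])

def is_resonant(entry, power_threshold=0.80):
    """Flag geometries whose power transmission falls below 80% (resonance regime)."""
    t_x, _, t_y, _ = entry
    return min(t_x**2, t_y**2) < power_threshold

J = bilayer_jones(top=(114, 154), bottom=(134, 202))
print(np.angle(J[0, 0]), np.angle(J[1, 1]))                      # accumulated phases
print([k for k, v in single_layer.items() if is_resonant(v)])    # geometries to filter out
```

Filtering the library in this way, and then comparing the composed matrix against a full-wave simulation of the stacked structure, is the comparison behind the error maps discussed next.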
The phase error plots are shown in Supplementary Figure 2 for one of the two geometries confirming that 0 to \(2\pi\) phase coverage can be achieved with acceptable errors. As a next step, we extend our analysis by performing a full sweep for the dimensions of both the top and bottom nanofins. We then examine the effect of bilayer coupling by comparing these FDTD simulation results with the analytical calculation of Eq. (1), as before. Figures 2(b-c) depict the phase shift and power transmission obtained from FDTD. Each figure exhibits a grid that is made of 121 subplots (cells). Each cell represents one parameter sweep where the top nanofin has the dimensions reported on the horizontal/vertical axes while the dimensions of the bottom nanofin are swept from 50 nm to 250 nm. Interestingly, the response of each subplot mimics the behaviour of the entire plot, as if it were of a fractal nature. This behavior can be reconciled with the sampling theorem, since the figure is compiled from a Jones matrix product between the top and bottom nanofins that is reminiscent of a convolution between their respective phase plots. To quantify the bilayer coupling effects, we plot the phase and transmission errors (with respect to the analytical prediction of Eq. (1)) as shown in Figs. 2(d-e), respectively. The axes of the two plots can be interpreted in the same way as described above. This analysis confirms the possibility of expressing the Jones matrix of a bilayer dielectric metasurface as the product of the single-layer Jones matrices over most of the geometries included in the considered parameter space. The average phase error in this case is \(3.19^{\circ}\). Therefore, for a large number of geometries, the hypothesis that the two layers can be treated as decoupled is a valid one. Notably, the phase redundancy afforded by the metasurface library offers a sufficiently large number of decoupled geometries with full 0 to \(2\pi\) phase coverage. As a rule of thumb, the geometries that exhibit larger errors fall within two main categories: a) structures in which one of the two nanofins is operating near resonance, and b) structures with significant reflections where the top nanofin is much larger than the bottom one. The latter can especially be inferred by examining the top right part of Fig. 2(c). This can be attributed to the multiple (Fabry-Perot like) reflections that occur between the substrate and the lower base of the top nanofin. To gain more insights into the bilayer coupling, we consider different geometries and perform a near-field analysis to examine the cases with large errors. The underlying physics of this problem is discussed next.

### Near-field analysis

In this section, we complement the far-field Jones matrix calculations presented before with a near-field analysis. Our goal is: a) to evaluate the accuracy of our Jones matrix analysis in predicting the response of the bilayer structure, and b) to gain more insights into cases where the far-field response obtained from FDTD deviates from the Jones matrix analysis. Towards these aims, we adopted the scattering matrix (S-matrix) and transmission matrix (T-matrix) approaches while calculating the fields using the eigenmode expansion (EME) method [39, 40, 41, 42]. Since we are interested in understanding the bilayer coupling in a transmissive metasurface, our ports are defined at the input of the bottom nanofin and at the output of the top nanofin, respectively.
To assess the effects of guided and evanescent modes as well as back reflections, separately, we adopt four different calculations: (a) the full S-matrix, which captures the contributions of guided, evanescent, and back reflected fields (hence, it is the most accurate calculation, or ground truth); (b) the full T-matrix, which is similar to (a) but without recording the back reflections, i.e., only forward propagating guided and evanescent fields; (c) the \((2\times 2)\) S-matrix, which only considers the fundamental propagating mode and back reflections while ignoring the evanescent fields; and (d) the \((2\times 2)\) T-matrix, which only captures the forward propagating modes without recording either evanescent fields or back reflections. Hence, by definition, the \((2\times 2)\) T-matrix calculation coincides with our previous far-field Jones matrix analysis. Comparing these four calculations will help quantify the effects of bilayer coupling (which can be inferred from the strength of evanescent fields) and back reflections for various meta-atom geometries. To further examine the effects of evanescent coupling and impedance mismatch, we introduce a small air gap between the top and bottom nanofins. By varying the gap size between the two nanofins and recording the amplitude variation, one can identify the regimes where bilayer meta-atoms no longer behave as a stack of two decoupled single-layer metasurfaces. We consider four main categories of bilayer dielectric meta-atom geometries under x- and y-polarizations, respectively.

Figure 3(a) depicts the first case: a bilayer meta-atom composed of two off-resonance nanofins with a larger nanofin at the bottom. By looking at the amplitude response under the input polarizations, \(E_{x}\) (11) and \(E_{y}\) (00), one can observe that the S-matrix transmissions oscillate only slightly around a mean value that matches the T-matrix results. Here, the deviation in amplitude between the S- and T-matrix calculations is on the order of 0.01%. This implies that the \((2\times 2)\) T-matrix (and so the Jones matrix) correctly describes the nanofin dynamics; the evanescent coupling and back reflections in this type of geometry are small enough to be ignored.

Figure 2: Bilayer dielectric metasurface with top and bottom nanofins of variable dimensions, operating in transmission. **(a)** Model of simulated bilayer metasurface: the arrows refer to sweeping the dimensions of top and bottom nanofins. The phase shift and power transmission of the structure in (a) are recorded using FDTD simulations and are shown in **(b)** and **(c)**, respectively. Each cell in these grids refers to a separate simulation in which the bottom nanofin's dimensions are swept while fixing the dimensions of the top nanofin. The top nanofin's dimensions are depicted on both axes. **(d)** Error in phase shift between the simulation results shown in (b) and the results analytically obtained from the single-layer library. **(e)** Error in power transmission between the simulation results shown in (c) and the results analytically obtained from the single-layer library.

Figure 3: Scattering and transmission matrix analysis for transmissive bilayer metasurfaces with aligned top and bottom nanofins and variable air gap in between. Four cases are considered: **(a)** Two nanofins operating off-resonance. The top nanofin is smaller than the bottom one. The transmission amplitude response from the S- and T-matrices under two orthogonal polarizations, \(E_{x}\) (11, blue curve) and \(E_{y}\) (00, red curve), is plotted. **(b)** A birefringent bilayer meta-atom with different dimensions along x and y. The two nanofins are off-resonance and the corresponding amplitude response under x- and y-polarizations is shown. **(c)** Bilayer meta-atom composed of two identical nanofins operating at resonance. **(d)** Only the bottom nanofin of the bilayer meta-atom is at resonance and is larger than the top nanofin.

Figure 3(b) represents the second case, which is anisotropic: the two nanofins have different dimensions along the x and y directions but neither of the two is at resonance. The top nanofin is larger along the x direction (D\({}_{x,\text{top}}\) > D\({}_{x,\text{bottom}}\)) and is smaller along y (D\({}_{y,\text{bottom}}\) > D\({}_{y,\text{top}}\)). Hence, under x-polarized illumination, light will be reflected due to the size mismatch between the two nanofins. In this case, a Fabry-Perot like cavity will be created between the substrate and the top nanofin. As these back reflections are not captured by the T-matrices, a large deviation between the S- and T-matrix amplitude responses is observed. On the other hand, when the same nanofin is illuminated by y-polarized light (blue curves), the amplitude responses evaluated by the T-matrices and S-matrices are in good agreement; the back reflections and evanescent coupling in this geometry are minimal. These observations can be reconciled with waveguide theory. Both geometries involve impedance mismatch between the top and bottom nanofins (i.e., the modes are characterized by two different effective indices). However, since the small nanofin will have a smaller effective index (close to the cladding material--air), its placement above the large nanofin is already captured by the Jones matrix of the large nanofin. This is not the case when the small nanofin is terminated by the large one on top. The impedance mismatch in the latter is more significant and is thus not fully captured by the Jones matrix of the small nanofin. Figure 3(c) shows the case of two identical nanofins operating at resonance. In this case it is not expected that back reflection between the top nanofin and the substrate can occur (given the agreement in cross-sectional area) and indeed the discrepancy between the full S-matrix and full T-matrix is insignificant. However, the large discrepancy between the full S- and full T-matrices versus the \((2\times 2)\) S- and \((2\times 2)\) T-matrices suggests that the higher order evanescent modes play a fundamental role in the amplitude response. Here, the evanescent field coupling between the two nanofins is very significant (due to the operation at resonance) and, hence, cannot be neglected. Additional categories of nanofins that feature back reflections and that exhibit coupling through evanescent fields are shown in Supplementary Figure 3. The former is typically observed when the top nanofin is larger in size whereas the latter occurs when at least one of the two nanofins is at resonance. Figure 3(d) highlights another case of resonant geometries. Here, only the bottom nanofin is at resonance whereas the top nanofin is of much smaller dimensions. In this case, due to the small spatial overlap between the two nanofins, the evanescent coupling is not significant. The smaller dimensions of the top nanofin also suggest that the back reflections are minimal.
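The role of back reflections in separating the full S-matrix from the forward-only T-matrix picture can be illustrated with a simple one-dimensional stand-in, in which each nanofin is replaced by a uniform slab of an effective index and the air gap is varied. The sketch below compares a full transfer-matrix calculation (multiple reflections included) with a forward-only cascade of interface transmissions; all indices and thicknesses are illustrative assumptions, and this is not the EME computation used above.

```python
import numpy as np

lam = 0.532                       # free-space wavelength, micrometers
n_sub, n_air = 1.46, 1.0
n_bot, n_top = 1.9, 1.7           # illustrative effective indices of the two "slabs"
d_bot = d_top = 0.6               # slab thicknesses, micrometers

def interface(n1, n2):
    """Transfer matrix of a normal-incidence interface from medium n1 to n2."""
    r, t = (n1 - n2) / (n1 + n2), 2 * n1 / (n1 + n2)
    return np.array([[1, r], [r, 1]], dtype=complex) / t

def layer(n, d):
    """Transfer matrix for propagation through a slab of index n and thickness d."""
    phi = 2 * np.pi * n * d / lam
    return np.diag([np.exp(-1j * phi), np.exp(1j * phi)])

for gap in (0.00, 0.05, 0.10, 0.20):   # air gap between the slabs, micrometers
    # Full transfer matrix: substrate | bottom slab | air gap | top slab | air.
    M = (interface(n_sub, n_bot) @ layer(n_bot, d_bot) @
         interface(n_bot, n_air) @ layer(n_air, gap) @
         interface(n_air, n_top) @ layer(n_top, d_top) @
         interface(n_top, n_air))
    T_full = (n_air / n_sub) * abs(1 / M[0, 0]) ** 2        # keeps all back reflections
    # Forward-only cascade: same interfaces, no round trips (gap phase drops out).
    t_fwd = ((2*n_sub/(n_sub+n_bot)) * (2*n_bot/(n_bot+n_air)) *
             (2*n_air/(n_air+n_top)) * (2*n_top/(n_top+n_air)))
    T_fwd = (n_air / n_sub) * t_fwd ** 2
    print(f"gap = {gap:4.2f} um: T_full = {T_full:.3f}, T_forward_only = {T_fwd:.3f}")
```

In this toy picture the forward-only transmission does not depend on the gap, while the full calculation oscillates around it with an amplitude set by the index mismatch, which is the one-dimensional analogue of the Fabry-Perot-like back reflections discussed above.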
These two behaviors--i.e., weak evanescent coupling and reflections--are indeed suggested by the response of the \((2\times 2)\) T-matrix which captures the amplitude transmission fairly accurately. Hence, the full S-matrix and full T-matrix are in close agreement. In short, our near-field analysis confirms that (under some circumstances) the S- and T-matrix responses coincide. Cases with disagreement correspond to geometries that involve large back reflections (small spatial overlap between top and bottom nanofins) or geometries that lie close to resonance. By avoiding these regions in the parameter space, one can populate a set of bilayer meta-atoms which effectively behave as a stack of two decoupled single-layer metasurfaces. Thanks to the redundancy afforded by the metasurface library, such a task is possible to achieve. Figure 4(a) depicts a schematic which visualizes the nanofin categories. The set of allowable (decoupled) nanofin geometries is represented in the first row (green zone). After filtering out all the geometries with bilayer coupling, the remaining meta-atoms (around 50% of the parameter space) still densely span from 0 to \(2\pi\) phase shift as shown in Fig. 4(b). Note that thus far we have chosen a specific material, namely titanium dioxide, as a platform for conducting our analysis. Nevertheless, we expect the physical dynamics associated with evanescent coupling and back reflection to be universal across other dielectric metasurface libraries using different material platforms or design wavelengths. For instance, silicon nitride (SiN) is another widely used material in the visible, albeit with less index contrast [43]. Therefore, due to the weaker mode confinement, the coupling strength can be more significant, causing all resonant geometries to be considered coupled (including, e.g., the category in Fig. 4(a)-iii). Other effects, such as back reflections, should preserve their behavior. In addition, other material platforms also exist for the near infrared and telecom regimes including, e.g., silicon. To validate the generality of our analysis, we study bilayer dielectric metasurfaces based on silicon, at a design wavelength of 1550 nm, and show that the coupling effects are governed by the same physical dynamics. We summarize the results of this analysis in Supplementary Section 7.

Figure 4: **(a)** Schematic of different categories of bilayer dielectric meta-atoms. The first row corresponds to the geometries for which the coupling between the nanofins is negligible: **(i)** neither the bottom nor the top nanofins are at resonance and the bottom nanofin is larger than the top, **(ii)** only the bottom nanofin is at resonance and is larger than the top, **(iii)** only the top nanofin is at resonance but the bottom nanofin is much larger than the top. The second row depicts the cases for which the two nanofins in the bilayer are strongly coupled: **(iv)** neither the bottom nor the top nanofins are at resonance and the top nanofin is larger than the bottom, **(v)** only the top nanofin is at resonance and larger than the bottom, **(vi)** only the top nanofin is at resonance and slightly smaller than the bottom, **(vii)** both nanofins are at resonance. These cases are detailed more fully in Supplementary Section 4. **(b)** Complex transmission of the dashed region in (a). The electric field amplitude \(t_{x}e^{i\phi_{x}}\) is plotted on the complex plane demonstrating the 0-2\(\pi\) phase coverage afforded by the considered geometries while maintaining low loss.
The red circle is the unit circle. The black circle is the average of \(t_{x}\) over all considered geometries.

### Operating in reflection

Thus far we have shown that we can evaluate the Jones matrix of a transmissive bilayer metasurface starting from the Jones matrix of a single-layer metasurface under some constraints. This allows the designer to build a bilayer metasurface by only utilizing the single-layer metasurface library, thereby simplifying the design process. In this section, we investigate the validity of this assumption for reflective dielectric metasurfaces. We examine if a birefringent metasurface operating in reflection can be expressed as the product of four Jones matrices (each describing a single-layer nanofin) and a mirror matrix. In this case, the product for one unit cell (pixel) of the bilayer metasurface, assuming no rotation, is given by: \[\begin{split}J_{\text{bilayer}}&=J_{\text{top}}\cdot J_{\text{bottom}}\cdot J_{\text{mirror}}\cdot J_{\text{bottom}}\cdot J_{\text{top}}\\ &=\begin{bmatrix}e^{i\phi_{x,\text{top}}}&0\\ 0&e^{i\phi_{y,\text{top}}}\end{bmatrix}\cdot\begin{bmatrix}e^{i\phi_{x,\text{bottom}}}&0\\ 0&e^{i\phi_{y,\text{bottom}}}\end{bmatrix}\cdot\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}\cdot\begin{bmatrix}e^{i\phi_{x,\text{bottom}}}&0\\ 0&e^{i\phi_{y,\text{bottom}}}\end{bmatrix}\cdot\begin{bmatrix}e^{i\phi_{x,\text{top}}}&0\\ 0&e^{i\phi_{y,\text{top}}}\end{bmatrix}.\end{split} \tag{2}\]

Equation (2) describes the path that light makes when impinging on the metasurface as it passes through the two nanofins, gets reflected by the mirror, before traversing the two nanofins again, in the reverse order. To test the validity of Eq. (2), we start by simulating a unit cell consisting of a single-layer metasurface in reflection and then we build on it by considering the full bilayer metasurface in reflection. Following the same approach used for a transmissive metasurface, we consider a single-layer reflective metasurface first to verify if, in the presence of a mirror, one can express the Jones matrix of the system as the following product: \[J_{\text{reflection}}=J_{\text{transmission}}\cdot M\cdot J_{\text{transmission}}=\begin{bmatrix}e^{i\phi_{x}}&0\\ 0&e^{i\phi_{y}}\end{bmatrix}\cdot\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}\cdot\begin{bmatrix}e^{i\phi_{x}}&0\\ 0&e^{i\phi_{y}}\end{bmatrix}. \tag{3}\]

Supplementary Figure 4 shows a comparison between the phase responses of the reflective (single-layer) metasurface obtained from FDTD and the one calculated analytically using Eq. (3), suggesting a significant discrepancy between the two. Note that placing the nanofin on top of a mirror perturbs the phase response of the structure, possibly introducing standing wave patterns (similar to terminating a waveguide with a complex load). The mirror-dielectric interface effects cannot be accurately captured by Eq. (3). Therefore, in contrast to the case of transmissive bilayer metasurfaces, where the analytical approach was fairly accurate in predicting the amplitude and phase responses, the requirements are more stringent when operating in reflection. To bypass this challenge, we introduce a simple modification to the structure by inserting a layer of silica (named "spacer") between the mirror and the nanofin, essentially matching the impedance between the two, as depicted in Fig. 5(a). To compensate for the effective index perturbation of the TiO\({}_{2}\) which arose from the mirror-dielectric interface, the spacer thickness needs to be optimized depending on the nanofin size.
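In the decoupled picture, Eqs. (2) and (3) are direct matrix products over entries of the transmission-mode library; a minimal sketch with placeholder phases (not simulated values) is shown below.

```python
import numpy as np

MIRROR = np.diag([1.0, -1.0])                 # J_mirror appearing in Eqs. (2)-(3)

def diag_phases(phi_x, phi_y):
    return np.diag([np.exp(1j * phi_x), np.exp(1j * phi_y)])

def reflective_single(phi_x, phi_y):
    """Eq. (3): nanofin, mirror, nanofin traversed again."""
    J = diag_phases(phi_x, phi_y)
    return J @ MIRROR @ J

def reflective_bilayer(top, bottom):
    """Eq. (2): top, bottom, mirror, bottom, top (no relative rotation)."""
    J_top, J_bot = diag_phases(*top), diag_phases(*bottom)
    return J_top @ J_bot @ MIRROR @ J_bot @ J_top

# Placeholder transmission-mode phases in radians (not FDTD values).
J_refl = reflective_bilayer(top=(0.40, 1.30), bottom=(1.10, 2.60))
print(np.round(np.angle(J_refl[0, 0]), 3), np.round(np.angle(J_refl[1, 1]), 3))
```

As discussed next, this product only matches the full-wave result once a matching spacer is inserted between the mirror and the bottom nanofin and its thickness is optimized for each geometry.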
The goal is to obtain an overall response for the reflective structure that matches the behaviour of the transmissive nanofin as if it were in direct contact with the substrate. This allows us to build a reflective bilayer metasurface using the Jones matrix product by starting from the same single-layer metasurface library in transmission. To optimize the spacer, we sweep its thickness between 20 nm and 165 nm, and for each thickness, we perform a full sweep of the dimensions of the nanofin. For each nanofin geometry, we find the optimized spacer dimension that minimizes the phase error compared to the analytical one of Eq. (4).

Figure 5: Operation of a reflective metasurface. **(a)** Model of the unit cell of a single-layer metasurface operating in reflection. A layer of silica has been inserted between the mirror and the nanofin. **(b)** Phase shift obtained from the FDTD simulation of the structure shown in (a). For each geometry the spacer thickness has been optimized so that the phase difference between the simulation results and the analytical product is minimized. **(c)** Difference between the simulated and analytical phase shifts for the structure shown in (a) while optimizing the spacer thickness for each geometry. **(d)** Optimum spacer dimension as a function of the nanofin geometry. The vertical axis depicts the phase error for each optimum spacer thickness. **(e)** Model of the unit cell of bilayer metasurface working in reflection. **(f)** Phase shift obtained from the FDTD simulation of the structure shown in (e). **(g)** Phase shift error for the structure shown in (e) with optimized spacer thickness for each geometry. **(h)** Power transmission obtained from FDTD simulation for the structure shown in (e). **(i)** Error in power transmission for the structure in (e) when the spacer thickness is optimized for each geometry.

Figure 5(b) shows the phase shift obtained for each geometry when its optimized spacer thickness is selected. The error between the simulated results and the analytical ones is \(0.5^{\circ}\), on average, provided that the spacer thickness is optimized, as depicted in Fig. 5(c). In addition, Fig. 5(d) shows the optimum spacer thickness as a function of the nanofin dimensions. Here, the vertical axis depicts the phase error between FDTD and the analytical product of Eq. (4). Therefore, it is now possible to express a single-layer reflective metasurface as the Jones matrix product of single-layer decoupled nanofins. This intermediate step is essential as it enables us to design a bilayer metasurface by only relying on a single-layer library. However, the insertion of the matching spacer obviously imposes a constraint on the design of the metasurface, as it is not possible (at least with our current fabrication techniques) to build a device with spatially varying spacer thickness. Instead, we envision the final device to be made of a bilayer meta-atom where the bottom nanofin is of fixed dimensions while only sweeping the top nanofin. A metasurface unit cell of this kind can still realize an asymmetric (yet unitary) Jones matrix, point-by-point, across the structure. By making use of super-cell metasurfaces, one can break the unitarity constraint as well. To design a bilayer metasurface in reflection, we repeat the previous analysis performed in transmission. We fix the dimension of the bottom nanofin at 134 nm \(\times\) 202 nm.
This is justified because a stack of two nanofins provides 6 degrees-of-freedom, whereas an arbitrary Jones matrix requires only 4 [36]. Hence, by fixing the bottom nanofin and varying the top one, all 4 degrees-of-freedom can still be accessed. We set the spacer thickness to 100 nm, which is the optimized value for the selected geometry as suggested by Fig. 5(d). We performed a parameter sweep of the dimensions of the top nanofin without introducing a rotation angle between the fins. The previous analysis on the spacer optimization was needed to select the spacer dimension that optimizes the response of the fixed bottom geometry. This allows us to verify the assumption of decoupling in reflection for all the geometries. We also confirm that the spacer dimension obtained from the single-layer analysis is valid for the design of a bilayer metasurface in reflection, in which case the Jones matrix of the bilayer can be written as shown in Eq. (2). The results of the simulation are shown in Fig. 5(f) and Fig. 5(g). For most of the geometries, the spacer selection rule defined above allows one to accurately build a reflective bilayer starting from the library of a transmissive single-layer metasurface, given the low phase error reported in the color map of Fig. 5(g). The latter represents the phase difference between the FDTD simulations and the analytical product of Eq. (2). The average absolute phase error in this case is \(5.3^{\circ}\). The cases that show larger errors are the ones composed of a top nanofin that is much larger than the bottom one. This can be due to the reflections that occur between the mirror and the base of the top nanofin. This is reminiscent of the observation we made for the case of transmissive bilayer metasurfaces.

## 3 Conclusion

We showed that a bilayer dielectric metasurface operating in transmission can be expressed as the product of two decoupled single-layer metasurfaces under some constraints. In this process, we distinguished regions in which the bilayer coupling is governed by resonance versus back reflections and we provided systematic recipes to avoid operation in both regimes. Furthermore, we demonstrated that it is also possible to express a reflective bilayer metasurface as the product of five matrices which describe the nanofins composing the structure, the reflective mirror, and a matching spacer in between. By combining our near- and far-field analyses, we narrow down the design space to a smaller subset of geometries that are essentially decoupled. Notably, by excluding the meta-atoms with strong coupling from the design library--either by avoiding resonant structures or bilayer geometries with very large top nanofins--one can efficiently build a multi-layer metasurface as a cascade of single-layer meta-atoms. In this case, fitting a target profile to a library will entail decomposing it into a product of two matrices and fitting each one following the same selection criteria as for a single-layer nanofin. We validated the applicability of our approach to a wide range of libraries by considering a titanium dioxide platform at a design wavelength of 532 nm in addition to silicon at 1550 nm, demonstrating its generality.

## Acknowledgment

We thank Dr. D. Lim, N. Rubin, A. Zaidi, and L. Li, all from Harvard University, for the insightful discussions. The authors from Harvard University acknowledge financial support from Corning Incorporated. Lastly, financial support from the Office of Naval Research (ONR) MURI program, under grant no. N00014-20-1-2450, is acknowledged.
## Disclosures

The authors declare no conflicts of interest.

## Supplemental document

See Supplement 1 for supporting content.

## Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

## References

* [1] N. A. Rubin, Z. Shi, and F. Capasso, "Polarization in diffractive optics and metasurfaces," Adv. Opt. Photon. **13**, 836-970 (2021).
* [2] N. A. Rubin, G. D'Aversa, P. Chevalier, Z. Shi, W. T. Chen, and F. Capasso, "Matrix Fourier optics enables a compact full-Stokes polarization camera," Science **365**, eaax1839 (2019).
* [17] J. Li, S. Chen, H. Yang, J. Li, P. Yu, H. Cheng, C. Gu, H. Chen, and J. Tian, "Simultaneous control of light polarization and phase distributions using plasmonic metasurfaces," Adv. Funct. Mater. **25**, 704-710 (2015).
* [18] J. Park, J. Kang, S. J. Kim, X. Liu, and M. L. Brongersma, "Dynamic reflection phase and polarization control in metasurfaces," Nano Lett. **17**, 407-413 (2017).
* [19] J. P. Balthasar Mueller, N. A. Rubin, R. C. Devlin, B. Groever, and F. Capasso, "Metasurface polarization optics: independent phase control of arbitrary orthogonal states of polarization," Phys. Rev. Lett. **118**, 113901 (2017).
* A. Pors and S. I. Bozhevolnyi, "Plasmonic metasurfaces for efficient phase control in reflection," Opt. Express **21**, 27438-27451 (2013).
* [32] T. Chang, J. Jung, S.-H. Nam, H. Kim, J. U. Kim, N. Kim, S. Jeon, M. Heo, and J. Shin, "Universal metasurfaces for complete linear control of coherent light transmission," Adv. Mater. **34**, 2204085 (2022).
* [33] Y.-W. Huang, N. A. Rubin, A. Ambrosio, Z. Shi, R. C. Devlin, C.-W. Qiu, and F. Capasso, "Versatile total angular momentum generation using cascaded j-plates," Opt. Express **27**, 7469-7484 (2019).
* [34] B. Mirzapourbeinekalaye, A. McClung, and A. Arbabi, "General lossless polarization and phase transformation using bilayer metasurfaces," Adv. Opt. Mater. **10**, 2102591 (2022).
* [35] E. W. Wang, T. Phan, S.-J. Yu, S. Dhuey, and J. A. Fan, "Dynamic circular birefringence response with fractured geometric phase metasurface systems," Proc. National Acad. Sci. **119**, e2122085119 (2022).
* [36] Y. Bao, F. Nan, J. Yan, X. Yang, C.-W. Qiu, and B. Li, "Observation of full-parameter jones matrix in bilayer metasurface," Nat. Commun. **13**, 7550 (2022).
* [37] H. Zheng, M. He, Y. Zhou, I. I. Kravchenko, J. D. Caldwell, and J. G. Valentine, "Compound meta-optics for complete and loss-less field control," ACS Nano **16**, 15100-15107 (2022).
* [38] P. Georgi, Q. Wei, B. Sain, C. Schlickriede, Y. Wang, L. Huang, and T. Zentgraf, "Optical secret sharing with cascaded metasurface holography," Sci. Adv. **7**, eabf9718 (2021).
* [40] D. M. Whittaker and I. S. Culshaw, "Scattering-matrix treatment of patterned multilayer photonic structures," Phys. Rev. B **60**, 2610-2618 (1999).
* [41] C. Wan and J.
Encinar, "Efficient computation of generalized scattering matrix for analyzing multilayered periodic structures," IEEE Trans. on Antennas Propag. **43**, 1233-1242 (1995).
* [42] R. Hall, R. Mittra, and K. Mitzner, "Analysis of multilayered periodic structures using generalized scattering matrix theory," IEEE Trans. on Antennas Propag. **36**, 511-517 (1988).
* [43] S. Colburn, A. Zhan, E. Bayati, J. Whitehead, A. Ryou, L. Huang, and A. Majumdar, "Broadband transparent and cmos-compatible flat optics with silicon nitride metasurfaces [invited]," Opt. Mater. Express **8**, 2330-2344 (2018).

## Supplementary Information

Do dielectric bilayer metasurfaces behave as a stack of decoupled single-layer metasurfaces?

Alfonso Palmieri\({}^{1,\dagger}\), Ahmed H. Dorrah\({}^{1,\dagger}\), Jun Yang\({}^{2}\), Jaewon Oh\({}^{1}\), Paulo Dainese\({}^{2,\ast}\), and Federico Capasso\({}^{1,\ast}\)

\({}^{1}\)Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138, USA
\({}^{2}\)Corning Inc., Painted Post, New York 14870, United States
\({}^{\dagger}\)These authors contributed equally
\({}^{\ast}\)[email protected]; [email protected]

###### Contents

* 1 Generating a single-layer metasurface library
* 2 Bilayer simulations: the effect of relative rotation
* 3 Phase coverage of the bilayer metasurface operating in transmission
* 4 The effect of resonance and of back reflections
* 5 Single-layer metasurface in reflection
* 6 The effect of a relative rotation of \(45^{\circ}\) in reflection: sweeping the dimensions of the top nanofin while fixing the dimensions of the bottom one
* 7 Near-field analysis for a Silicon bilayer metasurface

## 1 Generating a single-layer metasurface library

In this section, we present a systematic strategy for designing a single-layer metasurface. We start by building a metasurface library based on the results and ideas discussed in [1]. With the aim of obtaining a direct relation between the phase shift and the dimensions of the nanofins, a library has been generated by performing a parameter sweep with finite-difference time-domain (FDTD) simulations. In a simplified picture, each subwavelength structure can be considered as a truncated waveguide or a low-quality-factor Fabry-Perot resonator. Nanofins with different dimensions (length and width) will induce different confinement of the field impinging on the structure. This confinement provides an effective refractive index that differs along the two polarization components. The FDTD software used is Ansys Lumerical. It solves Maxwell's equations on a discrete spatial and temporal grid in complex geometries such as the one analyzed here. A model of the simulated structure is the one shown in Fig. 1(a); it is composed of a TiO\({}_{2}\) rectangular nanofin on a fused silica substrate. The boundary conditions applied at the edges of the simulation box are the Periodic Boundary Conditions (PBC), which emulate the existence of an infinitely periodic array of these rectangular fins so that the simulated structure is an infinite array of the same metasurface unit cell. The dimensions of the meta-atom, namely \(D_{x}\) and \(D_{y}\), range from 50 to 250 nm, spanning a total of 2601 geometries which cover the phase range from 0 to \(2\pi\). The center-to-center separation \(d\), which defines the unit cell size, is 420 nm. The structure is illuminated by an x-polarized plane wave source at 532 nm that has been placed at a distance of 600 nm from the bottom of the substrate.
At this wavelength the refractive index of the fused silica and of TiO\({}_{2}\) are set to 1.46 and 2.48, respectively. For the reflective metasurfaces, we set the refractive index of the Aluminum mirror to \(0.811+i0.366\). A monitor is placed a few wavelengths above the nanofin in order to evaluate the phase response in the far-field and the percentage of transmitted power. The mesh size of this simulation is set to 2.5 nm \(\times\) 2.5 nm \(\times\) 2.5 nm. The obtained results are shown in Fig. 1 showing an excellent agreement with those reported in Ref [1]. Note that for the case of a \(y\)-polarized source, the plots of the phase (\(\phi_{y}\)) and power transmission (\(T_{y}\)) would be identical to those shown in Fig. 1 but with the \(x\) and \(y\) axes exchanged (due to symmetry) so they are not shown. It is observed that the considered parameter space is able to completely cover the 0\(-\)2\(\pi\) phase range. The Power Transmission plot depicted in Fig. 1(b) shows that shorter nanofins provide almost uniform unitary transmission while a few geometries with larger \(D_{x}\) manifest resonances and transmission dips. In the complex transmission plot shown in Fig. 1(c) the red dots correspond to the electric field amplitude of each simulated geometry. The average transmission is around 0.91 (black curve) and the red circle represent the unitary circle. Thanks to the obtained results it is possible to map each nanofin geometry to a different (symmetric) Jones matrix. ## 2 Bilayer simulations: the effect of relative rotation In this section, we consider a bilayer metasurface and we analyze the effect of relative rotation between its top and bottom nanofins. Two cases are tested: in the first case we set the dimensions of both the bottom and top nanofins to be 134 nm \(\times\) 202 nm while in the second case we set the dimensions of both nanofins to be 114 nm \(\times\) 154 nm. The rotation angle of the bottom nanofin around its geometrical center is set to zero (\(\theta\) = 0) and we rotate the top nanofin (\(\theta^{\prime}\)) between 0\({}^{\circ}\) and 90\({}^{\circ}\) with increments of 15\({}^{\circ}\). Other rotation angles for the top nanofin beyond \(\theta^{\prime}>\) 90 are naturally covered by this sweep (following a simple symmetry argument). In Tables 1 - 4, we report element-by-element comparison between the Jones matrix of the simulated bilayer (\(\mathbf{J_{s}}\)) and the "analytical Jones matrix (\(\mathbf{J_{a}}\))" obtained as follows: \[\mathbf{J_{a}}=\mathbf{J_{\mathrm{top}}}\cdot\mathbf{J_{\mathrm{bottom}}}=R(- \theta^{\prime})\cdot\left[\begin{matrix}e^{i\phi_{x,\mathrm{top}}}&0\\ 0&e^{i\phi_{y,\mathrm{top}}}\end{matrix}\right]\cdot R(\theta^{\prime})\cdot \left[\begin{matrix}e^{i\phi_{x,\mathrm{bottom}}}&0\\ 0&e^{i\phi_{y,\mathrm{bottom}}}\end{matrix}\right], \tag{1}\] Figure 1: Simulation data for two-dimensional parameter sweeps of TiO\({}_{2}\) rectangular fins (\(h\) = 600 nm). **(a)** Phase shift \(\phi_{x}\) on x-polarized light. The phase shift has been computed as the ratio between the phase collected in the center of the far field projection of a monitor above the structure when the nanofin is present on top of the substrate and the phase at the same monitor when only the silica substrate is present. Units in radians. **(b)** Power Transmission \(T_{x}\) for x-polarized light. The total power passing through a monitor above the structure relative to the source. **(c)** Complex Transmission. 
| | \(\lvert J_{1,1;a}\rvert-\lvert J_{1,1;s}\rvert\) | \(\lvert J_{1,2;a}\rvert-\lvert J_{1,2;s}\rvert\) | \(\lvert J_{2,1;a}\rvert-\lvert J_{2,1;s}\rvert\) | \(\lvert J_{2,2;a}\rvert-\lvert J_{2,2;s}\rvert\) |
| --- | --- | --- | --- | --- |
| \(\theta^{\prime}=0^{\circ}\) | 0.0653 | 0 | 0 | -0.0548 |
| \(\theta^{\prime}=15^{\circ}\) | 0.0307 | -0.0154 | 0.0434 | -0.0547 |
| \(\theta^{\prime}=30^{\circ}\) | -0.0203 | -0.0345 | 0.0148 | -0.0386 |
| \(\theta^{\prime}=45^{\circ}\) | -0.0281 | -0.0404 | -0.0280 | -0.0155 |
| \(\theta^{\prime}=60^{\circ}\) | -0.0398 | -0.0381 | -0.0443 | -0.0101 |
| \(\theta^{\prime}=75^{\circ}\) | -0.0643 | -0.0257 | -0.0328 | -0.0225 |
| \(\theta^{\prime}=90^{\circ}\) | -0.0776 | 0 | 0 | -0.0323 |

Table 1: Element-by-element comparison between the magnitude of the Jones matrix elements of the simulated bilayer \(\mathbf{J_{s}}\) and the elements of the analytical Jones matrix \(\mathbf{J_{a}}\) for the bilayer composed of two nanofins of dimensions 134 nm \(\times\) 202 nm.

Table 2: Element-by-element comparison of the phase of the Jones matrix elements of the simulated bilayer \(\mathbf{J_{s}}\) and the elements of the analytical Jones matrix \(\mathbf{J_{a}}\) for the bilayer composed of two nanofins of dimensions 134 nm \(\times\) 202 nm.
| | \(\lvert J_{1,1;a}\rvert-\lvert J_{1,1;s}\rvert\) | \(\lvert J_{1,2;a}\rvert-\lvert J_{1,2;s}\rvert\) | \(\lvert J_{2,1;a}\rvert-\lvert J_{2,1;s}\rvert\) | \(\lvert J_{2,2;a}\rvert-\lvert J_{2,2;s}\rvert\) |
| --- | --- | --- | --- | --- |
| \(\theta^{\prime}=0^{\circ}\) | -0.0291 | 0 | 0 | 0.0112 |
| \(\theta^{\prime}=15^{\circ}\) | -0.0343 | 0.0277 | 0.0122 | 0.0564 |
| \(\theta^{\prime}=30^{\circ}\) | -0.0324 | 0.0174 | -0.0036 | 0.0375 |
| \(\theta^{\prime}=45^{\circ}\) | -0.0367 | 0.0029 | 0.0085 | 0.0025 |
| \(\theta^{\prime}=60^{\circ}\) | -0.0523 | -0.0002 | -0.0005 | -0.0309 |
| \(\theta^{\prime}=75^{\circ}\) | -0.0609 | 0.0108 | 0.0158 | -0.0469 |
| \(\theta^{\prime}=90^{\circ}\) | -0.0560 | 0 | 0 | -0.0439 |

Table 3: Element-by-element comparison of the magnitude of the Jones matrix elements of the simulated bilayer \(\mathbf{J_{s}}\) and the elements of the analytical Jones matrix \(\mathbf{J_{a}}\) for the bilayer composed of two nanofins of dimensions 114 nm \(\times\) 154 nm.

| | \(\angle(J_{1,1;a})-\angle(J_{1,1;s})\) | \(\angle(J_{1,2;a})-\angle(J_{1,2;s})\) | \(\angle(J_{2,1;a})-\angle(J_{2,1;s})\) | \(\angle(J_{2,2;a})-\angle(J_{2,2;s})\) |
| --- | --- | --- | --- | --- |
| \(\theta^{\prime}=0^{\circ}\) | -1.8\({}^{\circ}\) | 0 | 0 | 1.7\({}^{\circ}\) |
| \(\theta^{\prime}=15^{\circ}\) | -1.2\({}^{\circ}\) | -3.9\({}^{\circ}\) | 1.6\({}^{\circ}\) | 0.8\({}^{\circ}\) |
| \(\theta^{\prime}=30^{\circ}\) | -1.2\({}^{\circ}\) | 1.7\({}^{\circ}\) | 0.1\({}^{\circ}\) | 1.2\({}^{\circ}\) |
| \(\theta^{\prime}=45^{\circ}\) | -1.6\({}^{\circ}\) | 2.2\({}^{\circ}\) | 0.7\({}^{\circ}\) | 0.1\({}^{\circ}\) |
| \(\theta^{\prime}=60^{\circ}\) | -1.7\({}^{\circ}\) | 0.2\({}^{\circ}\) | 1.8\({}^{\circ}\) | 1.3\({}^{\circ}\) |
| \(\theta^{\prime}=75^{\circ}\) | -2.6\({}^{\circ}\) | 2.6\({}^{\circ}\) | 3.4\({}^{\circ}\) | 2.7\({}^{\circ}\) |
| \(\theta^{\prime}=90^{\circ}\) | 2.2\({}^{\circ}\) | 0 | 0 | 2.6\({}^{\circ}\) |

Table 4: Element-by-element comparison of the phase of the Jones matrix elements of the simulated bilayer \(\mathbf{J_{s}}\) and the elements of the analytical Jones matrix \(\mathbf{J_{a}}\) for the bilayer composed of two nanofins of dimensions 114 nm \(\times\) 154 nm.

The discrepancy that arises between the Jones matrix of the simulated bilayer and the "analytical" Jones matrix is on average not so large and can be considered acceptable for both the phase and the amplitude. This implies that, at least for the tested geometries, the output response can be calculated using Eq. (5). To reconcile the discrepancies, recall that we apply the Periodic Boundary Conditions (PBC) when simulating both the single and bilayer metasurfaces. The PBC emulates a periodic array composed of identical nanofins with zero rotation angle everywhere, producing an electric field at the borders of the unit cell that is different from the case of a rotated nanofin. Although the nanofins are rotated, the square unit cells comprising the simulated structure are always fixed. Hence, the lattice symmetry is different from the case reported in Fig. 1. Accordingly, applying the rotation matrix on the simulation data of the aligned single-layer nanofins--calculating Eq. (5)--does not entirely capture their response under rotation and is thus expected to slightly differ from the full-wave simulations.
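To make the bookkeeping behind Eq. (1) and Tables 1-4 explicit, a minimal numpy sketch of the decoupled product is shown below. The phase pairs are placeholder values standing in for entries of the single-layer library of Fig. 1, and the transmission amplitudes are set to unity for simplicity; the sketch only illustrates how the analytical Jones matrix is assembled and compared element by element, not the reported numbers.

```python
import numpy as np

def rot(theta):
    """2x2 rotation matrix R(theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def analytic_bilayer_jones(phi_top, phi_bottom, theta_prime):
    """Decoupled estimate of Eq. (1):
    J_a = R(-theta') . diag(e^{i phi_top}) . R(theta') . diag(e^{i phi_bottom})."""
    J_top = np.diag(np.exp(1j * np.asarray(phi_top, dtype=float)))
    J_bottom = np.diag(np.exp(1j * np.asarray(phi_bottom, dtype=float)))
    return rot(-theta_prime) @ J_top @ rot(theta_prime) @ J_bottom

# Placeholder library phases (phi_x, phi_y) in radians for the two nanofins.
phi_top = (1.2, 2.9)
phi_bottom = (1.2, 2.9)

for deg in range(0, 105, 15):
    Ja = analytic_bilayer_jones(phi_top, phi_bottom, np.deg2rad(deg))
    # Magnitude and phase of each element, to be compared element by element
    # with the simulated J_s, as in Tables 1-4.
    print(deg, np.round(np.abs(Ja), 4), np.round(np.angle(Ja, deg=True), 1))
```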
The data reported in Tables 1-4 suggest that the discrepancy between the analytical and simulated cases is higher for the bilayer geometry of dimensions 134 nm \(\times\) 202 nm. This is likely due to the larger mismatch between the major and minor axes (compared to the 114 nm \(\times\) 154 nm geometry), which in turn reduces the overlap region between the top and bottom nanofins, thereby introducing more reflections and making this geometry more sensitive to the rotation-dependent variation of the electric field at the boundary.

## 3 Phase coverage of the bilayer metasurface operating in transmission

Here, we report the results obtained from the simulation of the structure shown in Fig. 1(b). The dimensions of the bottom nanofin are fixed (134 nm \(\times\) 202 nm) while the dimensions of the top nanofin are swept between 50 nm and 250 nm. Figure 2 depicts the phase shift of the diagonal elements of the Jones matrix of the simulated structure, \(J_{1,1}\) and \(J_{2,2}\). These results suggest that both elements (in the analyzed parameter space) can cover a phase range between 0 and \(2\pi\), as reported on the vertical axis of the plots. The color bars depict the error between the simulated results and the analytical ones for both Jones matrix elements. Similar results were obtained for the case of a bilayer metasurface operating in reflection with fixed bottom nanofin dimensions.

Figure 2: Phase error between the Jones matrix elements of the full-wave simulated bilayer and the corresponding analytically computed ones. Full 0 to \(2\pi\) phase coverage can be achieved with low phase error. Here the bottom nanofin dimensions are fixed. **(a)** Difference between the phase of element \(J_{1,1}\) extracted from the simulation and the phase of \(J_{1,1}\) analytically computed from the data of the single layer. The vertical axis depicts the phase shift of each geometry, suggesting 0 to \(2\pi\) phase coverage. **(b)** Difference between the phase of \(J_{2,2}\) extracted from the simulation and the analytical calculations. Full phase coverage can again be achieved, as before.

## 4 The effect of resonance and of back reflections

There are two main sources of errors when describing the bilayer as a stack of two decoupled single layers: a) Fabry-Perot-like back reflections that typically occur between the top nanofin and the substrate if the top nanofin is larger than the bottom one, and b) coupling through the evanescent fields when either nanofin operates near resonance. In this section, we study these sources of error in more detail. Figure 3 shows the field profile in the xz-plane of all possible geometries that cover these two sources of error. The results were obtained using full-wave (FDTD) simulations. By varying the gap between the top and bottom nanofins and recording the amplitude response, we gain some insight into the bilayer coupling strength. In Fig. 3(a) we consider a bilayer meta-atom composed of two nanofins at resonance. In this case, as has been pointed out in the near-field section, the two nanofins are coupled through the evanescent modes (which are more significant when the nanofins are in proximity). Consequently, by varying the gap between the two nanofins, the field profiles and their confinement are altered. Such behavior is consistent with the S- and T-matrix analysis (of Fig. 3(c)) where a discrepancy between the full S- and T-matrices versus the (\(2\times 2\)) matrices was observed.
Figure 3(b) depicts another case in which the top nanofin is at resonance and is larger than the bottom one. Interference fringes are observed between the top nanofin and the substrate due to the back reflections. This is consistent with the oscillations in transmission amplitude observed in Fig. 3(b). The mode shape in the top nanofin is unperturbed. Hence, the effect of back reflections is more significant than resonance, causing the two nanofins to be strongly coupled. To complement this picture, we consider another geometry in which the top nanofin is still at resonance but is smaller than the bottom one; hence, the back reflections are mitigated. In this case, the field profile still does not change as a function of gap size and no interference fringes are observed. Therefore, unlike the previous case, the two nanofins can be considered decoupled even in the presence of resonance. Resonance becomes much more significant when it occurs in the bottom nanofin; the bilayer coupling becomes strong due to the interaction of the evanescent fields. If the top nanofin is too large (considering our parameter space, if its dimensions roughly exceed 130 nm \(\times\) 130 nm), these evanescent fields are strongly coupled. This is confirmed by the FDTD simulations shown in Fig. 3(d) where the mode profiles in both the top and bottom nanofins are clearly perturbed as the gap size is varied.

Figure 3: Simulated field profiles of different example bilayer meta-atoms (in the xz-plane). The input polarization is \(E_{x}\) (in plane). **(a)** Both top and bottom nanofins are at resonance; exhibiting strong coupling. **(b)** Top nanofin is at resonance and is larger than the bottom one; causing back reflections. **(c)** Top nanofin is at resonance but is smaller in size than the bottom one; exhibiting very weak coupling. **(d)** Bottom nanofin is at resonance. The top nanofin dimensions are fairly large (170 nm \(\times\) 170 nm); showing strong coupling. **(e)** Bottom nanofin at resonance. The top nanofin dimensions are fairly small (90 nm \(\times\) 90 nm); effectively decoupled. **(f)** Neither bottom nor top nanofin at resonance. Top nanofin is larger than the bottom one; introducing back reflections. **(g)** Neither the bottom nor the top nanofins are at resonance. Top nanofin is smaller than the bottom one; a perfectly decoupled geometry.

On the other hand, in the limit when the top nanofin is much smaller (while the bottom is still at resonance), the coupling is weak and the field profile does not change when varying the gap size, as shown in Fig. 3(e). Figure 3(f) emphasizes the effect of back reflections in the absence of resonance (both nanofins are off-resonance). One can observe the Fabry-Perot-like interference fringes that arise between the top nanofin and the substrate. This effect was not captured by the (\(2\times 2\)) T-matrix but rather by the S-matrices. This confirms that this geometry cannot be treated as a stack of two decoupled single layers. Lastly, in Fig. 3(g) we consider a more straightforward case with neither resonances nor back reflections. Here, both nanofins operate off-resonance. The top nanofin is smaller than the bottom one. As can be noticed, the field profile remains identical regardless of the gap size. No fringes arise and the two nanofins are effectively decoupled.
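The role of the back reflections discussed above can be illustrated with a toy scalar model in which each layer is reduced to a single complex transmission and reflection coefficient. The sketch below (with placeholder coefficients, not values extracted from the simulations) contrasts the decoupled product with a Fabry-Perot-style sum over round trips: the former is independent of the gap size, while the latter produces the gap-dependent amplitude oscillations seen when back reflections are present. It is only a cartoon of the mechanism; the quantitative analysis relies on the full vectorial S- and T-matrices of the main text.

```python
import numpy as np

wavelength = 532e-9
# Placeholder complex coefficients for the two layers (illustrative only).
t_bottom, r_bottom = 0.97 * np.exp(1j * 1.1), 0.15
t_top, r_top = 0.95 * np.exp(1j * 2.0), 0.25

def decoupled(gap):
    """Decoupled estimate: simple product of the two layers, the gap adds only a phase."""
    return t_bottom * t_top * np.exp(1j * 2 * np.pi * gap / wavelength)

def with_back_reflections(gap):
    """Keep the multiple reflections bouncing between the two layers (Fabry-Perot-like)."""
    round_trip = r_bottom * r_top * np.exp(2j * 2 * np.pi * gap / wavelength)
    return decoupled(gap) / (1 - round_trip)

for gap in np.linspace(0, 400e-9, 9):
    print(f"gap = {gap * 1e9:5.1f} nm   |t| decoupled = {abs(decoupled(gap)):.3f}   "
          f"|t| with back reflections = {abs(with_back_reflections(gap)):.3f}")
```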
## 5 Single-layer metasurface in reflection

Here, we show the simulation results of the single-layer metasurface operating in reflection without introducing a spacer between the mirror and the nanofin. As mentioned in the main text, looking at the results of the analytical calculations (Fig. 4(a)) and those of the FDTD simulations (Fig. 4(b)), one can immediately notice the significant differences over the full parameter space. The average phase error between the two cases is on the order of \(10^{\circ}\).

Figure 4: Single-layer metasurface working in reflection. **(a)** Phase shift obtained from the analytical product. **(b)** Phase shift obtained from the FDTD simulation of a single-layer metasurface working in reflection without the introduction of the spacer.

## 6 The effect of a relative rotation of \(45^{\circ}\) in reflection: sweeping the dimensions of the top nanofin while fixing the dimensions of the bottom one

To verify if the assumption of decoupling is also valid when a relative rotation between the metasurface nanofins is introduced, we simulated a bilayer structure while fixing the bottom nanofin and sweeping the dimensions of the top nanofin. Here, the top nanofin is rotated with respect to the bottom one by an angle of \(45^{\circ}\). In this case it is necessary to introduce a rotation matrix \(R(\theta)\) that sandwiches the Jones matrix of the top nanofin to take into account its rotation. The Jones matrix of this structure can be analytically written as follows:

\[\mathbf{J_{a}}=\mathbf{J_{\text{bottom}}}\cdot\mathbf{J_{\text{top}}}=\begin{bmatrix}e^{i\phi_{x,\text{bottom}}}&0\\ 0&e^{i\phi_{y,\text{bottom}}}\end{bmatrix}\cdot R(-\theta^{\prime})\cdot\begin{bmatrix}e^{i\phi_{x,\text{top}}}&0\\ 0&e^{i\phi_{y,\text{top}}}\end{bmatrix}\cdot R(\theta^{\prime}), \tag{2}\]

where \(R(\theta)\) is the \(2\times 2\) rotation matrix defined as:

\[R(\theta)=\begin{bmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{bmatrix}. \tag{3}\]

Hence, by introducing a relative rotation, it is possible to obtain a metasurface where the four Jones matrix elements are all different. In Supplementary Figures 5 and 6 we show the magnitudes and phases of the four elements of the Jones matrix describing the bilayer, obtained via FDTD simulations. Interestingly, the geometries that now exhibit larger phase error -- for elements \(J_{1,1}\) and \(J_{2,2}\) -- are not the ones in the top right of the plot (as was the case with no rotation) but rather the ones in the top left and bottom right. The average phase error is \(1.14^{\circ}\) for the element \(J_{1,1}\) and \(1.02^{\circ}\) for the element \(J_{2,2}\), exhibiting less error compared to the case where no rotation angle was introduced. This is because the rotation increases the overlap region between the top and bottom nanofins, thereby reducing multiple reflections between the top nanofin and the substrate. For the off-diagonal Jones matrix elements, the largest phase error is in both cases on the diagonal of the plot. On the other hand, the phase of these elements is not relevant since, as can be noticed from Fig. 5, the magnitude of these elements is zero. The average absolute phase error for the elements \(J_{1,2}\) and \(J_{2,1}\), excluding the points on the diagonal, is around \(3^{\circ}\).

Figure 5: Magnitude of the four elements of the Jones matrix of a bilayer metasurface whose bottom nanofin dimensions are set to 134 nm \(\times\) 202 nm while the dimensions of the top nanofin are swept between 50 and 250 nm.
Figure 6: Phase of the four elements of the Jones matrix of a bilayer metasurface whose bottom nanofin dimensions are set to 134 nm \(\times\) 202 nm while the dimensions of the top nanofin are swept between 50 and 250 nm.

Figure 7: Error in phase of the four elements of the Jones matrix of a bilayer metasurface whose bottom nanofin dimensions are set to 134 nm \(\times\) 202 nm while the dimensions of the top nanofin are swept between 50 and 250 nm.

## 7 Near-field analysis for a Silicon bilayer metasurface

In this section, we expand our analysis by considering silicon as another material platform at a design wavelength, \(\lambda=1550\) nm. Our aim is to show that the physical dynamics associated with back reflections and coupling in a bilayer metasurface are, to some extent, independent of the material platform, its parameter space, and wavelength. The phase and transmission library for a single-layer silicon nanofin is depicted in Fig. 8. Using this library, we then performed a near-field analysis for a silicon bilayer metasurface at 1550 nm, considering a unit-cell size of \(500\times 500\) nm and 1-\(\mu\)m-tall nanofins. We adopted the same scattering and transmission matrix approaches used for the TiO\({}_{2}\) platform in the main text. We tested the same four categories considered before, assuming x-polarized incident light.

Figure 9(a) shows the first case: a bilayer metasurface composed of two off-resonance nanofins with a smaller nanofin at the top. It is observed that the full S-matrix transmission oscillates slightly (0.01%) around a mean value which matches the (2\(\times\)2) T-matrix. In this case, the Jones matrix is able to correctly describe the nanofin dynamics as if they were decoupled. This is consistent with the results obtained for TiO\({}_{2}\). Furthermore, in the same figure, we show the field profile in the xz-plane obtained using FDTD simulations. In this case, since neither the evanescent coupling nor back reflections play a significant role, the field profile remains identical regardless of the gap size. Figure 9(b) shows the case when the top nanofin is larger than the bottom one. Due to the size mismatch between the bottom and top nanofins, the effect of back reflections results in a large deviation between the responses of the S-matrices and the T-matrices. More specifically, the Fabry-Perot cavity effect created between the top nanofin and the substrate is emphasized here by looking at the field profile and the associated interference fringes. Figure 9(c) represents the third case where two identical nanofins operate at resonance. Here, the coupling through evanescent modes causes a large discrepancy between the full S- (and T-) matrices, compared to the (2\(\times\)2) T- (and S-) matrices. By looking at the field profiles, we observe that the field confinement due to resonance is highly dependent on the gap size. On the other hand, if the top nanofin is much smaller, while the bottom is still at resonance, the evanescent coupling becomes weak and the field confinement exhibits less dependence on the gap size. This is depicted in Fig. 9(d) where the full S-matrix and the (2\(\times\)2) T-matrix responses are in very good agreement. In summary, this analysis shows that the rules governing the evanescent coupling and back reflections in bilayer metasurfaces are somewhat universal, regardless of the material platform and design wavelength. The underlying physics dictating whether a bilayer meta-atom can be described as two decoupled nanofins thus applies to different metasurface libraries.
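The qualitative criteria emerging from Figs. 3 and 9 can be collected into a simple decision helper, sketched below. The size threshold and the way the nanofin footprints are compared are rough stand-ins for the library-specific behavior discussed above, so the function should be read as a restatement of the observed trends rather than a quantitative design rule.

```python
def coupling_regime(top_dims_nm, bottom_dims_nm, top_resonant, bottom_resonant):
    """Qualitative classification of a bilayer meta-atom (dimensions given as (Dx, Dy) in nm).
    Mirrors the trends seen in the near-field profiles: back reflections when the top
    nanofin is larger, evanescent coupling near resonance, decoupled otherwise."""
    top_larger = max(top_dims_nm) > max(bottom_dims_nm)  # crude footprint comparison
    if top_larger:
        return "back reflections: treat as coupled"
    if top_resonant and bottom_resonant:
        return "evanescent coupling: treat as coupled"
    if bottom_resonant and min(top_dims_nm) > 130:  # ~130 nm threshold quoted for the TiO2 sweep
        return "evanescent coupling: treat as coupled"
    return "effectively decoupled: stack of single layers"

# Example loosely corresponding to Fig. 3(e): small off-resonance top nanofin above a
# resonant bottom nanofin (the bottom dimensions here are placeholders).
print(coupling_regime((90, 90), (134, 202), top_resonant=False, bottom_resonant=True))
```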
Figure 8: Simulation data for two-dimensional parameter sweeps of Si rectangular fins (\(h=1000\) nm). **(a)** Power Transmission \(T_{x}\) for x-polarized light. The total power passing through a monitor above the structure relative to the source. **(b)** Phase shift \(\phi_{x}\) on x-polarized light. The phase shift has been computed as the ratio between the phase collected in the center of the far-field projection of a monitor above the structure when the nanofin is present on top of the substrate and the phase at the same monitor when only the silica substrate is present. Units in radians.

Figure 9: Transmission amplitude response from S- and T-matrices under x-polarization and FDTD profiles for a Silicon bilayer metasurface operating at 1550 nm. Four cases are considered. **(a)** Two nanofins operating off resonance. **(b)** Bilayer meta-atom composed of a top nanofin larger than the bottom. **(c)** Two identical nanofins operating at resonance. **(d)** Only the bottom nanofin of the bilayer meta-atom is at resonance and is larger than the top nanofin.
2309.05879
Generalized Attacks on Face Verification Systems
Face verification (FV) using deep neural network models has made tremendous progress in recent years, surpassing human accuracy and seeing deployment in various applications such as border control and smartphone unlocking. However, FV systems are vulnerable to Adversarial Attacks, which manipulate input images to deceive these systems in ways usually unnoticeable to humans. This paper provides an in-depth study of attacks on FV systems. We introduce the DodgePersonation Attack that formulates the creation of face images that impersonate a set of given identities while avoiding being identified as any of the identities in a separate, disjoint set. A taxonomy is proposed to provide a unified view of different types of Adversarial Attacks against FV systems, including Dodging Attacks, Impersonation Attacks, and Master Face Attacks. Finally, we propose the ''One Face to Rule Them All'' Attack which implements the DodgePersonation Attack with state-of-the-art performance on a well-known scenario (Master Face Attack) and which can also be used for the new scenarios introduced in this paper. While the state-of-the-art Master Face Attack can produce a set of 9 images to cover 43.82% of the identities in their test database, with 9 images our attack can cover 57.27% to 58.5% of these identities while giving the attacker the choice of the identity to use to create the impersonation. Moreover, the 9 generated attack images appear identical to a casual observer.
Ehsan Nazari, Paula Branco, Guy-Vincent Jourdan
2023-09-12T00:00:24Z
http://arxiv.org/abs/2309.05879v1
# Generalized Attacks on Face Verification Systems ###### Abstract Face verification (FV) using deep neural network models has made tremendous progress in recent years, surpassing human accuracy and seeing deployment in various applications such as border control and smartphone unlocking. However, FV systems are vulnerable to Adversarial Attacks, which manipulate input images to deceive these systems in ways usually unnoticeable to humans. This paper provides an in-depth study of attacks on FV systems. We introduce the DodgePersonation Attack that formulates the creation of face images that impersonate a set of given identities while avoiding being identified as any of the identities in a separate, disjoint set. A taxonomy is proposed to provide a unified view of different types of Adversarial Attacks against FV systems, including Dodging Attacks, Impersonation Attacks, and Master Face Attacks. Finally, we propose the "One Face to Rule Them All" Attack which implements the DodgePersonation Attack with state-of-the-art performance on a well-known scenario (Master Face Attack) and which can also be used for the new scenarios introduced in this paper. While the state-of-the-art Master Face Attack [22] can produce a set of 9 images to cover 43.82% of the identities in their test database, with 9 images our attack can cover 57.27% to 58.5% of these identities while giving the attacker the choice of the identity to use to create the impersonation. Moreover, the 9 generated attack images appear identical to a casual observer. ## 1 Introduction Face verification (FV) involves checking whether two face images represent the same person [15]. In 2014, DeepFace [26], a model based on convolutional neural networks, achieved human accuracy in FV on Labeled Faces in the Wild (LFW) dataset [9]. Since then, neural network models have not only surpassed human accuracy but have also progressed to the point where these technologies are utilized in public safety applications such as border control procedures and also in commercial applications like unlocking smartphones. Under benign conditions, these models demonstrate excellent accuracy. However, their robustness is questionable when faced with an adversary. These models are prone to inaccurate output predictions when faced with imperceptible or perceptible but natural-looking adversarial input images [27]. Such attacks are classified into (1) Physical Attacks, in which the input to a model is manipulated (e.g. printing a photo, or printing a face in 3D and presenting it to the FV system); and (2) Digital Attacks, in which digital images are manipulated to fool the FV system [27]. In this paper, we focus on a specific type of Digital Attack called an Adversarial Attack that manipulates a face image in such a way that to the human eye the changes are not discernible, but the FV system does not recognize the correct identity. These attacks are imperceptible to humans and can drive the construction of Physical Attacks, thus presenting a high risk. Various attacks have been developed to deceive an FV system. For instance, in the Dodging Attack a face image is de-identified so the FV system cannot recognize it as the original face, but to the human eye it is still the same person [2, 11]. Another method is known as the Impersonation Attack, where a face image of an individual A is altered to be recognized as a different desired individual B while, to the human eye, the manipulated image is still recognized as the initial individual A [19]. 
A third attack scenario named Master Faces or Master Face Attack, tries to generate images that impersonate a wide number of identities [17, 18, 22]. In this paper, we provide a unified view of these attacks; we show that other types of attacks that present a high risk for the FV systems are possible, and we present a novel attack framework capable of easily targeting specific types of these attacks and which outperforms state-of-the-art methods. We propose a generalized attack definition, named DodgePersonation Attack, that encompasses the Dodging, Impersonation, and Master Face Attacks while also introducing new attacks. The DodgePersonation Attack is formulated as follows: we select a set of human face images called the \(\mathcal{M}\)atchSet, and another disjoint set called the \(\mathcal{D}\)odgeSet. Our objective is to generate one or several input face images to the FV system, that collectively impersonate all identities in the \(\mathcal{M}\)atchSet, while effectively dodging recognition as any of the identities in the \(\mathcal{D}\)odgeSet. Furthermore, we propose an approach to carry out the DodgePersonation Attack. Moreover, we maintain control over the visual identity of the generated attack images, whereas the previous work did not have such control. Additionally, we aim for these input face images to be indistinguishable to the human eye, appearing as a single image of the same person. For the special case of the Master Face Attack, our results are significantly better than previous work [22]. In addition, we solve new attack scenarios emerging from the DodgePersonation Attack using the same framework. Our key contributions are as follows: 1. We provide a holistic view of attacks against FV systems by defining the DodgePersonation Attack, a generic attack that integrates various attack types. 2. We propose a taxonomy of different Adversarial Attacks against FV systems based on their target. 3. We introduce a novel algorithm named "One Face to Rule Them All" to deploy the DodgePersonation Attack on FV systems that exhibits state-of-the-art results while allowing complete control over the identity of the generated attack images. In the rest of the paper, we review the relevant work in this domain in Section 2, define the DodgePersonation Attack in Section 3, and present our new "One Face to Rule Them All" method in Section 4. We discuss our experimental settings in Section 5 and present and discuss our results in Section 6. Finally, Section 7 concludes the paper. ## 2 Related Work and Problem Motivation ### Face Verification Systems In general, automatic face recognition can be separated into two main categories: face identification (also known as closed-set face recognition) and face verification. The former involves classifying faces into a pre-determined set of identities, while the latter determines if two unseen identities during the training phase, represent the same person [1, 5, 15]. Therefore, FV is a zero-shot learning task, as the identities of the individuals are not known during the training phase [1]. This makes FV a more difficult and complex task when compared to face identification. Attacking an FV system is the focus of this study. As shown in Figure 1, current state-of-the-art solutions offer a three-step process for FV: i) face detection; ii) face mapping from image space to embedding space; and iii) face matching. 
For the first step, a commonly used method to detect faces automatically in a given image is the Multi-task Cascaded Convolutional Neural Networks (MTCNN) [31]. For the second step, a face mapper (FM) or face descriptor is used to map a face from the image space into a point in the embedding space. We refer to this point as the embedding corresponding to a face that is processed by an FM. The embedding space is a high-dimensional space where the feature vectors of the faces are projected. Usually, the output feature vector is L2-normalized. Current state-of-the-art FV systems are based on neural network FMs. Finally, in the third step corresponding to the face matching, the distance (e.g., Euclidean distance) between the embeddings of two faces is calculated. Two faces represent the same identity if the distance is below a given threshold. Otherwise, the faces are considered to be of two different persons, i.e., they belong to two different identities. One of the key factors in the formation of an embedding space is the loss function used for training the FM. DeepFace [26], one of the earliest solutions, used a vanilla softmax loss function. In addition to the three-step FV process mentioned above, it includes a step known as face frontalization after face detection. The method achieved 97.35% accuracy on the LFW dataset, achieving similar accuracy to human level (97.53%). Another early method, DeepID2 [24], and its extension DeepID2+ [25], improved upon DeepFace by employing a combination of softmax loss and a distance metric to minimize the distance between the same identity pairs and maximize the distance between different identity pairs. These methods achieved an accuracy of 99.15% and 99.47% respectively on the LFW dataset. A breakthrough in the field was the introduction of FaceNet [20], which used a triplet loss function for direct learning of an embedding space. This method further improved the performance, with accuracy reaching 99.63% on the LFW dataset. In the years following FaceNet's release, several other methods have been proposed to improve upon it. One such method [23] generalizes the triplet loss by allowing joint comparison among more than one negative example. Other methods, such as SphereFace [12], CosFace [29, 4, 28], and ArcFace [3], have used a combination of softmax loss and a marginal penalty loss to improve performance. ### Attacks on Face Verification Systems One of the key challenges in FV is the robustness of the methods against various types of attacks. These attacks can take the form of Physical Attacks, such as using masks or makeup to alter the appearance of a face, or Digital Attacks, such as manipulating images or videos [27]. An Adversarial Attack is a type of Digital Attack that involves adding small perturbations to an image in order to fool a model - in our case, an FV system. Adversarial Attacks can be categorised based on their specificity to non-targeted and targeted [27]. A non-targeted attack, also known as Dodging Attack, is designed to cause an FV system to fail to recognize the correct identity. Research in this area has also focused on methods to generate such attacks, such as [6, 8, 21]. A targeted attack, also known as Impersonation Attack, is designed to cause an FV to misidentify a specific individual. The research community has also focused on methods to generate these attacks, such as [2, 6, 21, 32, 6]. 
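All of the attacks above ultimately target the same decision rule: two images are declared the same identity when the distance between their embeddings falls below a calibrated threshold. A minimal sketch of that rule is given below; the `embed` function is a stand-in for a real face mapper (e.g., MTCNN alignment followed by a FaceNet-style network), and the threshold is a placeholder rather than a calibrated operating point.

```python
import numpy as np

THRESHOLD = 1.1  # placeholder; in practice calibrated for a target false acceptance rate

def embed(face_image, dim=128):
    """Stand-in for the face mapper FM. A deployed system would run detection and
    alignment (e.g., MTCNN) followed by a trained network; here the image is simply
    pushed through a fixed random projection and L2-normalized, purely for illustration."""
    x = np.asarray(face_image, dtype=float).ravel()
    rng = np.random.default_rng(0)                 # fixed seed -> deterministic projection
    emb = rng.standard_normal((dim, x.size)) @ x
    return emb / np.linalg.norm(emb)

def same_identity(face_a, face_b, threshold=THRESHOLD):
    """Verification decision: match if the embedding distance is below the threshold."""
    return np.linalg.norm(embed(face_a) - embed(face_b)) <= threshold

# Toy usage with a random "image"; real inputs would be aligned face crops.
face = np.random.rand(32, 32, 3)
print(same_identity(face, face))  # identical inputs trivially match (distance 0)
```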
Additionally, a recently emerging type of Adversarial Attack known as Master Face Attack is designed to cause an FV system to accept the input as a match to all identities, essentially serving as a face master-key. In [1] such images are crafted by minimizing the distance of a perturbed image embedding from a mini-batch of dataset embedding. The authors take an arbitrary face image as input and then craft the attacked image based on it. Finally, three works have been published in recent years that generate Master Faces by searching through the latent space of a pre-trained Generative Adversarial Network (GAN) [17, 18, 22]. Unlike in [1], these works do not have control over the image identity of the synthesized master face, i.e., the attack is not built based on an identity of their choice. Despite being considered Digital Attacks, these are not Adversarial Attacks since they do not alter a given face image by adding perturbations, but create a face that does not exist in reality with the aid of GANs. The field of FV systems and attacks has some gaps and unresolved issues in the existing related work. Research has mainly focused on individual attack types, neglecting potential connections among them. Our research aims to address this by proposing a holistic view of different attacks, consolidated into a singular problem. This new formulation has led to new attack scenarios not previously considered by the research community. Additionally, we propose a new flexible method capable of conducting various attack scenarios, with success rates that surpass the state-of-the-art. ### Problem Motivation As highlighted earlier, the research conducted in the field of FV systems has primarily revolved around exploring novel attack solutions. The existing body of work has primarily centered on the isolated investigation of individual attack types, thereby overlooking potential interconnections. This limited focus has hindered the development of a comprehensive understanding of the broader attack landscape. **DodgePersonation Attack.** To bridge this gap, we propose an attack formulation that **integrates distinct attack types, offering a unified perspective of FV attacking scenarios**. This formulation consolidates a diverse set of attacks that have been addressed in isolation into a singular definition, with each instance representing a distinct manifestation of an attack. Our approach uses two distinct sets of face images, namely \(\mathcal{M}\)atchSet and \(\mathcal{D}\)odgeSet. The primary aim is to deceive the FV system by presenting one or multiple inputs that are collectively recognized as identity matches for individuals in \(\mathcal{M}\)atchSet, while concurrently evading identification as any member of \(\mathcal{D}\)odgeSet. The fact that we might want to match a set of identities seems more intuitive than the condition of having to evade another set of identities. We illustrate why this case is indeed relevant through a recent case in the news. A company allegedly used facial recognition to prevent a lawyer from entering their venues because that lawyer's firm was representing clients involved in a litigation against that company1. This serves as an example of an attack scenario where our proposed system would bypass such restrictive measures. In this scenario, the targeted lawyer, who is denied access, could input their own face into our system and add it to the \(\mathcal{D}\)odgeSet. 
The system would generate an output image that resembles the attacker's face but remains unrecognized by the facial recognition system. Additionally, the targeted lawyer has the option to include both their own face and the faces of their coworkers in the \(\mathcal{D}\)odgeSet. This serves the purpose of not only concealing their own identity but also avoiding being identified as any other individuals who they suspect may also be denied access. Footnote 1: Source: MSG probed over the use of facial recognition to eject lawyers from show venues Figure 1: Three-step FV process. First step: automatic detection of faces in a given image using the MTCNN method. Second step: map the faces from the image space to the embedding space using a neural network FM. Third step: face matching by calculating the distance between two face’s embeddings and determining if they represent the same identity based on a threshold. We introduce the concept of **Source Face**, which represents the user-input face image used to initiate the attack. Carrying out an attack involves generating one or multiple variations of the Source Face, referred to as **Attack Faces**, for deceiving the FV system and meeting the attack constraints. These image variations are intentionally crafted to be nearly imperceptible to the human eye. The minimum number of Attack Faces required from a given Source Face depends on the specific attack scenario (which we will discuss in Section 3.2) and the quality and performance of the FM used. For instance, in the case of conducting a Master Face Attack, the more advanced the FM, the greater the number of Attack Faces required to cover most of the provided identities. In practice, we have found that even with a relatively small number of Attack Faces, a significant majority of the identities can be covered. We elaborate on this aspect in greater detail in the subsequent sections. ### Threat Model #### 2.4.1 Attacker's Objective The attacker's objective is to generate one or multiple face images that match as many identities as possible from the \(\mathcal{M}\)atchSet, while successfully evading detection as any member of the \(\mathcal{D}\)odgeSet. The attacker might have secondary objectives, such as generating visually identical face images or generating images visually similar to pre-selected ones. #### 2.4.2 Attacker's Capabilities and Constraints In this study, we consider a white-box attack [27] scenario where the attacker has unrestricted access to the target model. However, the attacker is limited by the requirement to create one or several human-like face images (Attack Faces) that can successfully pass the face detection stage of the FV process. Consequently, the attacker must generate images that meet their goal while still looking like human faces. Furthermore, the attacker can be limited to crafting the face images to closely resemble a specific user-input identity (i.e., the Source Face). #### 2.4.3 Possible Attack Scenarios Varying populations of \(\mathcal{M}\)atchSet and \(\mathcal{D}\)odgeSet, result in distinct attack scenarios. As an illustration, consider the scenario where a wanted criminal, acting as an attacker, intends to fraudulently apply for a license using an online form. Their objective is to impersonate another individual and successfully evade recognition. 
To achieve this, the attacker needs to generate an image or set of images that, in the eyes of the FV system, resemble the victim's face and differ from their own, while to the human eye, these images appear to be of the attacker. This can be represented as a \(\mathcal{M}\)atchSet containing the victim's face image and an \(\mathcal{D}\)odgeSet containing the attacker's face image, with the attacker's face image serving as the Source Face. More scenarios are detailed in Section 3.2. #### 2.4.4 Targeted Attack Model We utilize FaceNet, a neural network-based FV system, as the target for our attack. ## 3 The _DodgePersonation Attack_ In this section, we propose a taxonomy for FV system attack scenarios. We first define some terms and notations. Then, we introduce the concepts of impersonation and dodging for face images and define them mathematically. Finally, we introduce two sets of face images: \(\mathcal{M}\)atchSet and \(\mathcal{D}\)odgeSet, which form the foundational pillars of the DodgePersonation Attack. These sets are instrumental in defining the problem and serve as a basis for categorizing various attacks within our proposed taxonomy. ### DodgePersonation Attack Definition and Taxonomy Let us assume that \(\mathcal{F}\) is the domain of all face images and \(I\) is the set of all identities. Let \(Ident():\mathcal{F}\to I\) represent a function that maps a face image into its identity. If we consider, for instance, three face images, \(\{i_{1},i_{2},i_{3}\}\subset\mathcal{F}\) of the same identity \(I_{1}\in I\), then \(Ident(i_{1})=Ident(i_{2})=Ident(i_{3})=I_{1}\). Let us assume that a face image is represented by 3 squared matrices \(M=m\times m\), each representing a color channel. Consider a given FV system, that is decomposed into two elements: (i) a face mapper function that we represent by \(FM()\); and (ii) a distance function that we represent by \(Dist()\). The system uses \(FM():M^{3}\rightarrow\mathbb{R}^{p}\) to project a given face image into the embedding space. When an image \(x\) (or set of images \(S\)) is mapped from the face image space into the embedding, it will be represented as \(\tilde{x}\) (or \(\tilde{S}\)). This notation will be used for all objects in the embedding space. \(Dist():\mathbb{R}^{p}\rightarrow\mathbb{R}\) is used to calculate the distance between two points in the embedding space as a hard similarity measure. The matching or mismatching of the identities of two face images in the embedding space is decided using a threshold \(th\in\mathbb{R}\). As described in Section 2, \(FM()\) uses a neural network architecture to extract features from the images. **Definition 1** (Impersonation and Dodging): _Let \(A,B\in\mathcal{F}\). We say Face Image A **impersonates** Face Image B if:_ \[Dist(FM(A),FM(B))\leq th \tag{1}\] _If the condition is not met, we say Face Image A **dodges** Face Image B._ **Definition 2** (\(\mathcal{M}\)atchSet and \(\mathcal{D}\)odgeSet Face Image Sets): _Let \(\mathcal{M}\)atchSet \(=\{m_{1},m_{2},\cdots,m_{k-1},m_{k}\}\subset\mathcal{F}\) represent a set of images, and let \(\mathcal{D}\)odgeSet \(=\{d_{1},d_{2},\cdots,d_{l-1},d_{l}\}\subset\mathcal{F}\) represent another set of images. In the rest of the paper, we will assume that sets of face images for a given problem are referred to as meeting certain constraints. 
Specifically, these sets must fulfill the following requirements:

* \(k\geq 0\)
* \(l\geq 0\)
* \(\mathcal{M}\)atchSet \(\cap\)\(\mathcal{D}\)odgeSet \(=\emptyset\)

Having \(\mathcal{M}\)atchSet and \(\mathcal{D}\)odgeSet defined, we can now specify the DodgePersonation Attack and categorize the different attacks into a taxonomy.

**Definition 3** (DodgePersonation Attack): _Consider two sets \(\mathcal{M}\)atchSet and \(\mathcal{D}\)odgeSet. The DodgePersonation Attack is defined as the following multi-objective optimization problem. The objective is to produce a collection of images called "Attack Faces", represented as \(\mathbf{X}=\{x_{1},x_{2},\ldots,x_{n}\}\). These images must conform to the requirement of being identified as faces by the MTCNN face detector, allowing them to proceed to the subsequent FV steps successfully. For \(x_{i}\in X\), let \(M_{x_{i}}\) and \(D_{x_{i}}\) be defined as:_

\[\begin{cases}M_{x_{i}}:=\{m\in\mathcal{M}\text{atchSet}:\text{Dist}(FM(x_{i}),FM(m))\leq th\}\\ D_{x_{i}}:=\{d\in\mathcal{D}\text{odgeSet}:\text{Dist}(FM(x_{i}),FM(d))\leq th\}\end{cases} \tag{2}\]

_Then, our multi-objective optimization problem is defined as:_

\[\begin{cases}\max\Big|\bigcup\limits_{x_{i}\in X}M_{x_{i}}\Big|\\ \min\Big|\bigcup\limits_{x_{i}\in X}D_{x_{i}}\Big|\end{cases} \tag{3}\]

### DodgePersonation Attack taxonomy

Based on the cardinality of \(\mathcal{M}\)atchSet and \(\mathcal{D}\)odgeSet, the DodgePersonation Attack can be divided into multiple types of attacks. These types are displayed in our proposed taxonomy in Figure 2. The three top branches on the first layer characterize the number of identities to be impersonated (i.e., they correspond to different \(\mathcal{M}\)atchSet sizes). At this level, we can consider the three following scenarios: (1) no impersonation; (2) a single identity impersonation; and (3) multiple identities impersonation. In the second layer, we consider the number of identities to dodge from, which is related to the \(\mathcal{D}\)odgeSet size. This layer consists of three branches: (1) no identity to dodge from; (2) a single identity to dodge from; and (3) multiple identities to dodge from. Depending on the desired number of identities to impersonate and dodge, nine possible scenarios emerge from this taxonomy. The research community commonly refers to scenarios where the \(\mathcal{M}\)atchSet size is zero and the \(\mathcal{D}\)odgeSet size is one as None-Targeted Attack, Face De-Identification, or Dodging Attack, and where the \(\mathcal{M}\)atchSet size is one and the \(\mathcal{D}\)odgeSet size is zero as Targeted Attack [27]. Large \(\mathcal{M}\)atchSet sizes, i.e., containing a large number of identities, where the \(\mathcal{D}\)odgeSet size is zero, are referred to as Master Faces or Master Face Attack [18]. Our proposed taxonomy also includes novel scenarios with no established names in the literature because they have not yet been considered. Taking a closer look at FV systems from the perspective of an attacker, we can explore the conditions under which the systems can be deceived. The objective of the attacker is to create a set of images that _look like them_ to the human eye (i.e., the Source Face is a face image of the attacker) but are identified as a different identity or identities by the FV system. Below, we present a list of real-world scenarios that correspond to specific examples of the cases in our proposed taxonomy, shown in Figure 2.
**Null Attack.** The first scenario, which we call the Null Attack, occurs when there is no intention to impersonate or dodge any identity. This is a trivial case where any valid input is a solution. **Single Identity Dodging.** In a specific attack scenario, an individual aims to shield their identity from being identified by an online FV system used on social media, thereby seeking to evade the system's facial identity-check mechanism. To do so, the attacker needs to create an image that does not match their face. This scenario can be accomplished by creating an empty \(\mathcal{M}\)atchSet and a \(\mathcal{D}\)odgeSet that contains the attacker's own face im Figure 2: DodgePersonation Attack Taxonomy proposed for categorizing face verification system attack scenarios. age. This scenario is named None-Targeted Attack, Face Didertification, or Dodging Attack by the research community [27]. **Multi Identity Dodging.** In this scenario, the attacker, who is a wanted criminal, aims to conceal their identity and ensure that the altered image does not resemble any other potentially sensitive identities, such as other criminals. To minimize the likelihood of being recognized as either themselves or another criminal, the attacker must generate an image that deviates from multiple identities, including their own. This can be achieved by defining a \(\mathcal{M}\)atchSet with no members and a \(\mathcal{D}\)odgeSet consisting of the images of those identities. **Single Identity Impersonation.** In another attack scenario, an attacker may wish to gain access to someone else's smartphone by impersonating the victim's identity 2. To achieve this, the attacker would need to create a set of images that match the victim's face, which can be formulated as a \(\mathcal{M}\)atchSet with one member being the face image of the victim with an empty \(\mathcal{D}\)odgeSet. The research community has given the name Targeted Attack to this scenario, as documented in [27]. Footnote 2: In this given scenario, we consider having direct access to the smartphone’s FV system. Moving forward, a logical progression for this research would involve exploring Physical Attacks, which represent a more realistic setting for this particular scenario. **Multi Identity Impersonation.** This situation can arise when the attacker aims to deceive the online FV system of a portal in order to obtain unauthorized access. In this scenario, the attacker possesses incomplete knowledge regarding the authorized employees, lacking awareness of which employees have access privileges and which ones do not. To increase the chances of success, the attacker needs to impersonate all possible individuals who might have proper access. This scenario can be accomplished by creating a \(\mathcal{M}\)atchSet that contains several members, which are the face images of the employees. In a similar scenario, the attacker may want to gain access to any arbitrary smartphone by fooling its FV system. To achieve this, the attacker needs to create a set of images that look like a large number of identities. This scenario can be formulated as a \(\mathcal{M}\)atchSet containing face images of _a large number of identities_, while keeping \(\mathcal{D}\)odgeSet empty. This scenario is an extreme case of the previous one and is known as a Master Face Attack [18]. **Single Impersonation and Single Dodging.** Another scenario arises when we need to satisfy the requirements of both **Single Identity Dodging** and **Single Identity Impersonation at the same time**. 
In a hypothetical scenario, we can envision a situation where a wanted criminal, acting as an attacker, intends to utilize their own image on an online system equipped with an FV mechanism to gain entry. In this case, the attacker seeks to access the system by impersonating an authorized individual. However, the attacker also wants to make sure that their own identity is not recognized. To achieve this, the attacker needs to create a set of images that match the victim's face and do not match their own face. This can be formulated as a \(\mathcal{M}\)atchSet with one member being the face image of the victim and an \(\mathcal{D}\)odgeSet with the attacker's own face image. In the same vein, when the attacker intends to hide their identity and prevent the modified image from resembling any other critical identities, the scenario of **Single Impersonation and Multi Dodging** arises. Additionally, in situations where the attacker has limited information about the individuals they want to impersonate and is unsure of who has access or not, and also needs to avoid being recognized as themselves, the scenario of **Multi-Impersonation and Single Dodging** arises. **Multi Impersonation and Multi Dodging.** Lastly, if an organization allows access to certain individuals, but alerts the police if certain wanted individuals are recognized, the attacker might want to impersonate the authorized individuals and avoid being recognized as one of the wanted individuals. To increase their chance of gaining access, they can use a \(\mathcal{M}\)atchSet that contains face images of several authorized individuals who might have access, while using a \(\mathcal{D}\)odgeSet that contains face images of the wanted individuals. This way, the attacker can minimize the risk of getting caught while maximizing the chances of entering the facility. ## 4 One Face to Rule Them All Method This section presents the **One Face to Rule Them All Algorithm** that is used to perform a DodgePersonation Attack. Our algorithm consists of two phases: Phase 1 - the embedding space search; and Phase 2 - the Attack Face generation. In Phase 1, we map the \(\mathcal{M}\)atchSet and \(\mathcal{D}\)odgeSet into the embedding space \(\mathcal{E}\) using the \(FM\) and employ a genetic algorithm (GA) with a special-purpose fitness function to search for a point in the embedding space that satisfies the attack's constraints. During Phase 2, we modify the user's input face image (the Source Face) to ensure that when it is mapped to the embedding space using \(FM\), it is positioned in close proximity to the point discovered in Phase 1. This way, the manipulated user input image satisfies the constraints of the \(\mathcal{M}\)atchSet and \(\mathcal{D}\)odgeSet. We must highlight that in our algorithm we introduce an additional constraint to the DodgePersonation Attack in Definition 3. We require that the set of Attack Faces obtained, i.e., the manipulated images generated by our algorithm, resemble a single 3 predetermined identity recognizable to humans (referred to as the Source Face). ### Phase 1 - Embedding Space Search OverviewThe goal of Phase 1 is to obtain one or more points in the embedding space that can correspond to the Attack Face(s) we want to find. To achieve this we map the \(\mathcal{M}\)atchSet and \(\mathcal{D}\)odgeSet using the FV system from the face space into the embedding space where we carry out a search to determine the desired points. 
Recall that we use a bar on top of the elements that are represented in the embedding space, thus, after this step, we will have \(\overline{\mathcal{M}}\)atchSet and \(\overline{\mathcal{D}}\)odgeSet. We propose two key steps for determining the desired points: (i) constructing clusters over \(\overline{\mathcal{M}}\)atchSet; and (ii) searching the embedding space around each cluster using a GA with a specially developed fitness function. Figure 3 provides an overview of Phase 1. From Face Space to Embedding Space.The faces in \(\mathcal{M}\)atchSet and \(\mathcal{D}\)odgeSet are passed through the MTCNN algorithm, normalized, and fed into the \(FM\) function, as shown in Figure 1. Cluster Generation.The final objective is to locate a set of points, \(\overline{x_{i}}\), in the embedding space \(\mathcal{E}\) that satisfy two conditions: (i) collectively are near all members of the \(\overline{\mathcal{M}}\)atchSet; and (ii) \(\overline{x_{i}}\) are far from all members of \(\overline{\mathcal{D}}\)odgeSet. However, a single point may not meet the requirements since the points in \(\overline{\mathcal{M}}\)atchSet might be dispersed throughout the embedding space. Therefore, we create C clusters of the \(\overline{\mathcal{M}}\)atchSet cases (\(\overline{\mathcal{M}}\)atchSet\({}_{1}\), \(\cdots\), \(\overline{\mathcal{M}}\)atchSet\({}_{C}\)), by applying the K-Means algorithm [13]. The number of clusters is a hyper-parameter of the system. Genetic Algorithm and Proposed Fitness Function.For each cluster \(\overline{\mathcal{M}}\)atchSet\({}_{i}\), we want to solve the following multi-objective optimization problem: \[max|\{\overline{m}\in\overline{\mathcal{M}}\text{atchSet}_{i}:Dist(\overline{x }_{i},\overline{m})\leq th\}| \tag{4}\] \[min|\{\overline{d}\in\overline{\mathcal{D}}\text{odgeSet}:Dist(\overline{x} _{i},\overline{d})\leq th\}| \tag{5}\] We employ the LM-MA-ES [14] GA to identify a point in the high-dimensional embedding space satisfying our requirements. We selected this GA because it was shown to be effective for searching high-dimensional spaces [22]. We evaluate the effectiveness of a point in meeting the requirements by minimizing our proposed fitness function, the DodgePersonation Fitness function (cf. Definition 5), consisting of a positive and a negative component that corresponds to the objectives in Equations 4 and 5. The DodgePersonation Fitness function is a weighted sum of these two components that are based on the \(\mathit{DPloss}\) (cf. Definition 4) that is applied to either the \(\overline{\mathcal{M}}\)atchSet\({}_{i}\) or \(\overline{\mathcal{D}}\)odgeSet. Our \(\mathit{DPloss}\) is calculated for a target case \(\overline{a}\) and a given set \(\overline{S}\). It takes into account the number of cases in \(\overline{S}\) that are farther away than the threshold to the case \(\overline{a}\), and the sum of the distances between case \(\overline{a}\) and all cases in \(\overline{S}\). 
**Definition 4** (DPloss): _DPloss is a normalized linear combination of \(\mathit{DodgeCount}(\overline{a},\overline{S},th)\) and \(\mathit{DistSum}(\overline{a},\overline{S})\):_

\[\mathit{DPloss}(\overline{a},\overline{S},th,m)=\frac{m\cdot\sum\limits_{\overline{s}\in\overline{S}}\mathds{1}_{\{Dist(\overline{a},\overline{s})>th\}}(\overline{s})+(1-m)\cdot\sum\limits_{\overline{s}\in\overline{S}}\mathit{Dist}(\overline{a},\overline{s})}{|\overline{S}|}\]

_where \(m\in[0,1]\) is the weight factor, and the indicator function \(\mathds{1}_{\{Dist(\overline{a},\overline{s})>th\}}(\overline{s})\) returns 1 when the condition \(\mathit{Dist}(\overline{a},\overline{s})>th\) is satisfied and 0 otherwise._

We normalize the result to ensure that the outcome is not related to the size of \(\overline{S}\). In our initial experiments, we observed that simply using the indicator function component was insufficient for the GA to select suitable individuals for the next generation. We solved that problem by incorporating the distances between the searched point and the \(\overline{S}\) members.

Figure 3: Overview of One Face to Rule Them All Phase 1: Embedding Space Search. The \(\mathcal{M}\)atchSet and \(\mathcal{D}\)odgeSet are mapped from the face space into the embedding space. Then, C clusters are built using \(\overline{\mathcal{M}}\)atchSet. For each cluster, one point is sought using our genetic algorithm such that it is close to the cluster members and distant from the \(\overline{\mathcal{D}}\)odgeSet members.

The DodgePersonation Fitness function of our GA is defined as a weighted sum of the positive \(\mathit{DPloss}\) applied to each \(\overline{\mathcal{M}\text{atchSet}_{i}}\) cluster and the negative \(\mathit{DPloss}\) (i.e., multiplied by -1) applied to the \(\overline{\mathcal{D}\text{odgeSet}}\) (cf. Definition 5).

**Definition 5** (DodgePersonation Fitness Function): _Given a point \(\overline{x}\) within the embedding space \(\mathcal{E}\), let us consider \(\overline{\mathcal{M}\text{atchSet}_{i}}\), a cluster of \(\overline{\mathcal{M}\text{atchSet}}\), and the \(\overline{\mathcal{D}\text{odgeSet}}\). Let \(th1\) and \(th2\) be the decision thresholds for \(\overline{\mathcal{M}\text{atchSet}_{i}}\) and \(\overline{\mathcal{D}\text{odgeSet}}\), respectively. Let \(\alpha\) and \(\beta\) be the weights used on the DPloss components, and \(\gamma\) be a weight parameter of the fitness function. The DodgePersonation Fitness function is defined as follows:_

\[\begin{split} fitness(\overline{x},\overline{\mathcal{M}\text{atchSet}_{i}},\overline{\mathcal{D}\text{odgeSet}},th1,th2,\alpha,\beta,\gamma)=\\ \gamma\cdot\mathit{DPloss}(\overline{x},\overline{\mathcal{M}\text{atchSet}_{i}},th1,\alpha)+\\ (1-\gamma)\cdot(-\mathit{DPloss}(\overline{x},\overline{\mathcal{D}\text{odgeSet}},th2,\beta))\end{split}\]

We must highlight that our DodgePersonation Fitness function uses two different thresholds for the positive and negative \(\mathit{DPloss}\), allowing them to be adjusted independently, which provides a more flexible method and more favorable results. Algorithm 1 shows the One Face to Rule Them All Phase 1 pseudo-code for carrying out the DodgePersonation Attack by searching a point or set of points in the embedding space.
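A compact sketch of Definitions 4 and 5 is given below, assuming embeddings are available as numpy arrays and using the Euclidean distance as \(Dist\); the weights and thresholds in the usage lines are placeholders rather than the values used in the experiments.

```python
import numpy as np

def dp_loss(a, S, th, m):
    """DPloss (Definition 4): weighted mix of (i) how many points of S are farther than
    the threshold from a, and (ii) the summed distances, normalized by |S|."""
    d = np.linalg.norm(S - a, axis=1)            # Dist(a, s) for every s in S
    return (m * np.sum(d > th) + (1 - m) * d.sum()) / len(S)

def dodgepersonation_fitness(x, match_cluster, dodge_set, th1, th2, alpha, beta, gamma):
    """Fitness (Definition 5): pull x toward the MatchSet cluster and push it away from
    the DodgeSet. Lower is better; the genetic algorithm minimizes this value."""
    positive = dp_loss(x, match_cluster, th1, alpha)
    negative = -dp_loss(x, dodge_set, th2, beta)
    return gamma * positive + (1 - gamma) * negative

# Toy usage with random 512-dimensional embeddings (placeholder weights/thresholds).
rng = np.random.default_rng(0)
x = rng.standard_normal(512)
match_cluster = rng.standard_normal((20, 512))
dodge_set = rng.standard_normal((5, 512))
print(dodgepersonation_fitness(x, match_cluster, dodge_set,
                               th1=1.1, th2=1.1, alpha=0.5, beta=0.5, gamma=0.7))
```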
```
Input: \(x\): Source Face; \(\overline{y}\): point in the embedding space returned in Phase 1; \(\epsilon\): maximum change allowed for each pixel of the normalized image; \(iterations\): number of iterations;
Output: \(x_{adv}\): attack image
while \(x\neq MTCNN(x)\) do
    \(x\gets MTCNN(x)\);
end while
\(x_{adv}\gets x\);
for \(iter\) in \(iterations\) do
    \(\overline{x}_{adv}\gets FM(x_{adv})\);
    \(loss\gets Dist(\overline{x}_{adv},\overline{y})\);
    \(gradients\leftarrow\frac{\partial\,loss}{\partial\,x_{adv}}\);
    \(x_{adv}\gets Adam(x_{adv},gradients)\);
    \(x_{adv}\gets clip(x_{adv},[x-\epsilon,\,x+\epsilon])\);
end for
return \(x_{adv}\);
```
**Algorithm 2** One Face to Rule Them All Algorithm

### Phase 2 - Attack Face Generation
Once we have obtained a set of points (\(\overline{x_{i}}\)) in the embedding space, we proceed to generate the corresponding Attack Face images. Starting with a Source Face \(x\), our objective is to modify the image in such a way that, when passed through the FM, it is mapped closely to the Phase 1 point (\(\overline{x_{i}}\)). We developed a method for the FV task that builds upon any pre-selected source face image \(x\), changing it as little as possible while simultaneously forcing the changed image to be matched to the previously obtained point \(\overline{x_{i}}\). Drawing inspiration from renowned attack techniques like Projected Gradient Descent [16], the key idea of the Attack Face Generation method is to calculate the derivatives of the FM function with respect to the input image \(x\) and alter the image to decrease the loss value, which is \(Dist(FM(x),\overline{x_{i}})\). As far as we know, we are the first to apply this mapping of the embedding space into a specific image on the face image space, which gives full control over the initial identity in the face image. Algorithm 2 shows the pseudo-code of the Attack Face Generation procedure. We first crop the face out from the Source Face \(x\) using MTCNN [31] as the face detector. More precisely, we feed \(x\) to MTCNN and resize it to match the input shape of the FM. This process is repeated until MTCNN outputs the exact same input without trimming, i.e., the input and output of MTCNN have the same exact dimensions. Then, the cropped version of \(x\) is passed through a given number of iterations to embed the attack in the image. In each iteration, the algorithm evaluates the distance between the image mapped into the embedding space, \(FM(x)\), and the target point in the embedding space, \(\overline{x}\). Then, the derivative of FM with respect to the image is calculated.
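To make the loop above concrete, here is a minimal PyTorch-style sketch of the attack-generation step. The face matcher `fm`, the learning rate, the default \(\epsilon\), and the tensor shapes are illustrative assumptions rather than the authors' exact implementation; the MTCNN cropping loop is omitted, so `x` is assumed to be an already-detected face normalized to \([-1, 1]\):

```python
import torch

def generate_attack_face(x, y_target, fm, eps=0.04, iterations=1000, lr=0.01):
    """Nudge a pre-cropped, normalized source face `x` (C, H, W) toward the
    embedding-space target `y_target` from Phase 1, keeping each pixel within
    eps of the original image (epsilon-ball projection, as in the algorithm)."""
    x_orig = x.detach().clone()
    x_adv = x.detach().clone().requires_grad_(True)
    optimizer = torch.optim.Adam([x_adv], lr=lr)     # Adam applied to the image
    for _ in range(iterations):
        optimizer.zero_grad()
        emb = fm(x_adv.unsqueeze(0)).squeeze(0)      # FM(x_adv)
        loss = torch.norm(emb - y_target, p=2)       # Dist(FM(x_adv), y_target)
        loss.backward()
        optimizer.step()
        with torch.no_grad():                        # project back to the eps-ball
            x_adv.clamp_(min=x_orig - eps, max=x_orig + eps)
            x_adv.clamp_(min=-1.0, max=1.0)          # stay in the normalized range
    return x_adv.detach()
```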
Although the Adam optimizer [10] is typically used for updating neural network weights to minimize a loss, we have repurposed it to apply the obtained gradients to the image. We also impose a constraint on the amount of change applied to control the difference (\(\epsilon\)) between the initial image and the altered one. This method provides a final modified version of \(x\) that is visually very similar to the initial source image. An overview of the One Face to Rule Them All Phase 2 method is shown in Figure 4. ## 5 Experimental Setup In this section, we evaluate our methodology for deploying DodgePersonation Attacks by using the One Face to Rule Them All Algorithm. We first describe the dataset used in our experiments, followed by the implementation details of our experimental setting. ### Dataset We conducted our experiments on the LFW dataset [9], which is one of the standard datasets for FV. This dataset contains 13,233 human images of 5,749 identities. We used the funneled version of the dataset. We obtained the FV decision threshold for a false acceptance rate of 0.001 by using the Euclidean distance on LFW's provided training set, which contains 1100 matching and 1100 mismatching pairs. Unless explicitly stated otherwise, the remaining experiments utilize a random subset of 5,749 images, with each identity in the dataset represented by a single image. The dataset, along with our code, is accessible from our GitHub repository. Footnote 4: False Acceptance Rate, a metric commonly used to evaluate FV, represents the rate at which embeddings of different subjects are incorrectly matched as the same person. ### Evaluation Metrics The evaluation of the effectiveness of our approach involves calculating the coverage score on a given set of images \(S\), given a set of Attack Faces \(X\), as follows: \(\textit{coverage}=\frac{\left|\{s\in S:\exists x\in X,\ \textit{Dist}(FM(x),FM(s))<th\}\right|}{\left|S\right|}\). The coverage is the percentage of face images from \(S\) that are matched by at least one image in the set \(X\) of Attack Faces. We calculate the coverage of the generated Attack Faces on both the \(\mathcal{M}\)atchSet and the \(\mathcal{D}\)odgeSet, which we aim to maximize and minimize, respectively. ### Implementation Details We randomly chose two disjoint sets of identities from the LFW dataset based on the given sizes of \(\mathcal{M}\)atchSet and \(\mathcal{D}\)odgeSet. One image per identity was chosen. If we use multiple images for each identity and assume that the FM accurately groups these images in a nearby region of the embedding space, we are providing additional points to the Phase 1 Embedding Space Search that are close to each other. This makes it easier for the search algorithm to find an appropriate point in the embedding space. As a result, having this extra information allows the algorithm to produce more accurate outcomes. In contrast, when we opt to use only one image per identity, we are choosing a potentially more challenging situation. We used the default settings for MTCNN and resized the images into the input shape of the FM using the PIL library with the BOX resampling algorithm. The input shape is 160x160, and the input images are normalized from \([0,255]\) to \([-1,+1]\) as required by FaceNet [20], the FM we used. FaceNet provides a mapping from the image space into a compact Euclidean space and is trained on triplets of faces obtained from the CASIA-WebFace dataset [30] using a triplet loss.
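For clarity, the coverage score defined in the evaluation metrics above can be computed with a few lines of NumPy, assuming the FM embeddings of the attack images and of the evaluation set have already been extracted; names and shapes below are illustrative:

```python
import numpy as np

def coverage(attack_embs, set_embs, th):
    """Fraction of faces in `set_embs` (n_faces, d) matched by at least one
    attack embedding in `attack_embs` (n_attacks, d) under threshold `th`."""
    dists = np.linalg.norm(set_embs[:, None, :] - attack_embs[None, :, :], axis=-1)
    return float(np.mean((dists < th).any(axis=1)))
```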
Images are mapped onto the surface of a hyper-sphere, where each point in the embedding space has a fixed dimension of 512.

Figure 4: Overview of One Face to Rule Them All Phase 2: Attack Face Generation. The source image is mapped into the embedding space and modified to be in the proximity of a point determined in Phase 1. Using this method, we can conduct attacks on any given face image with control over the amount of change applied.

For the LM-MA-ES [14] algorithm, we set the population size to 100 and the number of generations to 1000. If not specifically mentioned, the values for parameters \(\alpha\), \(\beta\), and \(\gamma\) were set to 0.99, 0.99, and 0.9, respectively. Additionally, unless otherwise specified, the decision threshold values for th1 and th2 have been set to approximately 1.055, representing a false acceptance rate of 0.001. This particular false acceptance rate was chosen to replicate the testing conditions described in [22]. After the manipulation of each generation, all individuals were L2-normalized and assessed by the fitness function. The best 100 new individuals were chosen to replace the previous population. For Phase 2, we used the Euclidean distance as the loss for mapping the output. The training iterations were set to 1,000. This value was obtained empirically after an initial set of experiments. ## 6 Results and Discussion This section presents the results of the experiments to assess the performance of our proposed One Face to Rule Them All Algorithm on the multiple scenarios defined in our taxonomy. We also present two ablation studies on parameters of our proposed DodgePersonation Fitness function. ### Results for \(\mathcal{M}\)atchSet and \(\mathcal{D}\)odgeSet of varying size In this set of experiments, we considered 10 clusters in \(\mathcal{M}\)atchSet and evaluated our solution with \(\mathcal{M}\)atchSet sizes of 10, 100, 500, 1000, 2500, and \(\mathcal{D}\)odgeSet sizes of 0, 1, 2, 3, 10, 100, 1000, 2500. \(\mathcal{M}\)atchSet and \(\mathcal{D}\)odgeSet are created by randomly selecting elements from the LFW dataset. These sets are mapped into the embedding space to obtain \(\overline{\mathcal{M}}\)atchSet and \(\overline{\mathcal{D}}\)odgeSet. We apply the One Face to Rule Them All Algorithm to find one point (Attack Face) in each cluster that satisfies the problem constraints. We evaluated the coverage of the generated points in the embedding space at Phase 1 (in \(\overline{\mathcal{M}}\)atchSet and \(\overline{\mathcal{D}}\)odgeSet) and the coverage of the corresponding generated Attack Faces at Phase 2 (in \(\mathcal{M}\)atchSet and \(\mathcal{D}\)odgeSet). This procedure is repeated 5 times. The average results are presented in Figure 5. The coverage results of Phase 1 and Phase 2 are shown in blue and orange, respectively. Each subplot represents a specific scenario corresponding to a combination of a given \(\mathcal{M}\)atchSet and \(\mathcal{D}\)odgeSet size (e.g., the bottom right subplot shows the coverage results on a \(\mathcal{M}\)atchSet with size 2500 and a \(\mathcal{D}\)odgeSet of size 2500). In each subplot, the left side represents the coverage on \(\overline{\mathcal{M}}\)atchSet (Phase 1, in blue) and \(\mathcal{M}\)atchSet (Phase 2, in orange), and the right side represents the coverage on \(\overline{\mathcal{D}}\)odgeSet (Phase 1, in blue) and \(\mathcal{D}\)odgeSet (Phase 2, in orange).
The optimal outcome for both Phase 1 and Phase 2 in each subplot is represented by a coverage of 100% on the \(\mathcal{M}\)atchSet (left) and a coverage of 0% on the \(\mathcal{D}\)odgeSet (right), which is a diagonal line drawn from the top left corner to the bottom right corner of the subplot (dashed line shown in the bottom right subplot). As an example, let us focus on the subplot with a \(\mathcal{M}\)atchSet size of 100 and a \(\mathcal{D}\)odgeSet size of 2500. In this case, on Phase 1, 66.33% of the \(\overline{\mathcal{M}}\)atchSet is covered, while only 0.13% of the \(\overline{\mathcal{D}}\)odgeSet is covered. In Phase 2, we obtain a coverage of 65.33% and 5.16% on the \(\mathcal{M}\)atchSet and \(\mathcal{D}\)odgeSet, respectively. We observe that the results of Phase 1 and Phase 2 are very similar given the almost overlapping lines. This demonstrates the effectiveness of the One Face to Rule Them All Phase 2, which successfully generates Attack Faces of the selected identity that are close to the embedding space point found during Phase 1. Overall, we observe that the experiments with a lower number of faces to match and to dodge (subplots close to the top left) exhibit near-optimal results. As the number of faces to match and dodge increases, the coverage of the \(\mathcal{M}\)atchSet decreases, and the coverage of the \(\mathcal{D}\)odgeSet increases. The scenario with 2500 faces to match and 2500 faces to dodge (bottom right subplot) is the most challenging scenario and the least performing one. Still, even in this extreme scenario, a coverage of approximately 51% is achieved on the \(\mathcal{M}\)atchSet on both phases, and a coverage of approximately 27% and 43% is achieved on the \(\mathcal{D}\)odgeSet on Phase 1 and 2, respectively. This is an excellent result given that we are able to match 1275 faces with only 10 attack images while dodging 1425 faces out of 2500. The detailed results are shown in Table 2 in Appendix A.4. We must highlight that Phase 2 tends to have lower performance when compared to Phase 1 results on the most challenging scenarios with higher sizes of \(\mathcal{M}\)atchSet and \(\mathcal{D}\)odgeSet, especially for the coverage of the \(\mathcal{D}\)odgeSet. Figure 6 illustrates this by examining the coverage results obtained with a fixed \(\mathcal{M}\)atchSet size of 500 and a \(\mathcal{D}\)odgeSet with size varying between 10 and 2500. The coverage of \(\mathcal{M}\)atchSet on both phases is higher when the size of \(\mathcal{D}\)odgeSet is lower, and it decreases with the increase of the \(\mathcal{D}\)odgeSet size. ### Multi Identity Impersonation or Master Face #### 6.2.1 Comparing One Face to Rule Them All algorithm against competitors An extreme scenario in Multi Identity Impersonation, also known as Master Face Attack, aims at impersonating a very large number of identities while considering an empty \(\mathcal{D}\)odgeSet. Recently, Shmelkin et al. [22] addressed the Master Face attack proposing a method to obtain a set of nine images generated using pre-trained GANs, which covered 43.82% of the 5749 identities in the LFW dataset5. However, the authors do not control the identity of the generated Master Faces. To compare our method with this attack, we recreated the precise testing setup in [22], which involved using FaceNet trained on CASIA-WebFace with a decision threshold that corresponds to a false acceptance rate of 0.001, and a total of 5749 identities from the LFW dataset as the \(\mathcal{M}\)atchSet. 
We selected one image of Albert Einstein for this experiment, but any other Source Face could have been used. Figure 7 shows the Attack Faces generated when applying our One Face to Rule Them All method. The original (Source Face) image is in a blue square, followed by the 9 Attack Faces generated. Our proposed One Face to Rule Them All method significantly improves the coverage achieved while allowing complete control over the identity of the nine generated images. We obtained a coverage of 58.5% with the 9 images presented while the competing method achieved only a coverage of 43.82% (also with 9 images but without any control over the identity of the generated images)6. This shows that our method can use any preferred Source Face, generating Attack Faces with state-of-the-art coverage. The Attack Faces generated have modifications that are almost imperceptible to humans, which strengthens the feasibility of the attack as it does not raise any alarms to humans. Moreover, the results show that the 9 images generated, although looking identical, cover different regions of the face space matching different identities. Footnote 6: We present a second experiment carried out under the same conditions in Appendix A.5 where we used a different Source Image of Albert Einstein and generated 9 Attack Faces with the One Face to Rule Them All method.

Figure 5: Coverage results for different \(\mathcal{M}\)atchSet and \(\mathcal{D}\)odgeSet sizes. Each plot shows the coverage on the \(\mathcal{M}\)atchSet on the left-hand side and the coverage on the \(\mathcal{D}\)odgeSet on the right-hand side. The coverage results on the two sets are connected by a line with a color corresponding to the phase where it was calculated.

Figure 6: Coverage results of a fixed \(\mathcal{M}\)atchSet with 500 faces and different \(\mathcal{D}\)odgeSet sizes.

#### 6.2.2 Analysis of the coverage of One Face to Rule Them All algorithm on Master Faces We carried out further experiments where we increased the number of Attack Faces generated to determine the number of images required to cover more than 90% of the members in \(\mathcal{M}\)atchSet. The results of this experiment are presented in Figure 8. We observed that increasing the number of clusters, and consequently the number of Attack Faces, leads to higher coverage rates. Remarkably, with only 80 Attack Face images, the coverage exceeded 90%. This experiment emphasizes the fact that with a relatively small number of images, we can efficiently cover a significant portion of the \(\mathcal{M}\)atchSet. This suggests that the FM might be clustering the images together in the embedding space. More importantly, our attack can successfully breach FV systems using a limited number of Attack Faces, highlighting the efficiency of our attack algorithm but also the fragility of these systems. The fact that with only 80 Attack Faces we match 90% of the cases shows that FV systems are not safe, being breached with a small number of trials. #### 6.2.3 Generalizability of One Face to Rule Them All Attack on Master Faces The goal of this set of experiments is to assess the generalizability of the Attack Faces generated using the One Face to Rule Them All method for the Master Face Attack scenario. To assess this, we investigate the coverage of the Attack Faces on **other identities** that were not encountered during the generation process. We build a \(\mathcal{M}\)atchSet with a set of 2,500 identities randomly selected and craft a set of 10 Attack Faces to impersonate those identities.
Then, we build a separate set of 2,500 identities, which we named \(\mathcal{U}\)nseenSet, containing only identities that were not present in the \(\mathcal{M}\)atchSet. Finally, the coverage of the 10 Attack Faces generated is evaluated on both \(\mathcal{M}\)atchSet and \(\mathcal{U}\)nseenSet. We repeated this test five times and report the average results. Our results show that with 10 Attack Faces, we achieve a coverage of 65.78% and 60.22% in the \(\mathcal{M}\)atchSet during Phase 1 and Phase 2, respectively. **Furthermore, the same 10 Attack Faces cover 58.25% and 56.9% of the \(\mathcal{U}\)nseenSet members in Phase 1 and Phase 2, respectively.** These findings provide strong evidence of the generalizability capability of our proposed One Face to Rule Them All method.

Figure 7: One Face to Rule Them All Algorithm for carrying out the Multi Identity Impersonation of 5749 identities using another image of Albert Einstein. The Original image (Source Face) is in a blue box. The remaining images are the Attack Faces, achieving a coverage of 58.5% of the identities. The previous method covered only 43.82% of the identities.

Figure 8: Coverage results for different numbers of clusters (or Attack Faces) on a \(\mathcal{M}\)atchSet containing 5,749 identities from the LFW dataset. (Source Face used shown in the plot)

### Fitness Function ablation studies #### 6.3.1 Sensitivity of \(\mathcal{D}\)odgeSet threshold Our proposed GA's DodgePersonation Fitness function (cf. Definition 5) takes into account two decision thresholds: \(th1\) for \(\mathcal{M}\)atchSet and \(th2\) for \(\mathcal{D}\)odgeSet. In our experiments, \(th1\) and \(th2\) are both set to 1.055, as explained in Section 5.3. In this experiment, we investigate the impact of varying \(th2\) on the coverage of \(\mathcal{M}\)atchSet and \(\mathcal{D}\)odgeSet. Our goal is two-fold: (i) understand if we can generate Attack Faces that are better at dodging the cases in \(\mathcal{D}\)odgeSet; and (ii) understand the impact of changing threshold \(th2\) on the \(\mathcal{M}\)atchSet coverage. We hypothesize that if the GA is forced to consider a wider margin on the \(\overline{\mathcal{D}\text{odgeSet}}\) by increasing \(th2\), then more points can be dodged, as the optimal points will be further away from this set's members. We tested the initial value of \(th2\) of 1.055 and included four other \(th2\) values representing an increase of \(th2\) by 3%, 4%, 5%, and 6%. We repeated these experiments 5 times using 1000 and 500 identities randomly selected for the \(\mathcal{M}\)atchSet and the \(\mathcal{D}\)odgeSet, respectively. The average coverage results on \(\mathcal{M}\)atchSet and \(\mathcal{D}\)odgeSet for Phase 1 and Phase 2 are displayed in Table 1. In Phase 1 and Phase 2, the coverage of the \(\mathcal{D}\)odgeSet when using the default value of 1.055 is 7.65% and 30.20%, respectively. Our special-purpose GA was not able to avoid 7.65% of the \(\overline{\mathcal{D}\text{odgeSet}}\) members in the embedding space, while the generated Attack Faces could not dodge 30.2% of the \(\mathcal{D}\)odgeSet cases. Increasing the \(\mathcal{D}\)odgeSet \(th2\) by 3% led to a significant decrease in the coverage percentages of the \(\mathcal{D}\)odgeSet in both phases, which confirms that increasing the threshold \(th2\) helps to keep the Attack Faces further away from the \(\mathcal{D}\)odgeSet cases. However, we observe that the \(\mathcal{M}\)atchSet coverage is negatively impacted by this change, showing a decreased coverage.
The \(\mathcal{M}\)atchSet coverage continues to decrease as the threshold \(th2\) increases while the coverage of the \(\mathcal{D}\)odgeSet tends to zero. After an increase of 4% in the \(th2\) only the \(\mathcal{M}\)atchSet coverage is being affected because the \(\mathcal{D}\)odgeSet coverage is already very close to the ideal value of zero. Therefore, we confirm that adjusting the \(\mathcal{D}\)odgeSet threshold ratio can help to achieve better dodging results while experiencing lower impersonation results. This trade-off should be taken into account based on the problem's nature and the importance of dodging versus impersonation. #### 6.3.2 Sensitivity of parameter \(\gamma\) Parameter \(\gamma\) in the GA Fitness function weights the \(\mathcal{D}\)_Ploss_ of the \(\overline{\mathcal{M}\text{atchSet}}\) and \(\overline{\mathcal{D}\text{odgeSet}}\). We randomly selected 1000 \(\overline{\mathcal{M}\text{atchSet}}\) members and 500 \(\overline{\mathcal{D}\text{odgeSet}}\) members and tested the values of 0, 0.1, 0.3, 0.5, 0.7, 0.9, and 1 for parameter \(\gamma\). We repeated these experiments five times and reported the average. Figure 9 shows the results of these experiments. We observe that, when \(\gamma\) is 0, the focus is entirely on evading \(\overline{\mathcal{D}\text{odgeSet}}\) members, leading to the neglect of \(\overline{\mathcal{M}\text{atchSet}}\) members. On the other hand, when \(\gamma\) is 1, the coverage of \(\overline{\mathcal{M}\text{atchSet}}\) members peaks, while \(\overline{\mathcal{D}\text{odgeSet}}\) members are ignored during dodging, resulting in a coverage of a significant percentage of \(\overline{\mathcal{D}\text{odgeSet}}\) members. With an increase in \(\gamma\) to 0.1, \(\overline{\mathcal{M}\text{atchSet}}\) members gain considerable coverage while the coverage of \(\overline{\mathcal{D}\text{odgeSet}}\) does not increase significantly. As \(\gamma\) continues to grow, the emphasis on \(\overline{\mathcal{M}\text{atchSet}}\) increases, resulting in coverage of more members, while the emphasis on \(\overline{\mathcal{D}\text{odgeSet}}\) decreases, leading to less dodging of \(\overline{\mathcal{D}\text{odgeSet}}\) members. Overall, the results are fairly stable for values of \(\gamma\) between 0.1 and 0.9. ## 7 Conclusion We proposed the DodgePersonation Attack for attacking FV systems. Our definition and taxonomy encompass both existing and novel types of attacks to FV systems. The DodgePersonation Attack aims at finding images that impersonate members in \(\mathcal{M}\)atchSet while avoiding the \(\mathcal{D}\)odgeSet members. We introduced a novel approach called One Face to Rule Them All, which successfully deploys the DodgePersonation Attack by generating adversarial images that impersonate a set of identities while dodging others. Our proposed algorithm achieves state-of-the-art performance in known types of attacks while also achieving outstanding results for novel attacks. Moreover, the One Face to Rule Them All algorithm generates attack images using a source face. This capability in our solution is not present in previous research. Finally, we must highlight that the generated attack images are built to embed the smallest change possible, and thus, the modifications applied can pass unnoticed to the human eye. For future work, we plan to investigate the generalizability of our solution across different face descriptors. 
This could involve exploring the robustness of the solution to variations in the feature extraction process, which could have implications for the reliability of the system in real-world scenarios. The generalizability of our solution across different images of the same identity should also be explored. The exploration of our method on 3D images should also be considered, following a growing trend in related works [7]. A logical progression of this work involves experimenting with the printed version of the Attack Faces to carry out Physical or Presentation Attacks. Finally, we will consider the extension of the proposed algorithm to fingerprint recognition systems and investigate its effectiveness in generating a master fingerprint set.

\begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{\(th2\)} & \multirow{2}{*}{increase \%} & \multicolumn{2}{c}{Coverage} \\ \cline{3-4} & & \(\mathcal{M}\)atchSet & \(\mathcal{D}\)odgeSet \\ \hline 1.055 & - & 56.00 (54.68) & 7.65 (30.20) \\ 1.086 & 3\% & 41.56 (42.30) & 0.00 (3.04) \\ 1.097 & 4\% & 37.98 (39.50) & 0.00 (1.48) \\ 1.107 & 5\% & 30.46 (35.62) & 0.00 (0.60) \\ 1.118 & 6\% & 31.02 (33.72) & 0.00 (0.40) \\ \hline \hline \end{tabular} \end{table} Table 1: Coverage results for different \(\mathcal{D}\)odgeSet thresholds \(th2\) in Phases 1 and 2 (Phase 2 results in parentheses).
2307.16417
Effect of air pollution on the growth of diabetic population
Diabetes mellitus is a disease which is currently a huge health hazard globally. The number of diabetes cases has increased significantly in past decades, and it has been predicted that it will increase further in the future. Diabetes depends on various factors like obesity and physical inactivity, and it can also depend on various environmental issues. In this article, our main focus is to study the dependence of diabetic cases on air pollution. We have used data on the diabetic population and PM2.5 concentration in the air for five countries from 2010 to 2021. Here we have studied the correlation between the diabetic cases data and the PM2.5 concentration data. Also, we have performed a linear regression analysis to find whether this correlation is statistically significant.
Sourav Chowdhury, Suparna Roychowdhury, Indranath Chaudhuri
2023-07-31T05:56:15Z
http://arxiv.org/abs/2307.16417v1
# Effect of air pollution on the growth of diabetic population ###### Abstract Diabetes mellitus is a disease which is currently a huge health hazard globally. The number of diabetes cases has increased significantly in past decades, and it has been predicted that it will increase further in the future. Diabetes depends on various factors like obesity and physical inactivity, and it can also depend on various environmental issues. In this article, our main focus is to study the dependence of diabetic cases on air pollution. We have used data on the diabetic population and PM2.5 concentration in the air for five countries from 2010 to 2021. Here we have studied the correlation between the diabetic cases data and the PM2.5 concentration data. Also, we have performed a linear regression analysis to find whether this correlation is statistically significant. ## 1 Introduction Diabetes mellitus is a disease which is affecting global health widely. According to the IDF (International Diabetes Federation), the number of people with diabetes increased from 151 million to 537 million from 2000 to 2021 worldwide [1]. The IDF has also predicted that this number will shoot up to 783 million by 2045. A person can suffer from diabetes for various reasons, such as reduced insulin secretion from the pancreas (Type-1 diabetes) or insulin resistance in the body cells (Type-2 diabetes). There is also another type of diabetes, which occurs at the time of pregnancy, called gestational diabetes. Diabetes disrupts the homeostasis of the glucose level in the blood, and thus the blood glucose level remains higher than the normal level. This causes various short-term and long-term problems in the body. There are various factors behind diabetes. Physical inactivity, an unhealthy lifestyle, obesity, and smoking are the leading causes of diabetes [2, 3, 4]. Genetic factors are also a cause of diabetes [4]. However, various environmental effects also influence the growth of diabetic cases. There are some studies which show that the global temperature increase and various social factors are behind the increment of diabetic cases [5, 6]. Air pollution is a leading factor affecting the environment and thus the health of the world population. Due to rapid industrialization and globalization, air pollution has increased by a significant amount in past decades. Only recently have governments around the world been taking steps to control air pollution. However, it is still a major problem that the world is facing. The contradiction between going green and developing the industries that help advance the economy is ever constant and growing. There are various studies which show that air pollution has a major role behind type-2 diabetes mellitus and gestational diabetes [7, 8]. Polluted air contains PM2.5 particles, which cause oxidative stress, and as a result various problems like hypertension, asthma, and insulin resistance occur [9, 10, 11]. So it is important to study the effect of air pollution on diabetic cases. Thus, in this regard, we propose to explore the dependence of the ever-growing diabetic cases on air pollution as a world health hazard. Our paper is arranged as follows: In section 2, the data and data sources are discussed. Section 3 describes the methods of our work, and section 4 shows the results of this work. Finally, the concluding remarks are given in section 5.
## 2 Data In this work, we have used 2010 to 2021 diabetes data for various countries from the IDF (International Diabetes Federation) [12]. The PM2.5 concentration data (in \(\mu g/m^{3}\)) is taken from the World Bank data center (2010-2017) [13] and the IQAir website (2018-2021) [14]. An initial analysis has been done for five countries whose data is robust: India, China, France, Germany, and the UK. ## 3 Method and analysis Here, we have assumed that the increment of diabetic cases is directly correlated with the PM2.5 concentration in the air. So, from the IDF data we have calculated the yearly increment of diabetic cases. Then we have created a scatter plot of the yearly increment of diabetic cases against PM2.5 concentration for the various countries. The value of the correlation coefficient has been evaluated for each of these countries. ## 4 Results In this section, we have shown various results of our work. The estimated values of the correlation coefficients (\(r\)) between the PM2.5 data and the increment of diabetic cases for different countries are shown in Table 1. It is generally known that if \(r>0.8\) then the two variables are highly correlated and the correlation is positive [15]. From Table 1 it is seen that the \(r\) values for most of the countries are greater than 0.8. For China, the \(r\) value is slightly less than 0.8. The linear regression model fits to the data are shown in Figure 1. In Figure 1, the fitted regression model is plotted against the data with the confidence bounds. The outliers of the data are represented by red crossed points. From this figure it is seen that the linear regression model fits the data well. The p-values of the linear regression fit for these countries are shown in Table 2. If the p-value \(\leq 0.05\), then the correlation is statistically significant. Table 2 shows that the p-values for all countries are less than 0.05. Thus the correlation between PM2.5 concentration and the increment of diabetic cases is statistically significant. \begin{table} \begin{tabular}{|c|c|} \hline Countries & Values of \(r\) \\ \hline India & 0.8037 \\ China & 0.7668 \\ UK & 0.8584 \\ France & 0.8783 \\ Germany & 0.8719 \\ \hline \end{tabular} \end{table} Table 1: Values of correlation coefficients for different countries. \begin{table} \begin{tabular}{|c|c|} \hline Countries & \(p\)-values \\ \hline India & 0.00905 \\ China & 0.0059 \\ UK & 0.000351 \\ France & 0.000171 \\ Germany & 0.000218 \\ \hline \end{tabular} \end{table} Table 2: p-values of correlation coefficients for different countries.

Figure 1: Regression model fit to the increment of diabetic cases with PM2.5 concentration data for various countries. Here the red cross (\(\times\)) denotes the outliers of the data.

## 5 Conclusions In this section, we have summarized the important features and results of our work. Here our main motive is to find the correlation between diabetes and air pollution. Thus we have taken data on diabetic cases and PM2.5 concentration from 2010 to 2021 for India, China, the UK, France, and Germany. We have calculated the value of the correlation coefficient (\(r\)) between the yearly increment of diabetic cases and PM2.5 concentration. We have found that the value of \(r>0.8\) for most of the countries, which implies that these two variables are strongly correlated. Also, we have applied the regression analysis on this data for the different countries. For all of the countries, \(p<0.05\), which means that the correlation between the increment of diabetic cases and the PM2.5 concentration in the air is statistically significant.
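As a concrete sketch of the correlation and regression analysis described in Sections 3 and 4, the coefficient \(r\) and the regression p-value for one country can be computed with SciPy. The data loading, the alignment of PM2.5 values with the yearly increments, and the variable names are our own assumptions, not the authors' code:

```python
import numpy as np
from scipy import stats

def analyze_country(pm25, diabetic_population):
    """Correlate PM2.5 concentration with the yearly increment of the
    diabetic population and fit a simple linear regression.
    `pm25` and `diabetic_population` are yearly series (2010-2021)."""
    increment = np.diff(diabetic_population)       # yearly increase in cases
    x = np.asarray(pm25[1:], dtype=float)          # PM2.5 aligned to those years
    r, _ = stats.pearsonr(x, increment)            # correlation coefficient r
    fit = stats.linregress(x, increment)           # slope, intercept, p-value
    return r, fit.slope, fit.intercept, fit.pvalue
```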
Thus we can say that air pollution affects our health, and it is a significant reason behind the increment of diabetic cases around the world. There are various other factors which influence the increase of diabetic cases. In the future, we aim to consider other factors, like economic conditions and obesity prevalence, to understand their effects on the growth of diabetic cases globally. We also would like to include other countries in our analysis and increase the time span of our study to get a better idea of the environmental effects on diabetic cases. ## Acknowledgment The authors would like to thank the Department of Physics, St. Xavier's College, Kolkata for providing support during this work. One of the authors (S. C.) acknowledges the financial support provided by the University Grants Commission (UGC) of the Government of India, in the form of CSIR-UGC NET-JRF.
2309.08385
A Unified View Between Tensor Hypergraph Neural Networks And Signal Denoising
Hypergraph Neural networks (HyperGNNs) and hypergraph signal denoising (HyperGSD) are two fundamental topics in higher-order network modeling. Understanding the connection between these two domains is particularly useful for designing novel HyperGNNs from a HyperGSD perspective, and vice versa. In particular, the tensor-hypergraph convolutional network (T-HGCN) has emerged as a powerful architecture for preserving higher-order interactions on hypergraphs, and this work shows an equivalence relation between a HyperGSD problem and the T-HGCN. Inspired by this intriguing result, we further design a tensor-hypergraph iterative network (T-HGIN) based on the HyperGSD problem, which takes advantage of a multi-step updating scheme in every single layer. Numerical experiments are conducted to show the promising applications of the proposed T-HGIN approach.
Fuli Wang, Karelia Pena-Pena, Wei Qian, Gonzalo R. Arce
2023-09-15T13:19:31Z
http://arxiv.org/abs/2309.08385v1
# A Unified View Between Tensor Hypergraph Neural Networks And Signal Denoising ###### Abstract Hypergraph Neural networks (HyperGNNs) and hypergraph signal denoising (HyperGSD) are two fundamental topics in higher-order network modeling. Understanding the connection between these two domains is particularly useful for designing novel HyperGNNs from a HyperGSD perspective, and vice versa. In particular, the tensor-hypergraph convolutional network (T-HGCN) has emerged as a powerful architecture for preserving higher-order interactions on hypergraphs, and this work shows an equivalence relation between a HyperGSD problem and the T-HGCN. Inspired by this intriguing result, we further design a tensor-hypergraph iterative network (T-HGIN) based on the HyperGSD problem, which takes advantage of a multi-step updating scheme in every single layer. Numerical experiments are conducted to show the promising applications of the proposed T-HGIN approach. Hypergraph Neural Network, Hypergraph Signal Denoising, Hypergraph Tensor. ## I Introduction Hypergraphs are ubiquitous in real-world applications for representing interacting entities. Potential examples include biochemical reactions that often involve more than two interactive proteins [1], recommendation systems that contain more than two items in a shopping activity [2], and traffic flows that can be determined by more than two locations [3]. In a hypergraph, entities are described as vertices/nodes, and multiple connected nodes form a hyperedge as shown in Fig. 1 (b, c) of a hypergraph example. A hypergraph \(\mathcal{G}\) is defined as a pair of two sets \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{v_{1},v_{2},...,v_{N}\}\) denotes the set of \(N\) nodes and \(\mathcal{E}=\{e_{1},e_{2},...,e_{K}\}\) is the set of \(K\) hyperedges whose elements \(e_{k}\) (\(k=1,2,...,K\)) are nonempty subsets of \(\mathcal{V}\). The maximum cardinality of edges, or \(m.c.e(\mathcal{G})\), is denoted by \(M\), which defines the order of a hypergraph. Apart from the hypergraph structure, there are also features \(\mathbf{x}_{v}\in\mathbb{R}^{D}\) associated with each node \(v\in\mathcal{V}\), which are used as row vectors to construct the feature matrix \(\mathbf{X}\in\mathbb{R}^{N\times D}\) of a hypergraph. From a hypergraph signal processing perspective, since the feature matrix \(\mathbf{X}\) can be viewed as a \(D\)-dimensional signal over each node, we use the words "feature" and "signal" interchangeably throughout the paper. Given the hypergraph structure \(\mathcal{G}\) and the associated feature matrix \(\mathbf{X}\), hypergraph neural networks (HyperGNNs) are built through two operations: 1) signal transformation and 2) signal shifting to leverage higher-order information. Specifically, if a HyperGNN is defined in a matrix setting, these two steps can be written as follows: \[\left\{\begin{array}{cl}\text{Signal transformation: }\mathbf{X}^{\prime}=\phi_{trans}( \mathbf{X};\mathcal{W});\\ \text{Signal shifting: }\mathbf{Y}=\phi_{shift}(\mathbf{X}^{\prime}, \mathcal{G});\end{array}\right. \tag{1}\] where \(\mathbf{X}^{\prime}\) is the transformed signal in a desired hidden dimension \(D^{\prime}\) and \(\mathbf{Y}\) represents the linear combination of signals at the neighbors of each node according to the hypergraph structure \(\mathcal{G}\). While here the variables are denoted by matrices, in fact, a tensor paradigm provides significant advantages [4] as will be introduced later, and thus will be at the core of this paper context. 
The signal transformation function \(\phi_{trans}\) is parameterized by a learnable weight \(\mathcal{W}\) and is generally constructed by multi-layer perceptrons (MLPs). As a result, the variation of HyperGNNs mainly lies in the signal-shifting step. To make use of the hypergraph structure in the signal-shifting step, an appropriate hypergraph algebraic descriptor is required. Prior efforts on HyperGNNs primarily focus on matrix representations of hypergraphs with possible information loss [4, 5]. Consider one of the most common hypergraph matrix representations, the adjacency matrix of the clique-expanded hypergraph used in [6, 7], which constructs pairwise connections between any two nodes that are within the same hyperedge, thus only providing a non-injective mapping. As shown in Fig. 1, hypergraphs (b) \(\mathcal{G}_{1}\) and (c) \(\mathcal{G}_{2}\) have the same pairwise connections as the simple graph of Fig. 1 (a).

Fig. 1: Robot collaboration network represented by (a) a simple graph and (b) a hypergraph \(\mathcal{G}_{1}\) and (c) another hypergraph \(\mathcal{G}_{2}\). In (a), each cooperation relationship is denoted by a line connecting exactly two entities; whereas in (b) and (c), each hyperedge denoted by a colored ellipse represents multi-robot cooperation.

Recently, a tensor-based HyperGNN framework, T-HyperGNN [8], has been proposed to address potential information loss in matrix-based HyperGNNs. Specifically, the T-HyperGNN formulates the tensor-hypergraph convolutional network (T-HGCN) via tensor-tensor multiplications (t-products) [9], which fully exploits higher-order features carried by a hypergraph. Interestingly, we find that the hypergraph signal shifting in T-HGCN is equivalent to a one-step gradient descent of solving a hypergraph signal denoising (HyperGSD) problem (to be shown in Sec. III). Nevertheless, updating the gradient in one step per HyperGNN layer might be sub-optimal: For the two steps of HyperGNNs, only the signal shifting step corresponds to the gradient descent update. If we simply stack many layers of T-HGCN to perform multi-step gradient descent as shown in Fig. 2(a), the number of learnable parameters will unnecessarily increase. More importantly, numerous sequential transformations of the hypergraph signals could cause indistinguishable features across all nodes, leading to the well-known over-smoothing problem [10]. To overcome these issues, we propose an iterative \(K\)-step gradient descent procedure to solve the underlying HyperGSD problem, and further cast this procedure to formulate the novel **Tensor-hypergraph iterative network (T-HGIN)**, which combines the \(K\)-step updating process (signal shifting) in just a single layer as shown in Fig. 2(b). Additionally, T-HGIN leverages the initial input (with weight \(\alpha\)) and the current output (with weight \(1-\alpha\)) at each shifting step, performing a skip-connection operation that avoids over-smoothing. ## II Preliminaries ### _Hypergraph tensor representations and signal shifting_ While a hypergraph can be represented in either a matrix or a tensor form, in this work, we use tensorial descriptors to represent hypergraphs as they preserve intrinsic higher-order characteristics of hypergraphs [11]. Given a hypergraph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) containing \(N\) nodes with order \(M\) (that is, \(m.c.e(\mathcal{G})=M\)), we define its **normalized adjacency tensor** as an \(M\)-order \(N\)-dimensional tensor \(\mathcal{A}\in\mathbb{R}^{N^{M}}\).
Specifically, for any hyperedge \(e_{k}=\{v_{k_{1}},v_{k_{2}},...,v_{k_{k}}\}\in\mathcal{E}\) with \(c=|e_{k}|\leq M\), the tensor's corresponding entries are given by \[a_{p_{1}p_{2}...p_{M}}=\frac{1}{d(v_{p_{1}})}\frac{c}{\alpha}, \tag{2}\] with \[\alpha=\sum_{r_{1},r_{2},...,r_{c}\geq 1,\sum_{i=1}^{c}r_{i}=M}\binom{M}{r_{1},r_{2},\cdots,r_{c}}, \tag{3}\] and \(d(v_{p_{1}})\) being the degree of node \(v_{p_{1}}\) (or the total number of hyperedges containing \(v_{p_{i}}\)). The indices \(p_{1},p_{2},...,p_{M}\) for adjacency entries are chosen from all possible ways of \(\{k_{1},k_{2},...,k_{c}\}\)'s permutations with at least one appearance for each element of the hyperedge set, and \(\alpha\) is the sum of multinomial coefficients with the additional constraint \(r_{1},r_{2},...,r_{c}\neq 0\). In addition, other entries not associated with any hyperedge are all zeros. Note that for any node \(v_{p_{1}}\in\mathcal{V}\), we have \(\sum_{p_{2},...,p_{M}=1}^{N}a_{p_{1}p_{2}...p_{M}}=1\). The **hypergraph signal tensor**, on the other hand, is designed as the \((M-1)\)-time outer product of features along each feature dimension. Given the feature (or signal) matrix \(\mathbf{X}\in\mathbb{R}^{N\times D}\) as the input, with \(D\) being the dimension of features for each node, the \(d\)-th dimensional hypergraph signal (\(d=1,\cdots,D\)) is given by \[[\mathcal{X}]_{d}=\underbrace{[\mathbf{x}]_{d}\circ[\mathbf{x}]_{d}\circ \cdots\circ[\mathbf{x}]_{d}}_{\text{(M-1) times}}\in\mathbb{R}^{N\times 1\times N ^{(M-2)}}, \tag{4}\] where \(\circ\) denotes the outer (elementary tensor) product, and \([\mathbf{x}]_{d}\in\mathbb{R}^{N}\) represents the \(d\)-th dimensional feature vector of all \(N\) nodes. For example, given \(M=3\), \([\mathcal{X}]_{d}=[\mathbf{x}]_{d}[\mathbf{x}]_{d}^{T}\in\mathbb{R}^{N\times 1 \times N}\), where we unsqueeze the outer-product tensor to generate the additional second mode for the dimension index of different features. Then by computing \([\mathcal{X}]_{d}\) for all \(D\) features and stacking them together along the second-order dimension, we obtain an \(M^{\text{th}}\)-order interaction tensor \(\mathcal{X}\in\mathbb{R}^{N\times D\times N^{(M-2)}}\). The resulting interaction tensor can be viewed as a collection of \(D\) tensors, each depicting node interactions at one feature dimension. Analogous to the simple graph signal shifting, **hypergraph signal shifting** is defined as the product of a hypergraph representation tensor \(\mathcal{A}\) and a hypergraph signal tensor \(\mathcal{X}\), offering the notion of information flow over a hypergraph. The tensor-tensor multiplications (known as t-products), in particular, preserve the intrinsic higher-order properties and are utilized to operate hypergraph signal shifting [11]. Take \(M=3\) as a convenient example of the t-product. To provide an appropriate alignment in the t-product signal shifting (to be introduced in Eq. (7)), we first symmetrize the adjacency tensor \(\mathcal{A}\in\mathbb{R}^{N\times N\times N}\) to be \(\mathcal{A}s\in\mathbb{R}^{N\times N\times(2N+1)}\) by adding a zero matrix \(\mathbf{0}N\times N\) as the first frontal slice, reflecting the frontal slice of the underlying tensor, and then dividing by 2: \(\mathcal{A}_{s}=\frac{1}{2}\)\(\texttt{fold}([\mathbf{0},\mathbf{A}^{(1)},\mathbf{A}^{(2)},...,\mathbf{A}^{(N)}, \mathbf{A}^{(N)},...,\mathbf{A}^{(2)},\mathbf{A}^{(1)}])\), where the \(k\)-th frontal slice is \(\mathbf{A}^{(k)}=\mathcal{A}(:,:,k)\in\mathbb{R}^{N\times N\times 1}\). 
After applying the same operation to the hypergraph tensor \(\mathcal{X}\) and obtaining \(\mathcal{X}_{s}\), the hypergraph signal shifting is then defined through the t-product \(*\) as \[\mathcal{A}_{s}*\mathcal{X}_{s} \tag{5}\] \[=\texttt{fold}(\texttt{bcirc}(\mathcal{A}_{s})\cdot\texttt{unfold}(\mathcal{X}_{s})) \tag{6}\] \[=\texttt{fold}\begin{pmatrix}\begin{bmatrix}\mathbf{0}&\mathbf{A}^{(1)}&\mathbf{A}^{(2)}&\cdots&\mathbf{A}^{(2)}&\mathbf{A}^{(1)}\\ \mathbf{A}^{(1)}&\mathbf{0}&\mathbf{A}^{(1)}&\cdots&\mathbf{A}^{(3)}&\mathbf{A}^{(2)}\\ \mathbf{A}^{(2)}&\mathbf{A}^{(1)}&\mathbf{0}&\cdots&\mathbf{A}^{(4)}&\mathbf{A}^{(3)}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ \mathbf{A}^{(2)}&\mathbf{A}^{(3)}&\mathbf{A}^{(4)}&\cdots&\mathbf{0}&\mathbf{A}^{(1)}\\ \mathbf{A}^{(1)}&\mathbf{A}^{(2)}&\mathbf{A}^{(3)}&\cdots&\mathbf{A}^{(1)}&\mathbf{0}\end{bmatrix}\begin{bmatrix}\mathbf{0}\\ \mathbf{X}^{(1)}\\ \mathbf{X}^{(2)}\\ \vdots\\ \mathbf{X}^{(1)}\end{bmatrix}\end{pmatrix}, \tag{7}\] where \(\mathtt{bcirc}(\mathcal{A}_{s})\) converts the set of \(N_{s}\) frontal slice matrices (in \(\mathbb{R}^{N\times N}\)) of the tensor \(\mathcal{A}_{s}\) into a block circulant matrix. The \(\mathtt{unfold}(\mathcal{X}_{s})\) stacks vertically the set of \(N_{s}\) frontal slice matrices (in \(\mathbb{R}^{N\times D}\)) of \(\mathcal{X}_{s}\) into a \(N_{s}N\times D\) matrix. The \(\mathtt{fold}()\) is the reverse of the \(\mathtt{unfold}()\) process so that \(\mathtt{fold}(\mathtt{unfold}(\mathcal{A}_{s}))=\mathcal{A}_{s}\). The t-product of higher order tensors is more involved with recursive computation with \(3^{\mathrm{rd}}\) order base cases. To maintain presentation brevity here, a reader may refer to literature [9] for full technical details of the t-product \(*\).

Fig. 2: To perform \(K\)-step gradient descent for the underlying hypergraph signal denoising problem, we need (a) K-layer T-HGCN or alternatively (b) 1-layer T-HGIN.

### _Tensor-Hypergraph Convolutional Neural Network_ With the defined hypergraph signal shifting operation, a single T-HGCN [8] layer is given by \(\mathcal{Y}_{s}=\mathcal{A}_{s}*\mathcal{X}_{s}*\mathcal{W}_{s}\), where \(\mathcal{W}_{s}\in\mathbb{R}^{D\times D^{\prime}\times N_{s}^{(M-2)}}\) is a learnable weight tensor with \(DD^{\prime}\) weights parameterized in the first frontal slice and all the remaining frontal slices being zeros. Since the t-product is commutable [9], we rewrite the T-HGCN into the following two steps: \[\begin{cases}&\text{Signal transformation: }\mathcal{X}_{s}^{\prime}=\text{MLP}(\mathcal{X}_{s});\\ &\text{Signal shifting: }\mathcal{Y}_{s}=\mathcal{A}_{s}*\mathcal{X}_{s}^{\prime},\end{cases} \tag{8}\] where \(\mathcal{X}_{s}\in\mathbb{R}^{N\times D\times N_{s}^{(M-2)}}\) and \(\mathcal{Y}_{s}\in\mathbb{R}^{N\times D^{\prime}\times N_{s}^{(M-2)}}\) are the input and output of a T-HGCN layer. To perform downstream tasks, non-linear activation functions can be applied to \(\mathcal{Y}_{s}\) accordingly. ## III Equivalence Between T-HGCN and Tensor Hypergraph Signal Denoising Recall that the signal-shifting function \(\phi_{shift}\) aggregates neighboring signals to infer the target signal of each node. The intuition behind the architecture of HyperGNNs (especially the signal shifting) is that connected nodes tend to share similar properties, that is, signals over a hypergraph are smooth.
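To make Eqs. (5)-(8) concrete before turning to the denoising view, here is a minimal NumPy sketch of the third-order t-product shifting. The symmetrized slice list and the bcirc/unfold/fold conventions follow the equations above; the dense construction and the function names are for illustration only (t-products are typically computed more efficiently in the Fourier domain):

```python
import numpy as np

def symmetrize(slices):
    """Frontal slices [0, B1, ..., BN, BN, ..., B1] scaled by 1/2."""
    zero = np.zeros_like(slices[0])
    return [0.5 * b for b in [zero] + list(slices) + list(slices[::-1])]

def bcirc(slices):
    """Block-circulant matrix whose first block column stacks the slices."""
    n = len(slices)
    return np.vstack([np.hstack([slices[(i - j) % n] for j in range(n)])
                      for i in range(n)])

def t_product(a_slices, x_slices):
    """A_s * X_s = fold(bcirc(A_s) @ unfold(X_s)) for third-order tensors.
    `a_slices`: list of (N, N) slices of A_s; `x_slices`: list of (N, D) slices."""
    shifted = bcirc(a_slices) @ np.vstack(x_slices)          # bcirc @ unfold
    rows, _ = x_slices[0].shape
    return [shifted[i * rows:(i + 1) * rows] for i in range(len(x_slices))]  # fold

# T-HGCN-style shifting (Eq. (8)):
# y_slices = t_product(symmetrize(adj_slices), symmetrize(mlp_out_slices))
```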
Motivated by this intuition and previous work [12] on simple graphs, we introduce the tensor Hypergraph signal denoising (HyperGSD) problem with the smoothness regularization term and prove its equivalency to T-HGCN in this section. ### _Tensor Hypergraph Signal Denoising_ **Problem (Hypergraph Signal Denoising).** Suppose \(\mathcal{X}_{s}\in\mathbb{R}^{N\times D\times N_{s}^{(M-2)}}\) is the hypergraph signal tensor of an observed noisy signal on an \(M^{\mathrm{th}}\)-order hypergraph \(\mathcal{G}\). Without loss of generality, we assume \(D=1\) (if \(D>1\), we can simply take summation over all feature dimensions and obtain the same result). Motivated by a smoothness assumption of hypergraph signals, we formulate the HyperGSD problem with the Laplacian-based total variation regularization term as follows: \[\operatorname*{argmin}_{\mathcal{Y}_{s}}\mathcal{J}=(\mathcal{Y}_{s}-\mathcal{X}_{s})^{T}*(\mathcal{Y}_{s}-\mathcal{X}_{s})+b\mathcal{Y}_{s}^{T}*\mathcal{L}_{s}*\mathcal{Y}_{s}, \tag{9}\] where \(\mathcal{Y}_{s}\in\mathbb{R}^{N\times 1\times N_{s}^{(M-2)}}\) is the desired hypergraph signal that we aim to recover, \(b>0\) is a scalar for the regularization term, and the last \(M-2\) orders of all the tensors are flattened as frontal slice indices to simplify the t-product. Here, \(\mathcal{L}_{s}=\mathcal{I}_{s}-\mathcal{A}_{s}\) is the normalized symmetric Laplacian tensor, and \(\mathcal{I}_{s}\) is an identity tensor (with the first frontal slice being the identity matrix and the other entries being zero). The tensor transpose of \(\mathcal{Y}_{s}\in\mathbb{R}^{N\times 1\times N_{s}^{(M-2)}}\), under the t-algebra, is defined as \(\mathcal{Y}_{s}^{T}\in\mathbb{R}^{1\times N\times N_{s}^{(M-2)}}\), which is obtained by recursively transposing each sub-order tensor and then reversing the order of these sub-order tensors [9]. The first term encourages the recovered signal \(\mathcal{Y}_{s}\) to be close to the observed signal \(\mathcal{X}_{s}\), while the second term encodes the regularization as neighboring hypergraph signals tend to be similar. Notice that the cost function \(\mathcal{J}(\mathcal{Y}_{s})\) is not a scalar, but a tensor in \(\mathbb{R}^{1\times 1\times N_{s}^{(M-2)}}\). ### _T-HGCN as Hypergraph Signal Denoising_ Next, we show the key insight that the hypergraph signal shifting operation in the T-HGCN is directly connected to the HyperGSD problem, which is given in the following theorem. **Theorem III.1**: _The hypergraph signal shifting \(\mathcal{Y}_{s}=\mathcal{A}_{s}*\mathcal{X}_{s}\) is equivalent to a one-step gradient descent of solving the leading function of the HyperGSD problem Eq. (9) with \(c=\frac{1}{2b}\), where \(c\) is the learning rate of the gradient descent step._ _Proof._ First take the derivative of the cost function \(\mathcal{J}(\mathcal{Y}_{s})\) w.r.t. \(\mathcal{Y}_{s}\): \[\frac{\partial\mathcal{J}}{\partial\mathcal{Y}_{s}}=2\cdot\mathtt{bcirc}(\mathcal{Y}_{s}-\mathcal{X}_{s})+2b\cdot\mathtt{bcirc}(\mathcal{L}_{s}*\mathcal{Y}_{s}). \tag{10}\] Recall from Eq. (7) that the \(\mathtt{bcirc}(\cdot)\) operation has the first column being the unfolded \(2N+1\) frontal slices, and the other columns being the cyclic shifting of the first column. When updating \(\mathcal{Y}_{s}\) using one-step gradient descent, the first column of a block circulant tensor is sufficient, as it contains all information of updating \(\mathcal{Y}_{s}\), and the remaining columns differ from the first column in order only.
Using the leading function \(\mathcal{J}_{1}\) for Eq. (10), which gives the first block column of the circulant tensor \(\frac{\partial\mathcal{J}}{\partial\mathcal{Y}_{s}}\), we can simply drop the \(\mathtt{bcirc}(\cdot)\) operation so that the one-step gradient descent to update \(\mathcal{Y}_{s}\) from \(\mathcal{X}_{s}\) is \[\mathcal{Y}_{s} \leftarrow\mathcal{X}_{s}-c\frac{\partial\mathcal{J}_{1}}{ \partial\mathcal{Y}_{s}}\Big{|}_{\mathcal{Y}=\mathcal{X}_{s}} \tag{11}\] \[=\mathcal{X}_{s}-2bc(\mathcal{L}_{s}*\mathcal{X}_{s})\] (12) \[=(1-2bc)\mathcal{X}_{s}+2bc\mathcal{A}_{s}*\mathcal{X}_{s}. \tag{13}\] Given learning rate \(c=\frac{1}{2b}\), we obtain \(\mathcal{Y}_{s}\leftarrow\mathcal{A}_{s}*\mathcal{X}_{s}\), which is the same form as the shifting operation in Eq. (8). \(\square\) This theorem implies that a single layer of T-HGCN [8] is essentially equivalent to solving the HyperGSD problem by one-step gradient descent. Correspondingly, performing a \(K\)-step gradient descent would require \(K\) layers of T-HGCN, which could much increase the number of learnable parameters. As a result, a question naturally arises: Can we perform multi-step gradient descent toward the HyperGSD problem with just a single layer of HyperGNNs? We provide an affirmative answer by proposing the T-HGIN approach in the next section. ## IV Tensor-Hypergraph Iterative Network With the goal of merging multi-step gradient descent into a single HyperGNN, we first propose the \(K\)-step iterative gradient descent for the HyperGSD problem in Eq. (9). Then we adopt the iteration process to design the Tensor-Hypergraph Iterative Network (T-HGIN). **Iterative Gradient Descent for Signal Denoising.** Given the gradient of the HyperGSD problem in Eq. (10), we now update the gradient iteratively to obtain the sequence of hypergraph signals \((\mathcal{Y}_{s}^{(0)},\mathcal{Y}_{s}^{(1)},\mathcal{Y}_{s}^{(2)},..., \mathcal{Y}_{s}^{(K)})\) with the following iterative process: \[\mathcal{Y}_{s}^{(k)} \leftarrow\mathcal{Y}_{s}^{(k-1)}-c\frac{\partial\mathcal{J}_{1} }{\partial\mathcal{Y}_{s}}\Big{|}_{\mathcal{Y}_{s}=\mathcal{Y}_{s}^{(k-1)}}\] \[=(1-2b-2bc)\mathcal{Y}_{s}^{(k-1)}+2b\mathcal{X}_{s}+2bc\mathcal{ A}_{s}*\mathcal{Y}_{s}^{(k-1)}, \tag{14}\] where \(\mathcal{Y}_{s}^{(k)}\) with \(k=1,...,K\) are iteratively updated clean hypergraph signals and the starting point is \(\mathcal{Y}_{s}^{(0)}=\mathcal{X}_{s}\). **From Iterative Signal Denoising To T-HGIN.** From the updating rule above, we then formulate T-HGIN by a slight variation to Eq. (14). Setting the regularization parameter \(b=\frac{1}{2(1+c)}\), we then obtain that \[\mathcal{Y}_{s}^{(k)}\gets 2b\mathcal{X}_{s}+2bc\mathcal{A}_{s}*\mathcal{Y}_ {s}^{(k-1)}. \tag{15}\] Since \(2b+2bc=1\), setting \(2b=\alpha\) implies that \(2bc=1-\alpha\). Consequently, a single layer of the T-HGIN is formulated as \[\left\{\begin{array}{l}\text{Signal transformation: }\mathcal{X}_{s}^{\prime}= \text{MLP}(\mathcal{X}_{s});\\ \text{Signal shifting: }\mathcal{Y}_{s}^{(k)}=\alpha\mathcal{X}_{s}^{\prime}+(1- \alpha)\mathcal{A}_{s}*\mathcal{Y}_{s}^{(k-1)},\end{array}\right. \tag{16}\] with \(k=1,...,K\), \(\mathcal{Y}_{s}^{(0)}=\mathcal{X}_{s}^{\prime}\) and \(\alpha\in[0,1]\). The signal transformation is constructed by a MLP. 
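Reusing the illustrative `t_product` helper sketched in Section II, the \(K\)-step shifting of Eq. (16) can be written as a short loop; all names below are ours and the snippet is a minimal sketch rather than a full layer implementation (the MLP transformation is assumed to have produced the slices of \(\mathcal{X}_{s}^{\prime}\) already):

```python
def t_hgin_shift(a_slices, x_prime_slices, alpha, K):
    """Signal shifting of one T-HGIN layer (Eq. (16)):
    Y^(k) = alpha * X' + (1 - alpha) * A_s * Y^(k-1),  with Y^(0) = X'."""
    y = [s.copy() for s in x_prime_slices]
    for _ in range(K):
        shifted = t_product(a_slices, y)               # A_s * Y^(k-1)
        y = [alpha * xp + (1 - alpha) * sh             # skip-connection to X'
             for xp, sh in zip(x_prime_slices, shifted)]
    return y
```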
The signal shifting of the T-HGIN can be roughly viewed as an iterative personalized PageRank [10], where \(\alpha\) is the probability that a node will teleport back to the original node and \(1-\alpha\) is the probability of taking a random walk on the hypergraph through the hypergraph signal shifting. In fact, when \(\alpha=0\) and \(K=1\), the T-HGIN is the same as the T-HGCN, indicating that the T-HGCN could be subsumed in the proposed T-HGIN framework. In addition, T-HGIN has three major advantages compared to T-HGCN: 1. As shown in Fig. 2, a \(K\)-layer T-HGCN is required to perform \(K\) steps of hypergraph signal shifting, but in contrast, the T-HGIN breaks this required equivalence between the depth of neural networks and the steps of signal shifting, allowing any steps of signal shifting in just one layer. 2. The T-HGIN leverages the information contained in the original hypergraph signal \(\mathcal{X}_{s}\), which performs a "skip-connection" analogous to ResNet [13] and mitigates the potential over-smoothing problem [10] as the neural network is going deep to aggregate broader neighborhood. 3. Although the \(K\)-step hypergraph signal shifting is somewhat involved, the number of learnable parameters remains the same as only one layer of the T-HGCN. As shown in the following experiment, the T-HGIN can often achieve better performance than other alternative HyperGNNs that would require more learnable parameters. ## V Experiments We evaluate the proposed T-HGIN approach on three real-world academic networks and compare it to four state-of-the-art benchmarks. The experiment aims to conduct a semi-supervised node classification task, in which each node is an academic paper and each class is a research category. We use the accuracy rate to be the metric of model performance. For each reported accuracy rate, \(50\) experiments are performed to compute the mean and the standard deviation of the accuracy rates. We use the Adam optimizer with a learning rate and the weight decay choosing from \(\{0.01,0.001\}\) and \(\{0.005,0.0005\}\) respectively, and tune the hidden dimensions over \(\{64,128,256,512\}\) for all the methods. **Datasets.** The hypergraph datasets we used are the co-citation datasets (Cora, CiteSeer, and PubMed) in the academic network. The hypergraph structure is obtained by viewing each co-citation relationship as a hyperedge. The node features associated with each paper are the bag-of-words representations summarized from the abstract of each paper, and the node labels are research categories (e.g., algorithm, computing, etc). For expedited proof of concept, the raw datasets from [14] are downsampled to smaller hypergraphs. The descriptive statistics of these hypergraphs are summarized in Table I. **Experiment Setup and Benchmarks.** To classify the labels of testing nodes, we feed the whole hypergraph structure and node features to the model. The training, validation, and testing data are set to be \(50\%,25\%\), and \(25\%\) for each complete dataset, respectively. We choose regular multi-layer perceptron (MLP), HGNN [6], HyperGCN [14], and HNHN [15] as the benchmarks. In particular, the HGNN and the HyperGCN utilize hypergraph reduction approaches to define the hypergraph adjacency matrix and Laplacian matrix, which may result in higher-order structural distortion [5]. The HNHN formulates a two-stage propagation rule using the incidence matrix, which does not use higher-order interactions of the hypergraph signal tensor [8]. 
Following the convention of HyperGNNs, we set the number of layers for all HyperGNNs to be 2 to avoid over-smoothing except for the T-HGCN and the proposed T-HGIN. For the T-HGCN and the T-HGIN, we use only one layer: the T-HGCN's accuracy decreases when the number of layers is greater than one, while the T-HGIN can achieve a deeper HyperGNN architecture by varying the times of iteration \(K\) within one layer as shown in Fig. 2 (b). The grid search is used to tune the two hyperparameters \(K\) and \(\alpha\) through four evenly spaced intervals in both \(K\in[1,5]\) and \(\alpha\in[0.1,0.5]\) **Results and Discussion.** The averaged accuracy rates are summarized in Table II, which shows that our proposed \(K\)-step shifting entailed T-HGIN achieves the best performance among the state-of-the-art HyperGNNs on the three hypergraphs. While high variances of the results often occur to other existing HyperGNNs in these data examples, the proposed T-HGIN desirably shows only relatively moderate variance. **The effect of the number of iterations.** Interestingly, the optimal values selected for \(K\) coincide with the maximum shortest path on the underlying hypergraphs, the observation of which is consistent with that of [10]. To some extent, this phenomenon supports the advantage of the proposed T-HGIN over other "shallow" HyperGNNs that perform only one or two steps of signal shifting. Equipped with the multi-step iteration and the skip-connection mechanism, the T-HGIN is able to fully propagate across the whole hypergraph, and importantly, avoid the over-smoothing issue at the same time. **The effect of the teleport probability.** Regarding the teleport parameter \(\alpha\), the optimal selected values for the three datasets are \(\{0.1,0.1,0.3\}\), respectively. Empirically, the selection of \(\alpha\)'s could depend on the connectivity of nodes. For example, the PubMed hypergraph has more isolated connected components and tends to require a higher value of \(\alpha\). A direct visualization for the PubMed network is also shown in Fig. 3 using one representative run of the experiment, which shows that the tensor-based approaches appear to give more satisfactory performance than the classic matrix-based HyperGNN; the proposed T-HGIN further improves upon the T-HGCN, confirming the effectiveness of the proposed multi-step iteration scheme. ## VI Conclusion In the context of Tensor-HyperGraph Neural Networks (T-HyperGNNs), this work demonstrates that the hypergraph signal shifting of T-HGCN is equivalent to a one-step gradient descent of solving the hypergraph signal denoising problem. Based on this equivalency, we propose a \(K\)-step gradient descent rule and formulate a new hypergraph neural network - Tensor-Hypergraph Iterative Network (T-HGIN). Compared to the T-HGCN, the T-HGIN benefits from the construction of \(K\)-step propagation in one single layer, offering an efficient way to perform propagation that spreads out to a larger-sized neighborhood. Satisfactorily, the proposed T-HGIN achieves competitive performance on multiple hypergraph data examples, showing its promising potential in real-world applications. We also note that the equivalency between HyperGNNs and HyperGSDs can also be utilized to design neural networks for denoising like in [16, 17], and we will leave this as an interesting extension for future studies. 
## Acknowledgment This work was partially supported by the NSF under grants CCF-2230161, DMS-1916376, the AFOSR award FA9550-22-1-0362, and by the Institute of Financial Services Analytics.
2309.16805
Cascaded Nonlinear Control Design for Highly Underactuated Balance Robots
This paper presents a nonlinear control design for highly underactuated balance robots, which possess more unactuated degrees of freedom (DOFs) than actuated ones. To address the challenge of simultaneously tracking trajectories of the actuated coordinates and balancing the unactuated coordinates, the proposed control converts the robot dynamics into a series of cascaded subsystems, each of which is considered virtually actuated. To achieve the control goal, we sequentially design and update the virtual and actual control inputs to incorporate the balance task such that the unactuated coordinates are balanced to their instantaneous equilibrium. The closed-loop dynamics are shown to be stable and the tracking errors converge exponentially towards a neighborhood of the origin. Simulation results on a triple-inverted pendulum cart system demonstrate the effectiveness of the proposed control design.
Feng Han, Jingang Yi
2023-09-28T19:18:14Z
http://arxiv.org/abs/2309.16805v2
# Cascaded Nonlinear Control Design for Highly Underactuated Balance Robots ###### Abstract This paper presents a nonlinear control design for highly underactuated balance robots, which possess more unactuated degrees of freedom (DOFs) than actuated ones. To address the challenge of simultaneously tracking trajectories of the actuated coordinates and balancing the unactuated coordinates, the proposed control converts the robot dynamics into a series of cascaded subsystems, each of which is considered virtually actuated. To achieve the control goal, we sequentially design and update the virtual and actual control inputs to incorporate the balance task such that the unactuated coordinates are balanced to their instantaneous equilibrium. The closed-loop dynamics are shown to be stable and the tracking errors converge exponentially towards a neighborhood of the origin. Simulation results on a triple-inverted pendulum cart system demonstrate the effectiveness of the proposed control design. ## I Introduction Underactuated robots have fewer control inputs than degrees of freedom (DOFs). Highly underactuated balance robots possess more unactuated DOFs than actuated ones. Control design for underactuated balance robots faces the challenge of using limited control actuation for simultaneous trajectory tracking and platform balance. Most existing works focus on underactuated balance systems with more actuated coordinates than unactuated ones. For instance, the cart-pole system has one input with 2-DOF [1, 2], the five-link bipedal walker robot has six inputs with 7-DOF [3, 4], and the autonomous bicycle robot has two inputs with 3-DOF [5, 6]. There are various well-developed control frameworks for those systems, including the external and internal convertible form-based control (i.e., EIC-based control) [7], orbital stabilization [8], energy-shaping based control [9], etc. Both model-based and machine learning-based control approaches have been extensively studied [2, 10]. However, for highly underactuated balance robots, such as a triple passive inverted pendulum on a controlled cart (i.e., one input with 4-DOF), those control approaches might not work properly. For instance, it remains an open problem for periodic orbit stabilization designs to guarantee the balance of multiple unactuated coordinates. For highly underactuated balance robots with more unactuated than actuated coordinates, the inherently unstable behavior and the coupled dynamics between the coordinates impose great challenges on control system design [11, 12, 13]. With highly limited control actuation available, there exist strong task conflicts. To reduce the design complexity, most of the existing works focus on stabilization control. Linearization of the nonlinear system and pole placement/LQR (linear quadratic regulation) techniques are popular methods [12, 14, 15, 16]. The work in [14] presented an LQR-based robust control for a triple-inverted pendulum cart system, and a fault tolerant control was proposed for a double-inverted pendulum cart system using a linearized model [16]. In [17], the authors enhanced the inversion-based approach (e.g., [18]) towards the stabilization of a periodic orbit of a multi-link triple pendulum on a cart. To this end, a two-point boundary value problem was formulated to obtain a nominal trajectory, and the control design was obtained via a linear-quadratic-Gaussian controller.
However, simultaneous trajectory tracking and platform balance control remain a challenge for highly underactuated balance robots. Among the aforementioned control methods, the EIC-based control has been demonstrated to be an effective approach for underactuated balance robots. It has been applied to underactuated balance robots that have more actuated than unactuated DOFs, including the inverted pendulum [2], the autonomous bikebot [5, 10], and an aggressive vehicle under ski-stunt maneuvers [19]. The unstable, unactuated subsystem is balanced onto a balance equilibrium manifold (BEM), and trajectory tracking and platform balance control are achieved simultaneously. However, the EIC-based control has not been designed for highly underactuated balance robots. In [20], we showed that some of the unactuated coordinates were not able to display the designed dynamics, which resulted in unstable motion. Given its attractive features, the EIC-based control can potentially be revised or redesigned for highly underactuated balance robots. The main feature of the EIC-based control is to embed the balance control into the trajectory tracking design. The target profile of the unactuated subsystem is associated with the motion of the actuated subsystem, and this motion can be viewed as a control input that drives the unactuated subsystem to its BEM. Inspired by this observation, we propose a cascaded EIC form (i.e., CEIC) that reformulates the original highly underactuated balance system into a series of cascaded subsystems, which are virtually actuated through their interactions. Associated with each pair of consecutive subsystems is one set of coordinates, which accounts for the coupling and also serves as a virtual control input. We sequentially estimate the BEM and then update the control inputs of the subsystems sequentially. Each subsystem is shown to be under active control design, and trajectory tracking and balance control can be achieved. We illustrate and demonstrate the CEIC-based control through an example of a triple-inverted pendulum on a cart. The main contribution of this work is the proposed new cascaded control framework for highly underactuated balance robots. We also, for the first time, reveal the controllability condition of highly underactuated balance robots. The rest of the paper is outlined as follows. Section II presents the dynamics and the EIC-based control design. In Section III, we propose the cascaded EIC form. The CEIC-based control is presented in Section IV. We present the simulation results in Section V. Finally, Section VI provides concluding remarks. ## II Highly Underactuated Balance Robots In this section, we present the dynamics and the EIC-based control design for underactuated balance robots. ### _System Dynamics_ Let the generalized coordinates of underactuated balance robots be \(\mathbf{q}=[q_{1}\cdots q_{n+m}]^{T}\in\mathbb{R}^{n+m}\), \(n,m\in\mathbb{N}\). We partition \(\mathbf{q}\) into \(\mathbf{q}=[\mathbf{q}_{a}^{T}\ \mathbf{q}_{u}^{T}]^{T}\) with actuated coordinates \(\mathbf{q}_{a}\in\mathbb{R}^{n}\) and unactuated coordinates \(\mathbf{q}_{u}\in\mathbb{R}^{m}\).
The robot dynamics for the actuated and unactuated subsystems can be written as [21] \[\mathcal{S}_{a}:\mathbf{D}_{aa}\ddot{\mathbf{q}}_{a}+\mathbf{D}_{au}\ddot{\mathbf{q}}_{u}+\mathbf{C}_{a}\dot{\mathbf{q}}+\mathbf{G}_{a}=\mathbf{u}, \tag{1a}\] \[\mathcal{S}_{u}:\mathbf{D}_{ua}\ddot{\mathbf{q}}_{a}+\mathbf{D}_{uu}\ddot{\mathbf{q}}_{u}+\mathbf{C}_{u}\dot{\mathbf{q}}+\mathbf{G}_{u}=\mathbf{0}, \tag{1b}\] where \(\mathbf{D}(\mathbf{q})\), \(\mathbf{C}(\mathbf{q},\dot{\mathbf{q}})\) and \(\mathbf{G}(\mathbf{q})\) are the inertia, Coriolis and gravity matrices, respectively. The subscripts \(aa\) (\(uu\)) and \(ua\), \(au\) indicate the quantities related to the actuated (unactuated) coordinates and the coupling effects, respectively. For convenience of presentation, the dependence of the matrices \(\mathbf{D}\), \(\mathbf{C}\), and \(\mathbf{G}\) on \(\mathbf{q}\) and \(\dot{\mathbf{q}}\) is dropped. We denote \(\mathbf{H}_{a}=\mathbf{C}_{a}\dot{\mathbf{q}}+\mathbf{G}_{a}\) and \(\mathbf{H}_{u}=\mathbf{C}_{u}\dot{\mathbf{q}}+\mathbf{G}_{u}\). In general, the dynamics of the unactuated subsystem \(\mathcal{S}_{u}\) in (1b) are intrinsically unstable. Most previous works focus on \(\mathcal{S}=\{\mathcal{S}_{a},\mathcal{S}_{u}\}\) with the property \(n\geq m\), that is, more actuated DOFs than unactuated ones. In this work, we consider highly underactuated balance robots, i.e., \(n<m\). With far less control actuation, simultaneous trajectory tracking and platform balance control design becomes challenging [17]. ### _EIC-Based Tracking Control_ We first present the EIC-based control and discuss its limitations for highly underactuated balance robot control. Given the desired trajectory of the actuated coordinates \(\mathbf{q}_{a}^{d}\), the goal of the robot control is to have the actuated subsystem \(\mathcal{S}_{a}\) follow \(\mathbf{q}_{a}^{d}\) while the unactuated, unstable subsystem \(\mathcal{S}_{u}\) is balanced around its unstable equilibrium, denoted by \(\mathbf{q}_{u}^{e}\). Note that the unstable equilibrium \(\mathbf{q}_{u}^{e}\) depends on the tracking performance of \(\mathcal{S}_{a}\) and its profile needs to be estimated in real time. Given \(\mathbf{q}_{a}^{d}\), we temporarily neglect the dynamics of \(\mathcal{S}_{u}\) and the control input for \(\mathcal{S}_{a}\) is designed using feedback linearization as \[\mathbf{u}_{a}^{\mathrm{ext}}=\mathbf{D}_{aa}\mathbf{v}_{a}^{\mathrm{ext}}+\mathbf{D}_{au}\ddot{\mathbf{q}}_{u}+\mathbf{H}_{a}, \tag{2}\] where \(\mathbf{v}_{a}^{\mathrm{ext}}=\ddot{\mathbf{q}}_{a}^{d}-\mathbf{k}_{p1}\mathbf{e}_{a}-\mathbf{k}_{d1}\dot{\mathbf{e}}_{a}\) is an auxiliary control input, \(\mathbf{e}_{a}=\mathbf{q}_{a}-\mathbf{q}_{a}^{d}\) is the tracking error, and \(\mathbf{k}_{p1},\mathbf{k}_{d1}\) are control gains. The \(\mathbf{q}_{u}\) coordinates should be stabilized onto the BEM. Given the control input \(\mathbf{u}_{a}^{\mathrm{ext}}\), the BEM is defined as the instantaneous equilibrium in terms of \(\mathbf{q}_{u}\) as \[\mathcal{E}=\left\{\mathbf{q}_{u}^{e}:\mathbf{\Gamma}(\mathbf{q}_{u}^{e};\mathbf{v}_{a}^{\mathrm{ext}})=\mathbf{0},\ \dot{\mathbf{q}}_{u}=\ddot{\mathbf{q}}_{u}=\mathbf{0}\right\}, \tag{3}\] where \(\mathbf{\Gamma}(\mathbf{q}_{u};\mathbf{v}_{a}^{\mathrm{ext}})=\mathbf{D}_{uu}\ddot{\mathbf{q}}_{u}+\mathbf{D}_{ua}\mathbf{v}_{a}^{\mathrm{ext}}+\mathbf{H}_{u}\). The equilibrium \(\mathbf{q}_{u}^{e}\) is obtained by inverting \(\mathbf{\Gamma}(\mathbf{q}_{u};\mathbf{v}_{a}^{\mathrm{ext}})\big|_{\dot{\mathbf{q}}_{u}=\ddot{\mathbf{q}}_{u}=\mathbf{0}}=\mathbf{0}\). Using the BEM \(\mathbf{q}_{u}^{e}\in\mathcal{E}\) as the target reference for \(\mathcal{S}_{u}\), we redesign the actuated-acceleration profile \(\ddot{\mathbf{q}}_{a}\) such that, under the updated \(\ddot{\mathbf{q}}_{a}\), \(\mathbf{q}_{u}\rightarrow\mathbf{q}_{u}^{e}\). The control is updated by incorporating the \(\mathcal{S}_{u}\) dynamics as \[\mathbf{v}_{a}^{\mathrm{int}}=-\mathbf{D}_{ua}^{+}(\mathbf{H}_{u}+\mathbf{D}_{uu}\mathbf{v}_{u}^{\mathrm{int}}), \tag{4}\] where \(\mathbf{v}_{u}^{\mathrm{int}}=\ddot{\mathbf{q}}_{u}^{e}-\mathbf{k}_{p2}\mathbf{e}_{u}-\mathbf{k}_{d2}\dot{\mathbf{e}}_{u}\), \(\mathbf{D}_{ua}^{+}=(\mathbf{D}_{ua}^{T}\mathbf{D}_{ua})^{-1}\mathbf{D}_{ua}^{T}\) is the generalized inverse of \(\mathbf{D}_{ua}\), \(\mathbf{e}_{u}=\mathbf{q}_{u}-\mathbf{q}_{u}^{e}\), and \(\mathbf{k}_{p2},\mathbf{k}_{d2}\) are control gains. With the design (4), the final control becomes \[\mathbf{u}_{a}^{\mathrm{int}}=\mathbf{D}_{aa}\mathbf{v}_{a}^{\mathrm{int}}+\mathbf{D}_{au}\ddot{\mathbf{q}}_{u}+\mathbf{H}_{a}. \tag{5}\] The above sequentially designed control, known as the EIC-based control, aims to achieve tracking of \(\mathcal{S}_{a}\) and balance of \(\mathcal{S}_{u}\) simultaneously [7]. Inserting the updated control design \(\mathbf{u}_{a}^{\mathrm{int}}\) into the system dynamics \(\mathcal{S}\), we obtain \[\ddot{\mathbf{q}}_{u}=-\mathbf{D}_{uu}^{-1}(\mathbf{D}_{ua}\ddot{\mathbf{q}}_{a}+\mathbf{H}_{u})=-\mathbf{D}_{uu}^{-1}\left[-\mathbf{D}_{ua}\mathbf{D}_{ua}^{+}(\mathbf{H}_{u}+\mathbf{D}_{uu}\mathbf{v}_{u}^{\mathrm{int}})+\mathbf{H}_{u}\right]. \tag{6}\] Since \(\mathbf{D}_{ua}\in\mathbb{R}^{m\times n}\) and \(n<m\), we have \(\mathbf{D}_{ua}\mathbf{D}_{ua}^{+}\in\mathbb{R}^{m\times m}\) and \(\mathrm{rank}(\mathbf{D}_{ua}\mathbf{D}_{ua}^{+})=n<m\). Therefore, part of the designed control effect \(\mathbf{v}_{u}^{\mathrm{int}}\) does not appear in the closed-loop dynamics and the nonlinear term \(\mathbf{H}_{u}\) cannot be fully canceled in all dimensions. The unactuated subsystem \(\mathcal{S}_{u}\) does not approach \(\mathcal{E}\) in \(\mathbb{R}^{m}\) as designed and balance is not guaranteed for highly underactuated balance robots. ## III Cascaded EIC Form For Highly Underactuated System The enhanced EIC-based control has been successfully demonstrated for underactuated balance robots with \(n\geq m\) [20]. If the system \(\mathcal{S}\) with \(n<m\) can be transformed virtually into a series of subsystems, each with more actuated coordinates, we can still achieve guaranteed performance. We note that \(\ddot{\mathbf{q}}_{a}\) is used as a virtual control input when incorporating the balance control \(\mathbf{v}_{u}^{\mathrm{int}}\) into the control design (see (4)). However, the \(\mathcal{S}_{u}\) dynamics with respect to \(\ddot{\mathbf{q}}_{a}\) is another underactuated system with \(m\) coordinates. For such an underactuated subsystem, we can apply the EIC-based control to \(\mathcal{S}_{u}\) again. Following this inspiration, we formally present our design below. The \(\mathcal{S}_{a}\) dynamics under the control \(\mathbf{u}\) can be solved as \[\ddot{\mathbf{q}}_{a}=\mathbf{D}_{aa}^{-1}\left(\mathbf{u}-\mathbf{D}_{au}\ddot{\mathbf{q}}_{u}-\mathbf{H}_{a}\right).
\tag{7}\] Substituting (7) into \(\mathcal{S}_{u}\) dynamics yields \[\mathcal{S}^{1}:\mathbf{D}^{(1)}\ddot{\mathbf{q}}^{(1)}+\mathbf{H}^{(1)}=\mathbf{B}^{(1)}\mathbf{u}, \tag{8}\] where \(\mathbf{q}^{(1)}=\mathbf{q}_{u}\) and \(\mathbf{D}^{(1)}=\mathbf{D}_{uu}-\mathbf{D}_{ua}\mathbf{D}_{aa}^{-1}\mathbf{D}_{au}\), \(\mathbf{H}^{(1)}=\mathbf{H}_{u}-\mathbf{D}_{ua}\mathbf{D}_{aa}^{-1}\mathbf{H}_{a}\), \(\mathbf{B}^{(1)}=-\mathbf{D}_{ua}\mathbf{D}_{aa}^{-1}\). We note \(\mathbf{D}_{ua}\in\mathbb{R}^{m\times n}\) and \(\mathbf{B}^{(1)}\in\mathbb{R}^{m\times n}\). Equation (8) represents another underactuated balance system with \(m\) generalized coordinates and \(n\) control inputs. We partition the \(\mathbf{q}^{(1)}\) coordinates into two parts as \[\mathbf{q}^{(1)}=\left[(\mathbf{q}_{a}^{(1)})^{T}\ (\mathbf{q}_{u}^{(1)})^{T}\right]^{T},\] where \(\mathbf{q}_{a}^{(1)}\) denotes the first \(n\) unactuated coordinates, such that \(\dim(\mathbf{q}_{a}^{(1)})=n\), \(\dim(\mathbf{q}_{u}^{(1)})=m-n\). Then we rewrite the \(\mathcal{S}^{1}\) dynamics \[\mathcal{S}_{a}^{1}:\mathbf{D}_{aa}^{(1)}\bar{\mathbf{q}}_{a}^{(1)}+\mathbf{D }_{aa}^{(1)}\bar{\mathbf{q}}_{a}^{(1)}+\mathbf{H}_{a}^{(1)}=\mathbf{B}_{a}^{(1)}\mathbf{u}, \tag{9a}\] \[\mathcal{S}_{u}^{1}:\mathbf{D}_{ua}^{(1)}\bar{\mathbf{q}}_{a}^{(1)}+\mathbf{D }_{uu}^{(1)}\bar{\mathbf{q}}_{u}^{(1)}+\mathbf{H}_{u}^{(1)}=\mathbf{B}_{u}^{(1)}\mathbf{u}, \tag{9b}\] where matrix \(\mathbf{D}^{(1)}\), \(\mathbf{H}^{(1)}\) and \(\mathbf{B}^{(1)}\) are block matrixes in proper order. Clearly, (9) is in the form of an underactuated robot model, similar to (1). We note that the input matrix \(\mathbf{B}^{(1)}\) in \(\mathcal{S}^{1}\) is no longer a constant. Namely, the selection of \(\dim(\mathbf{u})=n\) generalized coordinates as the actuated ones out of \(\mathbf{q}^{(1)}\) is arbitrary, as long as \(\mathrm{rank}(\mathbf{B}_{a}^{(1)})=n\). From \(\mathcal{S}_{a}^{1}\) dynamics we can solve the \(\mathbf{q}_{a}^{(1)}\) dynamics as \(\check{\mathbf{q}}_{a}^{(1)}=\mathbf{D}_{aa}^{(1)}\left(\mathbf{B}_{a}^{(1)}\mathbf{u}-\mathbf{D} _{aa}^{(1)}\check{\mathbf{q}}_{u}^{(1)}-\mathbf{H}_{a}^{(1)}\right)\). Inserting \(\check{\mathbf{q}}_{a}^{(1)}\) into \(\mathcal{S}_{u}^{1}\), we obtain \[\mathcal{S}^{2}:\mathbf{D}^{(2)}\check{\mathbf{q}}^{(2)}+\mathbf{H}^{(2)}=\mathbf{B}^{(2)}\mathbf{u},\] where \(\mathbf{q}^{(2)}=\mathbf{q}_{u}^{(1)}\) and \(\mathbf{D}^{(2)}=\mathbf{D}_{uu}^{(1)}-\mathbf{D}_{ua}^{(1)}\left(\mathbf{D}_{aa}^{(1)}\right) ^{-1}\mathbf{D}_{au}^{(1)}\), \(\mathbf{H}^{(2)}=\mathbf{H}_{u}^{(1)}-\mathbf{D}_{ua}^{(1)}\left(\mathbf{D}_{aa}^{(1)}\right) ^{-1}\mathbf{H}_{a}^{(1)}\), and \(\mathbf{B}^{(2)}=\mathbf{B}_{u}^{(1)}-\mathbf{D}_{ua}^{(1)}\left(\mathbf{D}_{aa}^{(1)}\right) ^{-1}\mathbf{B}_{a}^{(1)}\). If \(\dim(\mathbf{q}_{u}^{(2)})>\dim(\mathbf{u})\), \(\mathbf{B}^{(2)}\in\mathbb{R}^{(m-n)\times n}\) and \(\mathcal{S}^{2}\) is also an underactuated balance system. We can continue to perform such a transformation. We assume that there are in total \(k\) actuated subsystems (each contains \(n\) coordinates) and \((k+1)\)-th subsystem is fully actuated (contain last \(z\) coordinates, i.e., \(m=kn+z\)). The \(\mathcal{S}_{a}^{i}\) dynamics only contains the first \(n\) coordinates. \(\mathcal{S}_{u}^{i}\) dynamics (containing the rest of coordinates) is used to obtain \(\mathcal{S}^{i+1}\). Hence, \(\mathcal{S}^{i}=\{\mathcal{S}_{a}^{i},\mathcal{S}^{i+1}\}\) holds. 
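The repeated elimination that produces \(\mathcal{S}^{1},\mathcal{S}^{2},\dots\) can be written compactly as a Schur-complement-type update of \((\mathbf{D},\mathbf{H},\mathbf{B})\). The sketch below is a minimal NumPy illustration of these block formulas, assuming the full-system input matrix is \(\mathbf{B}=[\mathbf{I}\ \ \mathbf{0}]^{T}\); the toy inertia matrix and the stopping rule (stop once the remaining block has at most \(n\) coordinates) are ours and not the authors' implementation.

```python
import numpy as np

def reduce_once(D, H, B, n):
    """One step of the cascaded reduction: eliminate the first n ('actuated')
    coordinates of the current subsystem and return (D, H, B) of the next one."""
    D_aa, D_au = D[:n, :n], D[:n, n:]
    D_ua, D_uu = D[n:, :n], D[n:, n:]
    H_a, H_u = H[:n], H[n:]
    B_a, B_u = B[:n, :], B[n:, :]
    S = D_ua @ np.linalg.inv(D_aa)          # elimination factor D_ua D_aa^{-1}
    return D_uu - S @ D_au, H_u - S @ H_a, B_u - S @ B_a

def cascade(D, H, n, n_inputs):
    """Build the sequence S^1, S^2, ... for a system with n actuated coordinates
    and n_inputs = n control inputs (B of the full system taken as [I; 0])."""
    m_total = D.shape[0]
    B = np.vstack([np.eye(n_inputs), np.zeros((m_total - n_inputs, n_inputs))])
    systems = []
    while D.shape[0] > n:                    # stop when the remaining block is 'fully actuated'
        D, H, B = reduce_once(D, H, B, n)
        systems.append((D.copy(), H.copy(), B.copy()))
    return systems

# toy usage: 1 input, 4 coordinates (e.g. cart + 3 links) -> S^1, S^2, S^3
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)); D0 = M @ M.T + 4 * np.eye(4)   # SPD inertia matrix
H0 = rng.standard_normal(4)
for i, (Di, Hi, Bi) in enumerate(cascade(D0, H0, n=1, n_inputs=1), start=1):
    print(f"S^{i}: D {Di.shape}, B {Bi.shape}")
```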
Recursively, the \(\mathcal{S}^{i}\) dynamics is written as \[\mathcal{S}_{a}^{i}:\mathbf{D}_{aa}^{(i)}\mathbf{q}_{a}^{(i)}+\mathbf{D}_{au} ^{(i)}\mathbf{q}_{a}^{(i)}+\mathbf{H}_{a}^{(i)}=\mathbf{B}_{a}^{(i)}\mathbf{u},\] \[\mathcal{S}_{a}^{i}=\left\{\mathcal{S}_{a}^{i+1},...,\mathcal{S}_{ a}^{k},\mathcal{S}^{k+1}\right\},\] where \(\mathbf{q}_{u}^{(i)}\) is composed by \(\mathbf{q}_{a}^{(i+1)},\cdots\mathbf{q}_{a}^{(k)},\mathbf{q}^{(k+1)}\). The coupling in \(\mathcal{S}^{k+1}\) and \(\mathcal{S}_{a}^{k}\) shows up only in \(\mathbf{q}^{(k+1)}\) virtually. The original system \(\mathcal{S}\) then can be rewritten into a series of cascaded subsystems as \[\mathcal{S}\equiv\{\mathcal{S}_{a}^{0},\mathcal{S}_{a}^{1},\underbrace{ \mathcal{S}_{a}^{1},\underbrace{\mathcal{S}_{a}^{k},\mathcal{S}^{k+1}}_{ \mathcal{S}^{k}}}_{\mathcal{S}^{1}}\} \tag{10}\] where \(\mathcal{S}_{a}^{k+1}=\mathcal{S}^{k+1}=\mathcal{S}_{u}^{k}\). The BEM can still be used to characterize the balance target profile of each sub-order underactuated system. Given the control input \(\mathbf{u}\), the BEM for the underactuated system \(\mathcal{S}^{i}\) is obtained by using its unaccentuated subsystem. The BEM \(\mathcal{E}_{i}\) is defined \[\mathcal{E}_{i}=\left\{\mathbf{q}_{u}^{(i+1),e}:\mathbf{\Gamma}_{i+1}\left(\mathbf{q}_{a}^{ (i+1)};\mathbf{u}\right)=\mathbf{0},\check{\mathbf{q}}_{a}^{(i+1)},\check{\mathbf{q}}_{a}^{(i+ 1)}=\mathbf{0}\right\}, \tag{11}\] where \(\mathbf{\Gamma}_{i+1}\) is obtained by using the dynamics of \(\mathcal{S}_{a}^{i+1}\) \[\mathbf{\Gamma}_{i+1}=\mathbf{D}_{aa}^{(i+1)}\check{\mathbf{q}}_{a}^{(i+1)}+\mathbf{D}_{au}^{(i+ 1)}\bar{\mathbf{q}}_{u}^{(i+1)}+\mathbf{H}_{a}^{(i+1)}-\mathbf{B}_{a}^{(i+1)}\mathbf{u}.\] Clearly, \(\mathcal{E}_{i}\) follows the BEM definition but only accounts for \(\mathbf{q}_{a}^{(i+1)}\) (i.e., the \(n-m\) coordinates in \(\mathbf{q}_{u}^{(i)}\)). While the rest of the unactuated coordinates \(\mathbf{q}_{u}^{(i)}\) is untouched. ## IV Cascaded Tracking Control Design Based on the CEIC form, in this section we design the control input and show the stability of the closed-loop dynamics. ### _Control Design_ Starting from \(\mathcal{S}_{a}^{0}\) (the actuated subsystem of \(\mathcal{S}\)), we sequentially design the control input and obtain the corresponding BEM. The control input to drive \(\mathbf{q}_{a}^{(0)}\rightarrow\mathbf{q}_{a}^{(0),d}\) can be designed using the feedback linearization technique as \[\mathbf{u}_{0}^{\mathrm{ext}}=\left(\mathbf{B}_{a}^{(0)}\right)^{-1}\left(\mathbf{D}_{aa}^{(0) }\mathbf{v}_{0}^{\mathrm{ext}}+\mathbf{D}_{au}^{(0)}\check{\mathbf{q}}_{u}^{(0)}+\mathbf{H}_{a}^ {(0)}\right), \tag{12}\] where \(\mathbf{v}_{0}^{\mathrm{ext}}=\check{\mathbf{q}}_{a}^{(0),d}-\mathbf{a}_{0}\mathbf{e}_{0}-\mathbf{b}_ {0}\hat{\mathbf{e}}_{0}\), \(\hat{\mathbf{e}}_{0}=\mathbf{q}_{a}^{(0)}-\mathbf{q}_{a}^{(0),d}\) is the tracking error, and \(\mathbf{a}_{0},\mathbf{b}_{0}\) are control gains. The design of \(\mathbf{u}_{0}^{\mathrm{ext}}\) follows the same idea as shown in (2) regardless of the numbers of unactuated coordinates. Now let's consider the general case. If the control input for \(\mathcal{S}^{i}\) is known, denoted as \(\mathbf{u}_{i}^{\mathrm{ext}}\), we need to design the control for \(\mathcal{S}^{i+1} here \(\mathbf{a}_{i+1},\mathbf{b}_{i+1}\in\mathbb{R}^{z\times z}\). \(\mathbf{v}_{k+1}^{\mathrm{int}}\) is the auxiliary control design that drives \(\mathbf{q}^{(k+1)}\) to \(\mathbf{q}^{(k+1),e}\). 
The ultimate internal state of the original system becomes \(\mathbf{q}^{(k+1)}\) and \(\mathcal{S}^{k}\) is the simplest sub-order underactuated system with the property \(\dim(\mathbf{q}_{a}^{(k)})\geq\dim(\mathbf{q}_{u}^{(k)})\). Given the balance control \(\mathbf{u}_{k+1}^{\mathrm{int}}\), incorporating the balance control of \(\mathbf{q}^{(k+1)}\) can be achieved by the EIC-based controller. We redesign the control input so that the virtually "actuated" coordinates (\(\mathbf{q}_{a}^{(k)}\)) will drive \(\mathbf{q}_{u}^{(k)}\) to \(\mathcal{E}_{k}\). Inserting \(\mathbf{u}_{k+1}^{\mathrm{int}}\) and \(\mathbf{v}_{k+1}^{\mathrm{int}}\) into \(\mathcal{S}_{a}^{k}\) dynamics leads to \[\mathbf{D}_{aa}^{(k)}\tilde{\mathbf{q}}_{a}^{(k)}+\mathbf{D}_{au}^{(k)}\mathbf{v}_{k+1}^{ \mathrm{int}}+\mathbf{H}_{a}^{(k)}=\mathbf{B}_{a}^{(k)}\mathbf{u}_{k+1}^{\mathrm{int}}. \tag{14}\] Clearly, in order to achieve \(\mathbf{q}_{u}^{(k)}=\mathbf{v}_{k+1}^{\mathrm{int}}\), we need to revise \(\mathbf{q}_{a}^{(k)}\) dynamics (i.e., \(\mathcal{S}_{a}^{k}\)), which is realized by redesigning the control input \[\mathbf{u}_{k}^{\mathrm{int}} =\left(\mathbf{B}_{a}^{(k)}\right)^{-1}\left(\mathbf{D}_{aa}^{(k)}\mathbf{v}_ {k}^{\mathrm{int}}+\mathbf{D}_{au,k+1}^{(k)}\mathbf{q}_{u}^{(i)}+\mathbf{H}_{a}^{(k)} \right), \tag{15a}\] \[\mathbf{v}_{k}^{\mathrm{int}} =\left(\mathbf{D}_{aa}^{(k)}\right)^{-1}\left(\mathbf{B}_{a}^{(k)}\mathbf{u}_ {k+1}^{\mathrm{int}}-\mathbf{D}_{au,k}^{(k)}\mathbf{v}_{k+1}^{\mathrm{int}}-\mathbf{H}_{a} ^{(k)}\right). \tag{15b}\] It is easy to verify \(\mathbf{q}_{u}^{(k)}=\mathbf{v}_{k+1}^{\mathrm{int}}\) by replacing the controls (14) with those in (15). The control updating for \(\mathbf{u}_{k}^{\mathrm{int}}\) follows a similar idea in (4). Under \(\mathbf{u}_{k}^{\mathrm{int}}\), the balance of \(\mathbf{q}^{(k)}\) is guaranteed. For \(\mathcal{S}^{i}\), \(\mathbf{u}_{i}^{\mathrm{int}}\) is obtained by replacing \(k\) with \(i\) in (15). In particular, the \(\mathbf{v}_{i}^{\mathrm{int}}\) is designed to update the virtually "actuated" coordinate \(\mathbf{q}_{a}^{(i+1)}\) dynamics so that it drives \(\mathbf{q}_{a}^{(i+1)}\) to BEM. The control \(\mathbf{v}_{i}^{\mathrm{int}}\) is \[\mathbf{v}_{i}^{\mathrm{int}}= \left(\mathbf{D}_{aa}^{(i)}\right)^{-1}\left(\mathbf{B}_{a}^{(i)}\mathbf{u}_{ i+1}^{\mathrm{int}}-\mathbf{D}_{au,i+1}^{(i)}\mathbf{v}_{i+1}^{\mathrm{int}}\right.\] \[\left.-\sum\nolimits_{j=i+1}^{k}\mathbf{D}_{au,j+1}^{(i)}\tilde{\mathbf{q }}_{a}^{(j+1)}-\mathbf{H}_{a}^{(i)}\right). \tag{16}\] We only consider \(\mathbf{v}_{i+1}^{\mathrm{int}}\) (the first \(n-m\) unactuated coordinates of \(\mathcal{S}^{i}\)) in updating the motion of virtually actuated coordinates. We denote the final control as \(\mathbf{u}_{0}^{\mathrm{int}}\). The diagram in Fig. 1 shows the structure of the proposed control design. We sequentially decompose the system \(\mathcal{S}^{i}\) and design control for actuated subsystem. When updating the control input, the \(\mathcal{S}^{i+1}\) dynamics is recognized as the internal subsystem of \(\mathcal{S}^{i}\) as shown in Fig. 1(a). However, in EIC-based control, the BEM is solved at once and the updated control needs to take of all unactuated coordinates (see Fig. 1(b)). ### _Stability Analysis_ Firstly, we show that all coordinates of \(\mathcal{S}^{i}\) under the control design \(\mathbf{u}_{i}^{\mathrm{int}}\) are under active control. Secondly, the convergence of the tracking error for \(\mathcal{S}^{i}\) is proved. 
**Lemma 1**: _Given the highly underactuated balance system \(\mathcal{S}\), if \(\mathcal{S}\) can be written into the CEIC form (10), under the control input \(\mathbf{u}_{i}^{\mathrm{int}}\), the closed-loop dynamics of \(\mathcal{S}^{i}\) becomes_ \[\tilde{\mathbf{q}}_{a}^{(j)} =\mathbf{v}_{j}^{\mathrm{int}},\ i\leq j\leq k,\] \[\tilde{\mathbf{q}}^{(k+1)} =\mathbf{v}_{k+1}^{\mathrm{int}}.\] The proof can be found in Appendix I. The primary concern when applying EIC-based control to \(\mathcal{S}\) is that certain coordinates would not display desired dynamics as shown in (6). The result in Lemma 1 indicates that each sub-order underactuated system is under active control design. Meanwhile, the constant input matrix assumption is no longer needed. Next, we show that \(\mathbf{q}\) converge to \(\{\mathcal{E}_{i},...,\mathcal{E}_{k+1}\}\) (\(\mathbf{q}_{a}^{d}\) can be viewed as \(\mathcal{E}_{0}\)). Based on the results in Lemma 1, the closed-loop dynamics of \(\mathcal{S}^{k+1}\) under the control design \(\mathbf{u}_{i}^{\mathrm{int}}\) becomes \[\tilde{\mathbf{q}}^{(k+1)}=\mathbf{v}_{k+1}^{\mathrm{int}}=\tilde{\mathbf{q}}^{(k+1),e}-\mathbf{ a}_{k+1}\mathbf{e}_{k+1}-\mathbf{b}_{k+1}\hat{\mathbf{e}}_{k+1}.\] The \(\mathcal{S}^{k+1}\) dynamics clearly is exponentially stable, if \(\mathbf{a}_{k+1}\) and \(\mathbf{b}_{k+1}\) are selected properly. The preliminary control design \(\mathbf{u}_{i}^{\mathrm{ext}}\) is used to obtain \(\mathcal{E}_{i}\). \(\mathbf{\Gamma}_{i+1}=\mathbf{0}\) can be explicitly written as \[\mathbf{D}_{au}^{(i+1)}\tilde{\mathbf{q}}_{a}^{(i+1)}+\mathbf{H}_{a}^{(i+1)}-\mathbf{D}_{a}^{(i+ 1)}\mathbf{u}_{i}^{\mathrm{ext}}=\mathbf{0} \tag{17}\] under \(\mathbf{q}_{a}^{(i+1)}=\mathbf{q}_{a}^{(i+1),e}\) and \(\tilde{\mathbf{q}}_{a}^{(i+1)}=\tilde{\mathbf{q}}_{a}^{(i+1)}=\mathbf{0}\). The above relationship (17) shall play a significant role in showing the convergence of \(\mathbf{q}_{a}^{(i)}\). The control input \(\mathbf{u}_{i+1}^{\mathrm{int}}\) is used to update \(\mathbf{u}_{i}^{\mathrm{int}}\). We rewrite \(\mathbf{u}_{i+1}^{\mathrm{int}}\) around \(\mathbf{q}_{a}^{(i+1)}=\mathbf{q}_{a}^{(i+1),e},\tilde{\mathbf{q}}_{a}^{(i+1)}=\tilde{\mathbf{q}}_{a }^{(i+1)}=\mathbf{0}\), \[\mathbf{u}_{i+1}^{\mathrm{int}} =\left(\mathbf{B}_{a}^{(i+1)}\right)^{-1}\left(\mathbf{D}_{au}^{(i+1)} \tilde{\mathbf{q}}_{a}^{(i+1)}+\mathbf{H}_{a}^{(i+1)}\right)\big{|}_{\mathbf{x}_{q}^{(i+1),e} }+\mathbf{o}_{i}\] \[=\left(\mathbf{B}_{a}^{(i+1)}\right)^{-1}\mathbf{B}_{a}^{(i+1)}\mathbf{u}_{i}^ {\mathrm{ext}}\big{|}_{\mathbf{x}_{q}^{(i+1),e}}+\mathbf{o}_{i}\] \[=\mathbf{u}_{i}^{\mathrm{ext}}+\mathbf{o}_{i}. \tag{18}\] Fig. 1: Illustrative diagram of control design for \(\mathcal{S}\). (a) CEIC-based control design. (b) EIC-based control design. where (17) is used to simplify the above equation. \(\mathbf{x}_{q}^{(i+1),e}=\{\mathbf{q}_{a}^{(i+1),e}\ \mathbf{0},\mathbf{0}\}\) and \(\mathbf{o}_{i}\) denotes perturbations including the higher order term and \(\left(\mathbf{B}_{a}^{(i+1)}\right)^{-1}\mathbf{D}_{aa}^{(i+1)}\mathbf{v}_{i+1}^{\text{ext}}\). To proceed, substituting (18) into \(\ddot{\mathbf{q}}_{a}^{(i)}=\mathbf{v}_{i}^{\text{int}}\) and using Lemma 1 yields \(\ddot{\mathbf{q}}_{a}^{(i)}=\mathbf{v}_{i}^{\text{int}}=\mathbf{v}_{i}^{\text{ext}}+\mathbf{O} _{i}\), where \(\mathbf{O}_{i}=\left(\mathbf{D}_{aa}^{(i)}\right)^{-1}\mathbf{B}_{a}^{(i)}\mathbf{o}_{i}\). 
The closed-loop dynamics become \[\ddot{\mathbf{e}}_{i}=-\mathbf{a}_{i}\mathbf{e}_{i}-\mathbf{b}_{i}\dot{\mathbf{e}}_{i}+\mathbf{O}_{i},\ i\leq k, \tag{19a}\] \[\ddot{\mathbf{e}}_{k+1}=-\mathbf{a}_{k+1}\mathbf{e}_{k+1}-\mathbf{b}_{k+1}\dot{\mathbf{e}}_{k+1}. \tag{19b}\] Let \(\mathbf{\xi}=[\mathbf{e}_{0}^{T}\ \dot{\mathbf{e}}_{0}^{T}\ \cdots\ \mathbf{e}_{k+1}^{T}\ \dot{\mathbf{e}}_{k+1}^{T}]^{T}\) be the error vector. We rewrite the error dynamics into the following compact form \[\mathcal{S}_{\epsilon}:\dot{\mathbf{\xi}}=\begin{bmatrix}\mathbf{0}&\mathbf{I}&\cdots&\mathbf{0}&\mathbf{0}\\ -\mathbf{a}_{0}&-\mathbf{b}_{0}&\cdots&\mathbf{0}&\mathbf{0}\\ &&\ddots&&\\ \mathbf{0}&\mathbf{0}&\cdots&\mathbf{0}&\mathbf{I}\\ \mathbf{0}&\mathbf{0}&\cdots&-\mathbf{a}_{k+1}&-\mathbf{b}_{k+1}\end{bmatrix}\mathbf{\xi}+\begin{bmatrix}\mathbf{0}\\ \mathbf{O}_{0}\\ \vdots\\ \mathbf{0}\\ \mathbf{0}\end{bmatrix}\triangleq\mathbf{A}\mathbf{\xi}+\mathbf{O}_{\xi}. \tag{20}\] If the gains \(\{\mathbf{a}_{j},\mathbf{b}_{j}\},j=i,...,k+1\), are properly selected such that \(\mathbf{A}\) is Hurwitz, \(\mathbf{\xi}\) can be shown to converge to zero under perturbations. Assume that the perturbation term is affine in the tracking error, i.e., \(\left\|\mathbf{O}_{\xi}\right\|\leq c_{1}\left\|\mathbf{\xi}\right\|+c_{2}\) for \(c_{1}\) and \(c_{2}>0\). We take the Lyapunov function candidate \(V=\frac{1}{2}\mathbf{\xi}^{T}\mathbf{\xi}\). It is straightforward to show that \[\dot{V}=\mathbf{\xi}^{T}\mathbf{A}\mathbf{\xi}+\mathbf{\xi}^{T}\mathbf{O}_{\mathbf{\xi}}\leq\lambda_{1}(\mathbf{A})\left\|\mathbf{\xi}\right\|^{2}+\left\|\mathbf{\xi}\right\|(c_{1}\left\|\mathbf{\xi}\right\|+c_{2})=\left[\lambda_{1}(\mathbf{A})+c_{1}\right]\left\|\mathbf{\xi}\right\|^{2}+c_{2}\left\|\mathbf{\xi}\right\|,\] where \(\lambda_{1}(\mathbf{A})\) denotes the greatest eigenvalue of \(\mathbf{A}\). If \(\lambda_{1}(\mathbf{A})+c_{1}<0\), the tracking error decreases exponentially under the perturbation. The control design is based on the CEIC form and thus the system dynamics should satisfy certain conditions, which we summarize here: * full rank of the relevant blocks for each sub-order underactuated system, i.e., \(\operatorname{rank}(\mathbf{D}_{aa}^{(i)})=\operatorname{rank}(\mathbf{D}_{au}^{(i)})=\operatorname{rank}(\mathbf{B}_{a}^{(i)})=n\), \(i\leq k\), and \(\operatorname{rank}(\mathbf{D}_{aa}^{(k+1)})=\operatorname{rank}(\mathbf{D}_{au}^{(k+1)})=\operatorname{rank}(\mathbf{B}_{a}^{(k+1)})=z\); * \(\mathbf{D}_{aa}^{(i+1)}-\mathbf{B}_{a}^{(i+1)}\left(\mathbf{B}_{a}^{(i)}\right)^{-1}\mathbf{D}_{au,i+1}^{(i)}\neq\mathbf{0}\) to guarantee that each actuated subsystem can display the designed dynamics. ## V Results We present the simulation results to demonstrate and validate the proposed control design in this section. Fig. 2 shows a triple-inverted pendulum system on a moving cart. With four DOFs, only the cart is actuated and moves left/right to follow the given reference trajectory while keeping the inverted pendulum links balanced around the vertical position.
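As a side note on the stability condition above, the Hurwitz property of the matrix \(\mathbf{A}\) in (20) can be checked numerically once the gain pairs \((\mathbf{a}_{j},\mathbf{b}_{j})\) are chosen. The sketch below assembles \(\mathbf{A}\) block by block and inspects its spectrum; the scalar gains are purely illustrative and are not the gains used in the reported simulations.

```python
import numpy as np
from scipy.linalg import block_diag

def error_dynamics_matrix(gains):
    """Assemble the block matrix A of the cascaded error dynamics (20).
    `gains` is a list of (a_i, b_i) square gain matrices, one pair per subsystem."""
    blocks = []
    for a, b in gains:
        n = a.shape[0]
        blocks.append(np.block([[np.zeros((n, n)), np.eye(n)], [-a, -b]]))
    return block_diag(*blocks)

def is_hurwitz(A, margin=0.0):
    # all eigenvalues strictly in the open left half-plane
    return np.max(np.linalg.eigvals(A).real) < -margin

# toy usage: scalar gains for the cart, theta1, theta2, theta3 error subsystems
gains = [(np.array([[4.0]]), np.array([[3.0]]))] * 4
A = error_dynamics_matrix(gains)
print(A.shape, is_hurwitz(A))   # (8, 8) True
```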
The dynamics model of a cart-triple inverted pendulum system [14] can be written into the form of \(\mathcal{S}\) with \[\mathbf{D} =\begin{bmatrix}M_{t}&-M_{1}\,\mathrm{c}_{1}&-M_{2}\,\mathrm{c}_ {2}&-M_{3}\,\mathrm{c}_{3}\\ -M_{1}\,\mathrm{c}_{1}&I_{1}&M_{2}l_{1}\,\mathrm{c}_{21}&M_{3}l_{1}\,\mathrm{c }_{31}\\ -M_{2}\,\mathrm{c}_{2}&M_{2}l_{1}\,\mathrm{c}_{21}&I_{2}&M_{3}l_{2}\,\mathrm{c}_ {32}\\ -M_{3}\,\mathrm{c}_{3}&M_{3}l_{1}\,\mathrm{c}_{31}&M_{3}l_{2}\,\mathrm{c}_{32}&I_ {3}\end{bmatrix},\] \[\mathbf{C} =\begin{bmatrix}0&M_{1}\dot{\theta}_{1}\,\mathrm{s}_{1}&M_{2} \dot{\theta}_{2}\,\mathrm{s}_{2}&M_{3}\dot{\theta}_{3}\,\mathrm{s}_{3}\\ 0&0&-M_{2}l_{1}\dot{\theta}_{2}\,\mathrm{s}_{21}&-M_{3}l_{1}\dot{\theta}_{3}\, \mathrm{s}_{31}\\ 0&M_{2}l_{1}\dot{\theta}_{1}\,\mathrm{s}_{31}&0&-M_{3}l_{2}\dot{\theta}_{3}\, \mathrm{s}_{32}\\ 0&M_{3}l_{1}\dot{\theta}_{1}\,\mathrm{s}_{31}&M_{3}l_{2}\dot{\theta}_{2}\, \mathrm{s}_{32}&0\end{bmatrix},\] \[\mathbf{G} =\begin{bmatrix}0\\ -M_{1}g\,\mathrm{s}_{1}\\ -M_{2}g\,\mathrm{s}_{2}\\ -M_{3}g\,\mathrm{s}_{3}\end{bmatrix},\ \mathbf{B}=\begin{bmatrix}1\\ 0\\ 0\end{bmatrix},\] where \(\mathrm{s}_{i}=\sin\theta_{i}\), \(\mathrm{c}_{i}=\cos\theta_{i}\), \(\mathrm{s}_{ij}=\sin(\theta_{i}-\theta_{j})\), and \(\mathrm{c}_{ij}=\cos(\theta_{i}-\theta_{j})\). The variables are defined as \(M_{t}=m_{c}+m_{1}+m_{2}+m_{3}\), \(M_{1}=m_{1}a_{1}+(m_{2}+m_{3})l_{1}\), \(M_{2}=m_{2}a_{2}+m_{3}l_{2}\), \(M_{3}=m_{3}a_{3}\), \(I_{1}=J_{1}+m_{1}a_{1}^{2}+(m_{2}+m_{3})l_{1}^{2}\), \(I_{2}=J_{2}+m_{2}a_{2}^{2}+m_{3}l_{2}^{2}\), \(I_{3}=J_{3}+m_{3}a_{3}^{2}\). The length and distance from the joint to COM of each link are \(l_{i}\) and \(a_{i}\) respectively. The mass and the moment of inertia of each link are \(m_{i}\) and \(J_{i}\). The gravity constant is \(g\). Let \(q_{a}^{(1)}=x,q_{a}^{(2)}=\theta_{1},q_{a}^{(3)}=\theta_{2},q^{(4)}=\theta_{3}\), we rewrite the system dynamics into the CIEC form. In particular, the \(\mathcal{S}_{a}^{1}\) dynamics is explicitly given as \[\mathcal{S}_{a}^{1} :\ \left(J_{1}M_{t}-M_{1}^{2}\,\mathrm{c}_{1}^{2}\right)\ddot{ \theta}_{1}+M_{2}\left(\mathrm{c}_{21}\,l_{1}M_{t}-\mathrm{c}_{2}\,\mathrm{c}_{1} \,M_{1}\right)\ddot{\theta}_{2}\] \[\ \ +M_{3}\,\mathrm{(c}_{32}\,l_{2}M_{t}-\mathrm{c}_{3}\,\mathrm{c}_{ 1}\,M_{1}\right)\ddot{\theta}_{3}\] \[\ \ -\left(M_{2}l_{1}\dot{\theta}_{2}^{2}\,\mathrm{s}_{12}+M_{3}l_{1} \dot{\theta}_{3}^{2}\,\mathrm{s}_{32}+M_{1}g\,\mathrm{s}_{1}\right)M_{t}\] \[\ \ +M_{1}\,\mathrm{c}_{1}\left(M_{1}\dot{\theta}_{1}^{2}\,\mathrm{s }_{1}+M_{2}\dot{\theta}_{2}^{2}\,\mathrm{s}_{2}-M_{3}\dot{\theta}_ under EIC-based control and the proposed control design. Under the CIEC-based control design, the cart follows the given reference trajectory, and all three unactuated links were kept balanced on the BEM as shown in Fig. 3(a) and Fig. 3(b). While the system becomes unstable (see Fig. 3(e) and Fig. 3(f)) when the EIC-based control is applied, which validates the analysis in Section II. In EIC-based control, the cart position coordinates carry the task of balancing all three links. While the CIEC-based control only assigns the task of balance link \(\theta_{1}\) to the cart. The tracking errors are shown in Fig. 3(c) and Fig. 3(d). We further summarize the steady tracking error in Table. I (mean and standard deviation). The relative error is obtained by normalizing the tracking error with the reference' (or BEM profile) amplitude. 
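For reference, the inertia and gravity terms listed at the beginning of this section can be assembled directly from \((\theta_{1},\theta_{2},\theta_{3})\); a minimal sketch is given below, with parameter values that are purely illustrative (not those of the simulated model). The printed check confirms that \(\mathbf{D}(\mathbf{q})\) is symmetric, as expected for an inertia matrix.

```python
import numpy as np

def inertia_gravity(q, p):
    """D(q) and G(q) of the cart/triple-pendulum model (sketch of the matrices above).
    q = [x, th1, th2, th3]; p is a dict of masses, lengths, COM offsets and inertias."""
    _, t1, t2, t3 = q
    mc, m1, m2, m3 = p["mc"], p["m1"], p["m2"], p["m3"]
    l1, l2 = p["l1"], p["l2"]
    a1, a2, a3 = p["a1"], p["a2"], p["a3"]
    J1, J2, J3 = p["J1"], p["J2"], p["J3"]
    g = 9.81

    Mt = mc + m1 + m2 + m3
    M1 = m1 * a1 + (m2 + m3) * l1
    M2 = m2 * a2 + m3 * l2
    M3 = m3 * a3
    I1 = J1 + m1 * a1**2 + (m2 + m3) * l1**2
    I2 = J2 + m2 * a2**2 + m3 * l2**2
    I3 = J3 + m3 * a3**2

    c1, c2, c3 = np.cos([t1, t2, t3])
    s1, s2, s3 = np.sin([t1, t2, t3])
    c21, c31, c32 = np.cos(t2 - t1), np.cos(t3 - t1), np.cos(t3 - t2)

    D = np.array([[Mt,        -M1 * c1,       -M2 * c2,       -M3 * c3],
                  [-M1 * c1,   I1,             M2 * l1 * c21,  M3 * l1 * c31],
                  [-M2 * c2,   M2 * l1 * c21,  I2,             M3 * l2 * c32],
                  [-M3 * c3,   M3 * l1 * c31,  M3 * l2 * c32,  I3]])
    G = np.array([0.0, -M1 * g * s1, -M2 * g * s2, -M3 * g * s3])
    return D, G

# toy usage with illustrative parameters
p = dict(mc=1.0, m1=0.3, m2=0.3, m3=0.3, l1=0.4, l2=0.4,
         a1=0.2, a2=0.2, a3=0.2, J1=5e-3, J2=5e-3, J3=5e-3)
D, G = inertia_gravity(np.array([0.0, 0.05, -0.02, 0.01]), p)
print(np.allclose(D, D.T))   # True: the inertia matrix is symmetric
```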
Since the system is in a cascaded form, the tracking error in the internal system affects the tracking performance of the external system. It is observed in Table I that \(|e_{1}|>|e_{2}|>|e_{3}|\) in terms of the mean errors, for both absolute and relative errors. Part of the \(\mathbf{q}_{a}^{(i)}\) motion serves as the control input that drives \(\mathbf{q}_{a}^{(i-1)}\) to its BEM. \(q_{a}^{(3)}\) (equivalently \(\theta_{2}\)) would not achieve its best tracking performance until \(q_{a}^{(4)}\) (equivalently \(\theta_{3}\)) perfectly follows its BEM. In this way, the task of balancing \(\mathcal{S}^{k+1}\) is placed at the highest priority and all other unactuated subsystems are balanced one by one sequentially. ## VI Conclusion This paper proposed a cascaded nonlinear control framework for highly underactuated balance robots (i.e., robots with more unactuated than actuated coordinates). To achieve simultaneous trajectory tracking and balance control, the proposed framework converts a highly underactuated robot system into a series of cascaded, virtually actuated subsystems. The tracking control inputs are sequentially designed layer by layer until the last subsystem. The control input is then updated from the last subsystem back to the first one to incorporate the balance task. Under such a sequential design and updating, we showed that the closed-loop system dynamics are stable. We validated the control design with numerical simulation on a triple-inverted pendulum cart system. In the future, we plan to extend the framework with machine learning-based techniques to achieve guaranteed performance and to test it on physical robot systems. Fig. 3: Tracking control of a triple-inverted pendulum cart. (a) and (b) show the cart position and pendulum angles under the proposed control. (c) and (d) show the tracking errors. (e) and (f) show the cart position and pendulum rotation angles under the EIC-based control. ## Appendix I Proof for Lemma 1 Inserting \(\mathbf{u}_{i}^{\rm int}\) into \(\mathcal{S}_{a}^{i}\) leads to \(\mathbf{D}_{aa}^{(i)}\ddot{\mathbf{q}}_{a}^{(i)}+\mathbf{D}_{au}^{(i)}\ddot{\mathbf{q}}_{u}^{(i)}+\mathbf{H}_{a}^{(i)}=\mathbf{B}_{a}^{(i)}\left(\mathbf{B}_{a}^{(i)}\right)^{-1}\left(\mathbf{D}_{aa}^{(i)}\mathbf{v}_{i}^{\rm int}+\mathbf{D}_{au}^{(i)}\ddot{\mathbf{q}}_{u}^{(i)}+\mathbf{H}_{a}^{(i)}\right)\). After simplification, we obtain \(\ddot{\mathbf{q}}_{a}^{(i)}=\mathbf{v}_{i}^{\rm int}\). Next, we show that \(\mathcal{S}_{a}^{i+1}\) under the control input \(\mathbf{u}_{i}^{\rm int}\) displays the dynamics \(\ddot{\mathbf{q}}_{a}^{(i+1)}=\mathbf{v}_{i+1}^{\rm int}\).
Substituting \(\mathbf{u}_{i}^{\rm int}\) into \(\mathcal{S}_{a}^{i+1}\) we obtain \[\mathbf{D}_{aa}^{(i+1)}\tilde{\mathbf{q}}_{a}^{(i+1)}\!+\!\mathbf{D}_{au}^{(i+1)}\tilde{ \mathbf{q}}_{u}^{(i+1)}\!+\!\mathbf{H}_{a}^{(i+1)}=\mathbf{B}_{a}^{(i+1)}\mathbf{u}_{i}^{\rm int} \tag{21}\] The right hand side of (21) is simplified by inserting the explicit form of \(\mathbf{u}_{i}^{\rm int}\) as \[{\rm RHS}=\mathbf{B}_{a}^{(i+1)}\left(\mathbf{B}_{a}^{(i)}\right)^{-1}\left(\mathbf{D}_{ aa}^{(i)}\mathbf{v}_{i}^{\rm int}+\mathbf{D}_{au}^{(i)}\bar{\mathbf{q}}_{u}^{(i)}+\mathbf{H}_{a}^{ (i)}\right),\] where \[\mathbf{B}_{a}^{(i+1)}\left(\mathbf{B}_{a}^{(i)}\right)^{-1}\mathbf{D}_{aa}^{ (i)}\mathbf{v}_{i}^{\rm int}\] \[=\mathbf{B}_{a}^{(i+1)}\mathbf{u}_{i+1}^{\rm int}-\mathbf{B}_{a}^{(i+1)} \left(\mathbf{B}_{a}^{(i)}\right)^{-1}\left(\mathbf{D}_{au}^{(i)}\left[\begin{matrix} \mathbf{v}_{i+1}^{\rm int}\\ \tilde{\mathbf{q}}_{u}^{(i+1)}\end{matrix}\right]+\mathbf{H}_{a}^{(i)}\right)\] \[=\mathbf{D}_{aa}^{(i+1)}\mathbf{v}_{k+1}^{\rm int}+\mathbf{D}_{au}^{(i+1)} \tilde{\mathbf{q}}_{u}^{(i+1)}+\mathbf{H}_{a}^{(i+1)}\] \[\qquad\qquad-\mathbf{B}_{a}^{(i+1)}\left(\mathbf{B}_{a}^{(i)}\right)^{-1} \left(\mathbf{D}_{au}^{(i)}\left[\begin{matrix}\mathbf{v}_{i+1}^{\rm int}\\ \tilde{\mathbf{q}}_{u}^{(i+1)}\end{matrix}\right]+\mathbf{H}_{a}^{(i)}\right).\] Thus, the right-hand side of (21) becomes \[{\rm RHS}= \mathbf{D}_{aa}^{(i+1)}\mathbf{v}_{k+1}^{\rm int}+\mathbf{D}_{au}^{(i+1)} \tilde{\mathbf{q}}_{u}^{(i+1)}+\mathbf{H}_{a}^{(i+1)}\] \[-\mathbf{B}_{a}^{(i+1)}\!\left(\mathbf{B}_{a}^{(i)}\right)^{-1}\mathbf{D}_{au }^{(i)}\left[\begin{matrix}\tilde{\mathbf{q}}_{a}^{(i+1)}-\mathbf{v}_{i+1}^{\rm int}\\ \mathbf{0}\end{matrix}\right].\] Using above equation, (21) is rewritten into \[\left[\mathbf{D}_{aa}^{(i+1)}-\mathbf{B}_{a}^{(i+1)}\left(\mathbf{B}_{a}^{(i)}\right)^{-1 }\mathbf{D}_{au,i+1}^{(i)}\right]\left(\tilde{\mathbf{q}}_{a}^{(i+1)}-\mathbf{v}_{i+1}^{ \rm int}\right)=\mathbf{0}.\] If \(\mathbf{D}_{aa}^{(i+1)}-\mathbf{B}_{a}^{(i+1)}\left(\mathbf{B}_{a}^{(i)}\right)^{-1}\mathbf{D}_ {au,i+1}^{(i)}\neq\mathbf{0}\), the solution for above equation becomes \(\tilde{\mathbf{q}}_{a}^{(i+1)}=\mathbf{v}_{i+1}^{\rm int}\), which is exactly the designed control input. The proof is continued until \(\mathcal{S}^{k+1}\). Due to the page limit, it is not presented here.
2309.09925
Recycling Krylov Subspaces for Efficient Partitioned Solution of Aerostructural Adjoint Systems
Robust and efficient solvers for coupled-adjoint linear systems are crucial to successful aerostructural optimization. Monolithic and partitioned strategies can be applied. The monolithic approach is expected to offer better robustness and efficiency for strong fluid-structure interactions. However, it requires a high implementation cost and convergence may depend on appropriate scaling and initialization strategies. On the other hand, the modularity of the partitioned method enables a straightforward implementation while its convergence may require relaxation. In addition, a partitioned solver leads to a higher number of iterations to get the same level of convergence as the monolithic one. The objective of this paper is to accelerate the fluid-structure coupled-adjoint partitioned solver by considering techniques borrowed from approximate invariant subspace recycling strategies adapted to sequences of linear systems with varying right-hand sides. Indeed, in a partitioned framework, the structural source term attached to the fluid block of equations affects the right-hand side with the nice property of quickly converging to a constant value. We also consider deflation of approximate eigenvectors in conjunction with advanced inner-outer Krylov solvers for the fluid block equations. We demonstrate the benefit of these techniques by computing the coupled derivatives of an aeroelastic configuration of the ONERA-M6 fixed wing in transonic flow. For this exercise the fluid grid was coupled to a structural model specifically designed to exhibit a high flexibility. All computations are performed using RANS flow modeling and a fully linearized one-equation Spalart-Allmaras turbulence model. Numerical simulations show up to 39% reduction in matrix-vector products for GCRO-DR and up to 19% for the nested FGCRO-DR solver.
Christophe Blondeau, Mehdi Jadoui
2023-09-18T16:40:02Z
http://arxiv.org/abs/2309.09925v3
# Recycling Krylov Subspaces for Efficient Partitioned Solution of Aerostructural Adjoint Systems ###### Abstract Robust and efficient solvers for coupled-adjoint linear systems are crucial to successful aerostructural optimization. Monolithic and partitioned strategies can be applied. The monolithic approach is expected to offer better robustness and efficiency for strong fluid-structure interactions. However, it requires a high implementation cost and convergence may depend on appropriate scaling and initialization strategies. On the other hand, the modularity of the partitioned method enables a straightforward implementation while its convergence may require relaxation. In addition, a partitioned solver leads to a higher number of iterations to get the same level of convergence as the monolithic one. The objective of this paper is to accelerate the fluid-structure coupled-adjoint partitioned solver by considering techniques borrowed from approximate invariant subspace recycling strategies adapted to sequences of linear systems with varying right-hand sides. Indeed, in a partitioned framework, the structural source term attached to the fluid block of equations affects the right-hand side with the nice property of quickly converging to a constant value. We also consider deflation of approximate eigenvectors in conjunction with advanced inner-outer Krylov solvers for the fluid block equations. We demonstrate the benefit of these techniques by computing the coupled derivatives of an aeroelastic configuration of the ONERA-M6 fixed wing in transonic flow. For this exercise the fluid grid was coupled to a structural model specifically designed to exhibit a high flexibility. All computations are performed using RANS flow modeling and a fully linearized one-equation Spalart-Allmaras turbulence model. Numerical simulations show up to \(39\%\) reduction in matrix-vector products for GCRO-DR and up to \(19\%\) for the nested FGCRO-DR solver. keywords: partitioned solver, subspace recycling, coupled adjoint, GCRO-DR + Footnote †: journal: ## 1 Introduction We are interested in robust and efficient solvers for the solution of the coupled-adjoint linear system which is crucial to successful aerostructural optimization. Fine control of an aerodynamic shape or of a structural layout leads to a high-dimensional parameter space. For high-fidelity simulations, gradient-based optimizers in conjunction with the adjoint approach are the methods of choice. However, the coupled adjoint linear system is inherently ill-conditioned as it embeds matrix blocks of different scales and structures. In addition, the fluid block coming from the exact linearization of the RANS equations associated to a turbulence model is often very stiff. Besides, a strong level of fluid-structure interaction is known to be detrimental to the robustness and efficiency of existing solution techniques. For such linear systems, monolithic and partitioned (or segregated) strategies can be applied. The former approach is expected to offer better robustness and efficiency for strong fluid-structure interactions [1; 2]. However, it requires a high implementation cost and convergence may depend on appropriate scaling and initialization strategies. On the other hand, the modularity of the partitioned method enables a straightforward implementation while its convergence may require relaxation. In addition, a partitioned solver leads to a higher number of iterations to get the same level of convergence as the monolithic one. 
A review of partitioned simulations of fluid-structure interactions involving black-box solvers is proposed in [3]. The partitioned approach simply consists in solving, in an alternating way, the aerodynamic and the structural sub-problems by applying the Linear Block Gauss-Seidel algorithm (LBGS). It accounts for the interdisciplinary coupling by adding a source term to the right-hand side of each set of disciplinary adjoint equations. The modularity of this approach makes it rather interesting since it takes advantage of the specific routines designed for each sub-problem and does not require a high implementation cost. Nevertheless, this approach becomes rapidly inefficient and could even diverge for strong fluid-structure coupling even though the addition of some level of relaxation helps to mitigate this issue. On the other hand, the monolithic approach consists in solving the fluid and structural equations simultaneously making it more robust in the sense that it is less sensitive to the strength of the fluid-structure interaction. The coupled adjoint system is generally solved by using Krylov subspace methods. The challenging aspect of such an approach is then to develop advanced preconditioning strategies combined with numerical ingredients so that Krylov methods reach the best performances in terms of robustness and efficiency. The objective of this work is to improve the efficiency of the existing partitioned solver [4; 5] by considering techniques borrowed from Krylov subspace recycling strategies adapted to sequences of linear systems with varying right-hand sides [6]. We will demonstrate the benefit of these advanced techniques by computing the coupled derivatives for the ONERA-M6 fixed wing in transonic flow [7]. For this exercise the fluid grid is coupled to a structural model specifically designed to exhibit a high flexibility. All computations are performed using RANS flow modeling and a fully linearized one-equation Spalart-Allmaras turbulence model. As an example, Figure 1 illustrates the performance of the existing embedded coupled-adjoint segregated solver for structured grids in a former version of the elsA CFD code 3, applied to the M6 wing test case. Figure 1: Coupled-adjoint relative residual norm convergence history of FGMRES-DR(70,10,35). Impact of CFL on the performance of the LU-SGS preconditioner. These convergence curves will serve as a reference for comparison with the improved solver efficiency allowed by the developments performed in this work. A flexible GMRES Krylov solver with deflated restarting is used with an outer Krylov basis of 70 vectors, an inner Krylov basis of size 10 and a deflation subspace of 35 vectors (between fluid cycles). At the beginning of this work, the only available preconditioner for structured grids in elsA was a combination of a Restrictive Additive Schwarz (RAS) domain decomposition method coupled with a Lower-Upper Symmetric Gauss Seidel (LU-SGS) relaxation. This preconditioner is controlled by two parameters: the number of relaxations and the CFL coefficient. The number of relaxations is fixed at 6 and the CFL is varied. In [8] we point out that a proper tuning of the CFL coefficient is crucial for this type of preconditioners and how new block ILU type preconditioners accelerate the rate of convergence. Also, after each fluid-structure coupling, a cold restart is performed and the spectral information from the previous fluid-structure cycle is discarded. 
As a consequence, a plateau is observed after each restart which dramatically hampers convergence in the first cycles. We will demonstrate that well-chosen subspace recycling strategies will eliminate these convergence stagnations. In a high-fidelity aerostructural optimization context, Zhang and Zingg [9] implemented a robust monolithic solution method for both aerostructural analysis and coupled adjoint problem. A three-field formulation was adopted involving the mesh, the flow and the structural states. The performance of the monolithic method as well as the partitioned one was investigated through a comparative study by varying the level of fluid-structure coupling. For the coupled adjoint problem solution, a GCROT [10] Krylov solver has been used in conjunction with a block Gauss-Seidel preconditioner. The monolithic adjoint solution has been 60% more efficient than the partitioned one for strong coupling. For weak fluid-structure coupling, the monolithic solution still outperformed the partitioned one with a better efficiency of 40%. In terms of computational time, the monolithic method showed 50 % to over 60 % faster than the partitioned method. A similar comparative study of both monolithic and partitioned approaches was performed by Kenway et al. [11] except that a Block Jacobi preconditioner was applied to the coupled adjoint system. The aerodynamic and structural block preconditioners were solved by a preconditioned Krylov method (restarted Generalized Minimal RESidual - GMRES [12]) and a direct factorization method respectively. A Flexible Krylov method (e.g FGMRES [13]) has been used for the coupled adjoint system solution. The Common Research Model (CRM) wing-body-tail configuration was sized by considering two critical load cases: \(1g\) cruise condition with moderate elastic deformation and a \(2.5g\) pull-up with significantly more deflection. For the same memory footprint, the best monolithic solution seems to outperform the best partitioned one by reducing the time by 19% for the \(1g\) load and by 29% for the \(2.5g\) case. These numerical experiments demonstrate the great benefit of using monolithic approach in the strong coupling case but at the price of a robust preconditioner for the Krylov solver. We note however that both studies only considered inviscid flow modeling. These conclusions about the monolithic solver efficiency might be mitigated by the added stiffness of adjoint system matrices produced by a RANS fluid model associated to a linearized turbulence model. Although the satisfactory performance of monolithic solvers, advanced strategies that could accelerate the partitioned algorithm using black-box solvers have received less attention. We recall that a partitioned algorithm consists in approximately solving the aerodynamic adjoint block at each fluid-structure iteration resulting in a sequence of adjoint linear systems with varying right-hand sides. As already mentioned, the structural source term that affects the right-hand side of the fluid block has the nice property of rapidly converging to a constant value. The corollary of this property is that after several fluid-structure couplings, the subsequent fluid systems should greatly benefit from recycling spectral information from the previous fluid-structure cycles. At the start of this work, the current partitioned adjoint solver did not take advantage of recycling and at each update of the structural source term the Krylov solver did a cold start from the previous solution. 
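For concreteness, the partitioned (LBGS) iteration discussed above can be sketched on a generic \(2\times 2\) block system, where each dense solve stands in for the corresponding disciplinary adjoint solver and the off-diagonal blocks supply the coupling source terms added to the right-hand sides. This is a toy illustration, not the actual solver implementation; convergence of the plain iteration requires sufficiently weak coupling or added relaxation.

```python
import numpy as np

def lbgs_coupled_adjoint(A_ff, A_fs, A_sf, A_ss, b_f, b_s, n_cycles=20, tol=1e-10):
    """Linear Block Gauss-Seidel on the 2x2 coupled system
        [A_ff  A_fs] [x_f]   [b_f]
        [A_sf  A_ss] [x_s] = [b_s]
    Each block solve stands in for the fluid / structural adjoint solver."""
    x_f = np.zeros(b_f.shape)
    x_s = np.zeros(b_s.shape)
    for k in range(n_cycles):
        x_f = np.linalg.solve(A_ff, b_f - A_fs @ x_s)   # fluid solve with structural source term
        x_s = np.linalg.solve(A_ss, b_s - A_sf @ x_f)   # structural solve with aerodynamic source term
        r = np.concatenate([b_f - A_ff @ x_f - A_fs @ x_s,
                            b_s - A_sf @ x_f - A_ss @ x_s])
        if np.linalg.norm(r) < tol:
            break
    return x_f, x_s, k + 1

# toy usage: weakly coupled, well-conditioned blocks
rng = np.random.default_rng(2)
A_ff = np.diag(rng.uniform(2, 4, 5)); A_ss = np.diag(rng.uniform(2, 4, 3))
A_fs = 0.1 * rng.standard_normal((5, 3)); A_sf = 0.1 * rng.standard_normal((3, 5))
b_f, b_s = rng.standard_normal(5), rng.standard_normal(3)
x_f, x_s, cycles = lbgs_coupled_adjoint(A_ff, A_fs, A_sf, A_ss, b_f, b_s)
print(cycles)   # converges in a handful of coupling cycles for this weak coupling
```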
The principle of deflation is to remove the influence of a system's subspace on the iterative process. This is usually beneficial when directions of certain subspaces hamper convergence. Deflation of an eigenspace can be performed in two ways: the linear system (matrix and right-hand side) is left-multiplied by a projector \(\mathbf{P}\), i.e. deflation by projection [14, 15, 16], or some eigenvectors are added to the Krylov subspace, i.e. deflation by augmentation [17]. A survey of deflation and augmentations techniques can be found in [18]. A specific type of deflation preconditioning aims at solving a rank-deficient projected system, using a Krylov solver, in a certain subspace outside of the problematic subspace. The solution is then complemented with the solution in the latter subspace. In [16] the author makes the post-correction superfluous by using a projection as right-preconditioner instead. For deflation by augmentation, adding eigenvectors to the Krylov subspace can effectively deflate corresponding eigenvalues from the spectrum because when these directions are included in the solution approximation, the convergence of the Krylov solver continues according to the modified spectrum. The deflation by augmentation led to the well-known FGMRES-DR solver [19] and its extension to inner-outer Krylov solver [20, 8]. Unfortunately, the deflated restarted GMRES framework based on subspace augmentation is not adapted for solving sequences of linear systems [6]. Fortunately, some authors have proposed new strategies in order to reuse information accumulated in previous fluid-structure cycles and use it to accelerate the solution of the next linear system. Krylov subspaces recycling methods seem to be the suitable choice. Historically, De Sturler suggested the Generalized Conjugate Residuals with inner Orthogonalization (GCRO) method [21], an improved version of the recursive GMRES (GMRESR) solver [22] by maintaining an orthogonality condition between the outer and the inner spaces generated by GMRESR. This way, it provides the optimal correction to the solution in a global search space. Later, Parks et al. formulated the GCRO-DR algorithm [6] that combines GCRO and deflation techniques by augmentation introduced by Morgan [17]. They demonstrated better performances of GCRO-DR compared to the GMRES-DR in a long sequence of linear systems from a fracture mechanics problem. Carvhalo et al. extended GCRO-DR to the flexible case (FGCRO-DR) [23] and they conducted in-depth analysis of both flexible methods. In particular, they showed that both methods can be algebraically equivalent if a certain colinearity condition is satisfied at each cycle. In 2013, Niu et al. introduced Loose GCRO-DR (LGCRO-DR) [24] for improving the convergence of GCRO-DR by recycling both spectral information and approximate error information. The error is defined as the distance between the current iterate and the exact solution of the system. It is not known by definition but a fair approximation to it can be computed. This idea was initially proposed by Baker et al. [25] and mimics the idea behind GMRESR of including approximations to the error in the current approximation space. This error information is interesting since it represents in some sense the previous Krylov space generated in the previous cycle and subsequently discarded. In addition to that, LGCRO-DR is straightforward and economic to implement. 
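The projection framework that GCRO-type methods build on can be illustrated compactly: given a recycled subspace spanned by \(U\), the outer loop extracts the part of the solution carried by \(AU\) and hands the projected operator \((I-QQ^{T})A\) to the inner solver. In the sketch below, the inner deflated GMRES of GCRO-DR is replaced by a dense least-squares solve and the recycled \(U\) is random rather than built from harmonic Ritz vectors carried across the sequence of systems; both are simplifications for illustration only.

```python
import numpy as np

def gcro_outer_correction(A, b, U, inner_solve):
    """GCRO-style outer projection (sketch). U spans the recycled subspace; the
    inner solver works on the projected operator (I - Q Q^T) A, in the same role
    as the deflated inner GMRES in GCRO-DR (here replaced by `inner_solve`)."""
    Q, R = np.linalg.qr(A @ U)            # C = A U = Q R, columns of Q orthonormal
    U = U @ np.linalg.inv(R)              # now A U = Q
    x = U @ (Q.T @ b)                     # solution component carried by the recycled space
    r = b - Q @ (Q.T @ b)                 # deflated residual (I - Q Q^T) b
    z = inner_solve(lambda v: A @ v - Q @ (Q.T @ (A @ v)), r)
    x = x + z + U @ (Q.T @ (r - A @ z))   # outer re-projection of the inner correction
    return x

# toy usage: the 'inner solver' is a dense least-squares solve on the projected operator
rng = np.random.default_rng(3)
n, k = 30, 4
A = np.eye(n) + 0.3 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
U = rng.standard_normal((n, k))

def lstsq_inner(op, rhs):
    M = np.column_stack([op(e) for e in np.eye(n)])   # materialize the projected operator
    return np.linalg.lstsq(M, rhs, rcond=None)[0]

x = gcro_outer_correction(A, b, U, lstsq_inner)
print(np.linalg.norm(b - A @ x))          # ~ machine precision
```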
In this work, we investigate advanced Krylov subspace methods using subspace recycling strategies for accelerating the partitioned solver applied to the linear coupled-adjoint system. More specifically, we compare GMRES-DR and FGMRES-DR to GCRO-DR and FGCRO-DR with and without subspace recycling. This work benefits from recent achievements in improving the efficiency of the fluid adjoint solution by applying nested Krylov subspace methods [8]. The numerical experiments are performed on an aeroelastic configuration of the ONERA M6 fixed wing in transonic viscous flow. This paper is organized as follows. In section 2 we briefly recall the theoretical background of the aeroelastic and coupled-adjoint equations. The partitioned algorithm is also outlined. The aeroelastic numerical test case is presented in section 3. To support later comparison with GCRO-DR and FGCRO-DR, we then review the fundamentals of FGMRES-DR in section 4, with a focus on numerical implementation and application to the solution of the fluid and coupled adjoint systems. Section 5 is devoted to the description of the GCRO algorithm with adaptations related to variable preconditioning and subspace recycling. We also take the opportunity to give some insights related to an efficient implementation of the deflation and recycling strategy. The numerical experiments are then repeated with the GCRO-DR solver and show very promising reductions in the number of matrix-vector products compared to the standard implementation. Finally, the flexible case is presented in section 6.

## 2 Aerostructural adjoint system

### Aeroelastic equilibrium

Let us denote the state variables of the coupled system \(\mathbf{W}\) and \(\mathbf{U}\), representing the fluid conservative variables and the structural displacements respectively. At the aeroelastic equilibrium, the state variables and the meshes satisfy the discretized equations of fluid and structural mechanics simultaneously: \[\begin{cases}\mathbf{R}_{a}(\mathbf{X}_{a},\mathbf{W},\mathbf{U})=\mathbf{0} \\ \mathbf{R}_{s}(\mathbf{X}_{s},\mathbf{W},\mathbf{U})=\mathbf{0}\end{cases} \tag{1}\] where \(\mathbf{R}_{a}\) is the discrete aerodynamic residual and \(\mathbf{R}_{s}\) the discrete structural residual. These two blocks of equations are coupled through the aerodynamic forces \(\mathbf{Q}_{a}\) loading the skin of the structure and the structural displacements \(\mathbf{U}\) deforming the fluid mesh. The structural mesh is denoted \(\mathbf{X}_{s}\). In the following we introduce two aerodynamic grids, \(\mathbf{X}_{a}\) and \(\mathbf{X}_{a0}\). \(\mathbf{X}_{a}\) denotes the deformed aerodynamic grid at the aeroelastic equilibrium, at the outcome of the aeroelastic analysis. \(\mathbf{X}_{a0}\) is called the reference mesh, which supports the aerodynamic shape parametrization. Typically, for an aircraft design study the reference mesh is chosen as the jig shape or the flight shape in reference nominal flight conditions. The load, displacement and mesh deformation operators then merely depend on \(\mathbf{X}_{a0}\) for an aeroelastic or coupled-adjoint analysis. The structural loads \(\mathbf{Q}_{s}\) are obtained with a suitable load transfer technique applied to \(\mathbf{Q}_{a}\) such that \[\mathbf{Q}_{s}(\mathbf{Q}_{a}(\mathbf{W},\mathbf{X}_{a}),\mathbf{X}_{a0}, \mathbf{X}_{s})=\mathbf{T}_{surf}^{Q}(\mathbf{X}_{a0},\mathbf{X}_{s})\mathbf{Q }_{a}(\mathbf{W},\mathbf{X}_{a}) \tag{2}\] where \(\mathbf{T}_{surf}^{Q}\) represents a linear load transfer operator.
The subscript \(surf\) stipulates that the associated linear operators or data relate to the fluid-structure interface. The structural displacements alter the fluid grid locations through the relation \[\mathbf{X}_{a}=\mathbf{X}_{a0}+\delta\mathbf{X}_{a}(\delta\mathbf{X}_{a,surf}, \mathbf{X}_{a0})=\mathbf{X}_{a0}+\mathbf{T}_{vol}(\mathbf{X}_{a0})\delta \mathbf{X}_{a,surf} \tag{3}\] with \(\mathbf{T}_{vol}(\mathbf{X}_{a0})\) the volume operator performing the deformation of the fluid domain. The vector \(\delta\mathbf{X}_{a,surf}\) corresponds to the displacements of the fluid nodes at the fluid-structure interface: \[\delta\mathbf{X}_{a,surf}=\delta\mathbf{X}_{a,surf}(\mathbf{X}_{a0},\mathbf{X }_{s},\mathbf{U})=\mathbf{T}_{surf}^{U}(\mathbf{X}_{a0},\mathbf{X}_{s}) \mathbf{U} \tag{4}\] where \(\mathbf{T}_{surf}^{U}(\mathbf{X}_{a0},\mathbf{X}_{s})\) represents a linear displacement transfer operator.

### Partitioned strategy for the coupled adjoint system

Let us consider a scalar aeroelastic objective function \(J(\mathbf{W},\mathbf{U},\mathbf{X}_{a},\mathbf{X}_{s})\) and a design parameter \(p\). One way to obtain the coupled adjoint equations is to formulate an augmented objective function by adding the total variation of the residuals \(\mathbf{R}_{s}\) and \(\mathbf{R}_{a}\) to the total derivative \(dJ/dp\). More specifically, we define \(d\tilde{J}/dp\) as \[\frac{d\tilde{J}}{dp}=\frac{dJ}{dp}+\lambda_{a}^{T}\frac{d\mathbf{R}_{a}}{dp} +\lambda_{s}^{T}\frac{d\mathbf{R}_{s}}{dp}, \tag{5}\] where \[\frac{dJ}{dp}=\frac{\partial J}{\partial\mathbf{W}}\frac{d\mathbf{W}}{dp}+ \frac{\partial J}{\partial\mathbf{X}_{a}}\frac{d\mathbf{X}_{a}}{dp}+\frac{ \partial J}{\partial\mathbf{X}_{s}}\frac{d\mathbf{X}_{s}}{dp}+\frac{\partial J }{\partial\mathbf{U}}\frac{\partial\mathbf{U}}{\partial p}. \tag{6}\] In Eq. (5) the total variations of the residuals are exactly zero since they represent constraints related to the satisfaction of the equilibrium equations at the outcome of the aeroelastic analysis. For simplicity, we restrict ourselves here to the specific case of a shape design parameter affecting neither the structural geometry nor the structural stiffness. In addition, the explicit dependency of the objective function on the structural states is dropped, i.e., we consider only derivatives of aerodynamic coefficients. The full derivation for the general case can be found in [26]. It is worth mentioning that these assumptions do not lead to any loss of generality of the work presented in this paper, since we focus on solution techniques for the adjoint system. As \(\mathbf{X}_{s}\) does not depend on the design parameter \(p\), we have \(d\mathbf{X}_{s}/dp=\mathbf{0}\). Under the same assumption we also have \(d\mathbf{K}/dp=\mathbf{0}\).
After some algebraic manipulation we end up with the following expression for \(d\tilde{J}/dp\), in which the total derivatives \(d\mathbf{W}/dp\) and \(d\mathbf{U}/dp\) have been factored out: \[\begin{split}\frac{d\tilde{J}}{dp}&=\left(\frac{\partial J}{\partial\mathbf{W}}+\lambda_{a}^{T}\frac{\partial\mathbf{R}_{a}}{\partial\mathbf{W}}-\lambda_{s}^{T}\mathbf{C}\right)\frac{d\mathbf{W}}{dp}+\left(\frac{\partial J}{\partial\mathbf{X}_{a}}\mathbf{A}+\lambda_{a}^{T}\frac{\partial\mathbf{R}_{a}}{\partial\mathbf{X}_{a}}\mathbf{A}+\lambda_{s}^{T}(\mathbf{K}-\mathbf{D})\right)\frac{d\mathbf{U}}{dp}\\ &+\left(\frac{\partial J}{\partial\mathbf{X}_{a}}+\lambda_{a}^{T}\frac{\partial\mathbf{R}_{a}}{\partial\mathbf{X}_{a}}\right)\mathbf{B}\frac{d\mathbf{X}_{a0}}{dp}-\lambda_{s}^{T}\mathbf{E}\frac{d\mathbf{X}_{a0}}{dp}\end{split} \tag{7}\] The constant matrices \(\mathbf{A}\) to \(\mathbf{E}\) are defined analytically by the following formulas (see [26, 4]): \[\mathbf{A}=\mathbf{T}_{vol}\mathbf{T}_{surf}^{U} \tag{8}\] \[\mathbf{B}=\frac{\partial\mathbf{X}_{a}}{\partial\mathbf{X}_{a0}}=\mathbf{I}+\frac{\partial\mathbf{A}}{\partial\mathbf{X}_{a0}}\mathbf{U} \tag{9}\] \[\mathbf{C}=\mathbf{T}_{surf}^{Q}\frac{\partial\mathbf{Q}_{a}}{\partial\mathbf{W}} \tag{10}\] \[\mathbf{D}=\mathbf{T}_{surf}^{Q}\frac{\partial\mathbf{Q}_{a}}{\partial\mathbf{X}_{a}}\mathbf{T}_{surf}^{U} \tag{11}\] \[\mathbf{E}=\mathbf{T}_{surf}^{Q}\frac{\partial\mathbf{Q}_{a}}{\partial\mathbf{X}_{a}}\mathbf{B}+\frac{\partial\mathbf{Q}_{s}}{\partial\mathbf{X}_{a0}} \tag{12}\] The coupled adjoint linear system is obtained by canceling the factors related to \(d\mathbf{W}/dp\) and \(d\mathbf{U}/dp\) in Eq. (7), which gives \[\begin{bmatrix}\left[\frac{\partial\mathbf{R}_{a}}{\partial\mathbf{W}}\right]^{T}&-\mathbf{C}^{T}\\ \mathbf{A}^{T}\left[\frac{\partial\mathbf{R}_{a}}{\partial\mathbf{X}_{a}}\right]^{T}&\mathbf{K}^{T}-\mathbf{D}^{T}\end{bmatrix}\begin{bmatrix}\lambda_{a}\\ \lambda_{s}\end{bmatrix}=\begin{bmatrix}-\left[\frac{\partial J}{\partial\mathbf{W}}\right]^{T}\\ -\mathbf{A}^{T}\left[\frac{\partial J}{\partial\mathbf{X}_{a}}\right]^{T}\end{bmatrix} \tag{13}\] The process for solving the adjoint system follows an iterative block scheme. Algorithm 1 details the Linear Block Gauss-Seidel (LBGS) scheme applied for the solution of system (13). In this derivation, we use the structural flexibility \(\mathbf{S}\), which is a small reduced matrix relating the set of structural forces to the set of structural displacements pertaining to the fluid-structure coupling. The relaxation factor \(\theta_{s}\) has been introduced on the adjoint vector \(\lambda_{s}\). Assuming that the coupled system is solved to machine accuracy, the total derivative reconstruction is given by \[\frac{dJ}{dp}=\left(\frac{\partial J}{\partial\mathbf{X}_{a}}+\lambda_{a}^{T}\frac{\partial\mathbf{R}_{a}}{\partial\mathbf{X}_{a}}\right)\mathbf{B}\frac{d\mathbf{X}_{a0}}{dp}-\lambda_{s}^{T}\mathbf{E}\frac{d\mathbf{X}_{a0}}{dp} \tag{14}\] In the expression above, the computation of the product of the geometrical sensitivities with the matrix \(\mathbf{B}\) is not trivial. If one already has at hand a linearized version of the operator \(\mathbf{A}\), i.e. of \(\mathbf{T}_{surf}^{U}\) and \(\mathbf{T}_{vol}\), it can be applied to \(d\mathbf{X}_{a0}/dp\) as many times as the number of design variables. This is the most straightforward approach, but the benefit of the adjoint formulation is then mitigated by the cost of the gradient assembly.
The other way is to transpose the first term in the right-hand side of Eq. (14) and compute products like \([\partial\mathbf{T}_{surf}^{U}/\partial\mathbf{X}_{a0}]^{T}\mathbf{v}\) and \([\partial\mathbf{T}_{vol}/\partial\mathbf{X}_{a0}]^{T}\mathbf{v}\), where \(\mathbf{v}\) has the size of the fluid grid. We call this mode the geometrical adjoint of the mesh deformation and displacement transfer operators. These two modes of gradient assembly have been implemented in the coupled-adjoint module of the elsA software. The iterations stop after a maximum number \(n_{cpl}\) of fluid-structure couplings or when the relative fluid and structural residuals fall below a prescribed tolerance: \(r_{A}\leq\epsilon_{A}\) and \(r_{S}\leq\epsilon_{S}\). Typically we choose \(\epsilon_{A}=\epsilon_{S}=10^{-6}\).

```
Data: \(\mathbf{U},\mathbf{W},\mathbf{X}_{a},\mathbf{X}_{s},\mathbf{X}_{a0},\lambda_{a}^{0},\lambda_{s}^{0},\theta_{s},\epsilon_{A},\epsilon_{S},n_{cpl}\)
1   \(\mathbf{RHS}_{stru}\leftarrow\mathbf{0}\)
2   if \(\lambda_{s}^{0}\neq\mathbf{0}\) then \(\mathbf{RHS}_{stru}\leftarrow\left(\mathbf{T}_{surf}^{Q}\frac{\partial\mathbf{Q}_{a}}{\partial\mathbf{W}}\right)^{T}\mathbf{S}^{T}\lambda_{s}^{0}\)   \(\triangleright\) Restart from a previous structural adjoint solution
3   if \(\lambda_{a}^{0}=\mathbf{0}\) then solve \(\left[\frac{\partial\mathbf{R}_{a}}{\partial\mathbf{W}}\right]^{T}\lambda_{a}^{0}=-\left[\frac{\partial J}{\partial\mathbf{W}}\right]^{T}+\mathbf{RHS}_{stru}\)   \(\triangleright\) Approximate solution of the fluid adjoint problem
4   for \(k\gets 1,n_{cpl}\) do
5     \(\mathbf{A}_{Xs,surf}\leftarrow\left(\frac{\partial\mathbf{Q}_{a}}{\partial\mathbf{X}_{a,surf}}\right)^{T}\left(\mathbf{T}_{surf}^{Q}\right)^{T}\mathbf{S}^{T}\lambda_{s}^{k-1}\)   \(\triangleright\) Structural geometric adjoint
6     \(\mathbf{A}_{Xa}\leftarrow-\left[(\lambda_{a}^{k-1})^{T}\frac{\partial\mathbf{R}_{a}}{\partial\mathbf{X}_{a}}+\frac{\partial J}{\partial\mathbf{X}_{a}}\right]^{T}\)   \(\triangleright\) Aerodynamic geometric adjoint
7     \(\mathbf{A}_{Xa,surf}\leftarrow(\mathbf{T}_{vol})^{T}\mathbf{A}_{Xa}\)   \(\triangleright\) Mesh deformation adjoint
8     \(\lambda_{s}^{k}\leftarrow(\mathbf{T}_{surf}^{U})^{T}(\mathbf{A}_{Xs,surf}+\mathbf{A}_{Xa,surf})\)   \(\triangleright\) Structural adjoint vector
9     \(\lambda_{s}^{k}\leftarrow(1-\theta_{s})\lambda_{s}^{k-1}+\theta_{s}\lambda_{s}^{k}\)   \(\triangleright\) Relaxation (optional)
10    \(\mathbf{RHS}_{stru}\leftarrow\left(\mathbf{T}_{surf}^{Q}\frac{\partial\mathbf{Q}_{a}}{\partial\mathbf{W}}\right)^{T}\mathbf{S}^{T}\lambda_{s}^{k}\)   \(\triangleright\) Update structural rhs
11    solve \(\left[\frac{\partial\mathbf{R}_{a}}{\partial\mathbf{W}}\right]^{T}\lambda_{a}^{k}=-\left[\frac{\partial J}{\partial\mathbf{W}}\right]^{T}+\mathbf{RHS}_{stru}\)   \(\triangleright\) Approximate solution of the fluid adjoint problem
12    if (\(r_{A}\leq\epsilon_{A}\) and \(r_{S}\leq\epsilon_{S}\)) then   \(\triangleright\) Check norm of fluid and structure relative residuals
13      exit the coupling loop
14  end for
\(\frac{dJ}{dp}\leftarrow\left(\frac{\partial J}{\partial\mathbf{X}_{a}}+\lambda_{a}^{T}\frac{\partial\mathbf{R}_{a}}{\partial\mathbf{X}_{a}}\right)\mathbf{B}\frac{d\mathbf{X}_{a0}}{dp}-\lambda_{s}^{T}\mathbf{E}\frac{d\mathbf{X}_{a0}}{dp}\)   \(\triangleright\) Objective function gradient assembly
```
**Algorithm 1** Partitioned LBGS strategy for coupled-adjoint solution

## 3 ONERA-M6 wing aeroelastic analysis

In this study the numerical experiments have been performed with the well-known ONERA-M6 fixed wing configuration, which has been extensively used for CFD solver validation in transonic flow conditions.
In this work we use the RANS solver provided by elsA for the steady rigid and aeroelastic analyses [27; 28]. The elsA adjoint and coupled-adjoint solvers have also been used for the computation of rigid and flexible derivatives. The latest improvements to the Krylov solvers for the solution of the adjoint linear system are described in [8]. A multi-block structured mesh featuring a C-H topology is used (Fig. 2). It consists of 3.8 million cells divided into 42 blocks. The flight conditions are a free-stream Mach number of 0.84 and an angle of attack of 3.06 degrees. The convective fluxes are discretized by an upwind Roe scheme associated with a Monotonic Upstream-centered Scheme for Conservation Laws (MUSCL) reconstruction and a Van Albada flux limiter. The one-equation Spalart-Allmaras turbulence model is selected. The surface contours in the bottom plot of Fig. 3 show typical results for the ONERA-M6 wing. The pressure coefficient contours identify a lambda-shock along the mid-chord of the wing. For the aeroelastic analysis a simple but realistic finite element model has been designed (Fig. 2). The stiffness of this model can be easily tuned to get a stronger or weaker fluid-structure interaction. The pressure coefficient contours at the aeroelastic equilibrium are plotted in the upper part of Fig. 3 and can be compared to the rigid contours. The maximum vertical displacement is 0.14 meters, corresponding to 11.7% of the wing span. To get a better insight into the effect of flexibility on the pressure distribution, we report in Fig. 4 the Cp distributions for two sections at \(y=0.60\) m and \(y=1.12\) m. The vertical displacement distributions associated with the front and rear spars as well as the twist increment distribution are plotted in Fig. 5. The rigid analysis results in a lift coefficient \(C_{L}=0.27\) whereas the aeroelastic analysis, at the same angle of attack, results in a lower lift coefficient \(C_{L}=0.23\).

Figure 2: M6 wing aeroelastic model: 42 block-structured RANS CFD mesh and FEM internal layout.

Figure 3: Pressure coefficient contour plots for the rigid and aeroelastic steady flows.

Figure 4: Comparison of rigid and aeroelastic pressure coefficient section plots at y=0.60m and y=1.12m.

Figure 5: Vertical displacement and twist increment distribution at aeroelastic equilibrium.

## 4 Minimal residual Krylov subspace methods combined with spectral deflation

In this section we focus on a particular minimal residual norm Krylov subspace method for the solution of linear systems with a non-symmetric real coefficient matrix of type \[\mathbf{A}\mathbf{x}=\mathbf{b},\qquad\mathbf{A}\in\mathbb{R}^{N\times N};\quad \mathbf{b},\ \mathbf{x}\ \in\mathbb{R}^{N}, \tag{15}\] with the initial guess \(\mathbf{x}_{0}\) and the associated residual \(\mathbf{r}_{0}=\mathbf{b}-\mathbf{A}\mathbf{x}_{0}\). The GMRES method [12] computes the correction \(\mathbf{z}_{i}\) in the \(i\)th Krylov subspace \(\mathcal{K}_{i}(\mathbf{A},\mathbf{r}_{0})\equiv\mathrm{span}\{\mathbf{r}_{0},\mathbf{A}\mathbf{r}_{0},\mathbf{A}^{2}\mathbf{r}_{0},\cdots,\mathbf{A}^{i-1}\mathbf{r}_{0}\}\) that minimizes the norm of the residual \(\mathbf{r}_{i}=\mathbf{b}-\mathbf{A}(\mathbf{x}_{0}+\mathbf{z}_{i})=\mathbf{r}_{0}-\mathbf{A}\mathbf{z}_{i}\).
The relation between the minimal residual correction \(\mathbf{z}_{i}\) and the orthogonality of the new residual \(\mathbf{r}_{i}\) to the _shifted Krylov space_ \(\mathcal{AK}_{i}(\mathbf{A},\mathbf{r}_{0})\equiv\mathrm{span}\{\mathbf{A}\mathbf{r}_{0},\mathbf{A}^{2}\mathbf{r}_{0},\cdots,\mathbf{A}^{i}\mathbf{r}_{0}\}\) is given by the following theorem [29]:

**Theorem 4.1**.: _The vector \(\mathbf{z}_{i}\in\mathcal{K}_{i}(\mathbf{A},\mathbf{r}_{0})\) satisfies \(\mathbf{z}_{i}=\underset{\mathbf{z}\in\mathcal{K}_{i}(\mathbf{A},\mathbf{r}_{0})}{\mathrm{argmin}}\|\mathbf{r}_{0}-\mathbf{A}\mathbf{z}\|_{2}\Leftrightarrow\mathbf{r}_{i}\perp\mathcal{AK}_{i}(\mathbf{A},\mathbf{r}_{0})\)_

This is known as the optimality property of the residual. In this work, a right-preconditioned system is considered, so that (15) becomes \[\mathbf{A}\mathcal{M}(\mathbf{t})=\mathbf{b}, \tag{16}\] \[\mathbf{x}=\mathcal{M}(\mathbf{t}) \tag{17}\] with \(\mathbf{t}\in\mathbb{R}^{N}\) and \(\mathcal{M}:\mathbb{R}^{N}\rightarrow\mathbb{R}^{N}\) the preconditioning operator, which may be a nonlinear function.

### Flexible GMRES algorithm with right preconditioning

Saad proposed a minimal residual norm subspace method based on the standard GMRES approach [12] that allows a variable nonlinear preconditioning function \(\mathcal{M}_{j}:\mathbb{R}^{N}\rightarrow\mathbb{R}^{N}\) at each iteration \(j\) [13]. Starting from an initial guess \(\mathbf{x}_{0}\), the flexible Arnoldi relation is written as \[\mathbf{A}\mathbf{Z}_{m}=\mathbf{V}_{m+1}\bar{\mathbf{H}}_{m}, \tag{18}\] where the matrices \(\mathbf{V}_{m+1}\in\mathbb{R}^{N\times(m+1)}\), \(\mathbf{Z}_{m}\in\mathbb{R}^{N\times m}\) and \(\bar{\mathbf{H}}_{m}\in\mathbb{R}^{(m+1)\times m}\) stand for the orthonormal basis of the Krylov space, the solution space and the upper Hessenberg matrix, respectively. The approximate solution is written as \(\mathbf{x}_{m}=\mathbf{x}_{0}+\mathbf{Z}_{m}\mathbf{y}_{m}\), where \(\mathbf{y}_{m}\) minimizes \(||\mathbf{r}_{0}-\mathbf{A}\mathbf{Z}_{m}\mathbf{y}||_{2}\) over \(\mathbf{x}_{0}+\mathrm{span}\{\mathbf{Z}_{m}\}\), with \(\mathbf{Z}_{m}=\boldsymbol{\mathcal{M}}(\mathbf{V}_{m})=[\mathcal{M}_{1}(\mathbf{v}_{1}),\cdots,\mathcal{M}_{m}(\mathbf{v}_{m})]\); both \(\mathbf{Z}_{m}\) and \(\mathbf{V}_{m}\) need to be stored. We point out that the operator \(\boldsymbol{\mathcal{M}}\) represents the action of the nonlinear operators \(\mathcal{M}_{j}\) on the set of basis vectors \(\mathbf{v}_{j}\). The restarted FGMRES(\(m\), \(m_{i}\)) pseudocode is presented in Algorithm 2. We denote by \(m_{i}\) the size of the Krylov subspace associated with the GMRES solver devoted to the inner linear system. We point out that the stopping criterion is essentially based on the least-squares relative residual \(||\mathbf{c}-\bar{\mathbf{H}}_{m}\mathbf{y}_{m}||/||\mathbf{b}||\), with \(\mathbf{c}=\|\mathbf{r}_{0}\|\mathbf{e}_{1}\) (see [30] for several definitions of stopping criteria and some practical considerations for the implementation of the key points of the algorithm). The latter is a cheap approximation of the true residual \(\|\mathbf{A}\mathbf{x}-\mathbf{b}\|/\|\mathbf{b}\|\), assuming \(\mathbf{V}_{m+1}\) is orthonormal.
In an inner-outer FGMRES, the preconditioning operation \(\mathbf{z}_{j}=\mathcal{M}_{j}(\mathbf{v}_{j})\) of step 4 can be thought of as a means of approximately solving \(\mathbf{A}\mathbf{z}_{j}=\mathbf{v}_{j}\), where \(\mathbf{M}_{j}^{-1}\approx\mathbf{A}^{-1}\) is the inner preconditioner. To prevent unnecessary propagation of rounding errors, the relative true residual is only computed at the end of each cycle and is used to construct the first vector of the next Krylov subspace basis. In the case of large and ill-conditioned linear systems, least-squares and true residuals may differ due to loss of orthogonality during the construction of the Krylov basis. A standard way to tackle such a phenomenon is to ask for a second iteration of the Modified Gram-Schmidt algorithm (loop from line 6 to 9) in order to strengthen the orthogonality of the Krylov basis. In this work, we consider the standard GMRES with stationary right-preconditioning on one hand, and the specific class of flexible nested GMRES strategies, where an inner GMRES acts as an iterative preconditioner, on the other hand. Two stationary preconditioning strategies are considered. The first one consists of a block version of a standard LU-SGS iterative solver. LU-SGS is applied to a first-order diagonally dominant upwind approximation of the flux Jacobian matrix inspired by [31]. This operator is based on a first-order spatial discretization of the convective and of the viscous fluxes using a simplifying thin-layer assumption [32]. This strategy leads to a very compact stencil for the preconditioning matrix, which will be denoted by \(\mathbf{J}_{O1}^{APP}\) in the sequel of this document.

```
1   Choose an initial guess \(\mathbf{x}_{0}\), a convergence threshold \(\epsilon\) and a Krylov size \(m\)
2   Compute \(\mathbf{r}_{0}=\mathbf{b}-\mathbf{A}\mathbf{x}_{0}\), \(\beta=||\mathbf{r}_{0}||\), \(\mathbf{c}=\beta\mathbf{e}_{1}\) and \(\mathbf{v}_{1}=\mathbf{r}_{0}/\beta\)
3   for \(j\gets 1\) to \(m\) do
4     \(\mathbf{z}_{j}=\mathcal{M}_{j}(\mathbf{v}_{j})\)   \(\triangleright\) Inner iteration solved with GMRES(\(m_{i}\))
5     \(\mathbf{w}=\mathbf{A}\mathbf{z}_{j}\)
6     for \(i\gets 1\) to \(j\) do
7       \(h_{i,j}=\mathbf{v}_{i}^{T}\mathbf{w}\)
8       \(\mathbf{w}=\mathbf{w}-h_{i,j}\mathbf{v}_{i}\)
9     end for
10    \(h_{j+1,j}=||\mathbf{w}||_{2}\) and \(\mathbf{v}_{j+1}=\mathbf{w}/h_{j+1,j}\)
11    Solve the least-squares problem \(\min_{\mathbf{y}}||\mathbf{c}-\bar{\mathbf{H}}_{j}\mathbf{y}||_{2}\) for \(\mathbf{y}_{j}\)
12    Exit if \(||\mathbf{c}-\bar{\mathbf{H}}_{j}\mathbf{y}_{j}||/||\mathbf{b}||\leq\epsilon\)
13  end for
14  Compute \(\mathbf{x}_{m}=\mathbf{x}_{0}+\mathbf{Z}_{m}\mathbf{y}_{m}\) where \(\mathbf{Z}_{m}=[\mathbf{z}_{1},...,\mathbf{z}_{m}]\)
15  Set \(\mathbf{x}_{0}=\mathbf{x}_{m}\) and go to 2
```
**Algorithm 2** Right-preconditioned FGMRES(\(m\),\(m_{i}\))

The second one is a Block Incomplete LU (BILU(\(k\))) factorization applied to either an approximate or an exact first-order flux Jacobian matrix. For the so-called first-order exact Jacobian matrix \(\mathbf{J}_{O1}^{EXA}\), a first-order spatial Roe scheme is used for the discretization of the mean-flow convective fluxes and a 5-point corrected centered discretization scheme is used for the diffusive fluxes. More specifically, the spatial gradients at the cell interfaces are modified to avoid high-frequency oscillations (see [33] or [34]).
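To make the mechanics of Algorithm 2 concrete, a minimal dense NumPy sketch of one restart cycle of right-preconditioned FGMRES is given below. It is only illustrative: the preconditioning callable `M` stands for any of the stationary or inner-GMRES strategies discussed here, breakdown handling and Givens-based least-squares updates are omitted, and it does not reflect the parallel implementation used in elsA.

```python
import numpy as np

def fgmres_cycle(A, b, x0, M, m, tol=1e-8):
    """One restart cycle of right-preconditioned FGMRES (see Algorithm 2).
    A: (N, N) array, M: callable v -> z approximating A^{-1} v (may change per call),
    m: size of the Krylov basis. Returns the updated iterate and the relative residual."""
    N = b.size
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    V = np.zeros((N, m + 1))              # orthonormal Krylov basis V_{m+1}
    Z = np.zeros((N, m))                  # preconditioned directions Z_m (must be stored)
    H = np.zeros((m + 1, m))              # upper Hessenberg matrix
    V[:, 0] = r0 / beta
    c = np.zeros(m + 1); c[0] = beta      # right-hand side of the least-squares problem
    for j in range(m):
        Z[:, j] = M(V[:, j])              # flexible preconditioning step (line 4)
        w = A @ Z[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt (lines 6 to 9)
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
        # small least-squares problem min_y || c - Hbar_j y ||_2 (line 11)
        y, *_ = np.linalg.lstsq(H[: j + 2, : j + 1], c[: j + 2], rcond=None)
        res = np.linalg.norm(c[: j + 2] - H[: j + 2, : j + 1] @ y) / np.linalg.norm(b)
        if res <= tol:                    # cheap least-squares stopping criterion (line 12)
            return x0 + Z[:, : j + 1] @ y, res
    return x0 + Z @ y, res                # restart: call again with the returned iterate as x0
```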
The BILU(\(k\)) preconditioner will be applied either to the first-order approximate Jacobian matrix \(\mathbf{J}_{O1}^{APP}\), or to the first-order exact Jacobian matrix \(\mathbf{J}_{O1}^{EXA}\). \(\mathbf{J}_{O1}^{EXA}\) is different from \(\mathbf{J}_{O1}^{APP}\) when it comes to memory footprint. Indeed, \(\mathbf{J}_{O1}^{EXA}\) has a 9-point stencil in 2D whereas a 5-point stencil is associated with \(\mathbf{J}_{O1}^{APP}\). In 3D, we have a 7-point stencil for \(\mathbf{J}_{O1}^{APP}\) and a stencil of 19 points for \(\mathbf{J}_{O1}^{EXA}\). Consequently, better robustness is achieved, but at the price of about twice the storage for \(\mathbf{J}_{O1}^{EXA}\) compared to \(\mathbf{J}_{O1}^{APP}\). For a better understanding, we reproduce the 3D stencils in Fig. 6.

Figure 6: Stencils of the various Jacobian matrices. Stencil (b) is used for BILU(k) preconditioners and stencil (c) for BILU(k) or iterative LU-SGS preconditioners.

We point out that the relevant numerical ingredients that characterize the GMRES algorithm are the matrix product (step 5 in Algorithm 2), the preconditioning strategy (step 4) and the scalar product (step 7). These algebraic operations are global in conjunction with a domain decomposition method. More specifically, the globalization of the preconditioner (step 4) is achieved with a Restricted Additive Schwarz method [35]. In addition, the product by the operator \(\mathbf{A}\) (step 5) is exact. We thus get a global and parallel FGMRES(\(m\),\(m_{i}\)).

### Deflated restarting

The main drawback of the restarted GMRES(\(m\)) and FGMRES(\(m\), \(m_{i}\)) is the loss of the spectral information contained in the current Krylov subspace during the restarting procedure. Let us recall the definition of a Ritz pair [36] as it plays an important role in the strategy of deflated restarting.

**Definition 4.1** (Ritz pair): _Consider a subspace \(\mathcal{U}\) of \(\mathbb{R}^{N}\). Given an operator \(\mathbf{B}\in\mathbb{R}^{N\times N}\), a scalar \(\lambda\in\mathbb{R}\) and \(\mathbf{y}\in\mathbb{R}^{N}\), (\(\lambda\), \(\mathbf{y}\)) is a Ritz pair of \(\mathbf{B}\) with respect to \(\mathcal{U}\) if and only if the residual of the eigenvalue problem \(\mathbf{By}=\lambda\mathbf{y}\) satisfies the following orthogonality condition:_ \[(\mathbf{By}-\lambda\mathbf{y})\perp\mathcal{U}\quad\forall\mathbf{y}\in\mathcal{U}. \tag{19}\] _Choosing \(\mathcal{U}\equiv\mathcal{K}_{m}(\mathbf{A},\mathbf{r}_{0})\) and \(\mathbf{B}\equiv\mathbf{A}\), we have_ \[(\mathbf{A}\mathbf{y}-\lambda\mathbf{y})\perp\mathcal{K}_{m}(\mathbf{A},\mathbf{r}_{0})\quad\forall\mathbf{y}\in\mathcal{K}_{m}(\mathbf{A},\mathbf{r}_{0}). \tag{20}\] _Recalling \(\mathbf{V}_{m}\) is an orthonormal basis of \(\mathcal{K}_{m}(\mathbf{A},\mathbf{r}_{0})\), we can write \(\mathbf{y}=\mathbf{V}_{m}\mathbf{g}\), \(\mathbf{g}\in\mathbb{R}^{m}\), and using the standard Arnoldi relation \(\mathbf{AV}_{m}=\mathbf{V}_{m+1}\bar{\mathbf{H}}_{m}\) in (20), we obtain the standard eigenvalue problem_ \[\mathbf{H}_{m}\mathbf{g}=\lambda\mathbf{g}, \tag{21}\] where \(\mathbf{H}_{m}=[\mathbf{I}_{m}\ \mathbf{0}_{m\times 1}]\,\bar{\mathbf{H}}_{m}\).
Thus, the spectral residual norm of the Ritz pair \(\{\lambda,\mathbf{y}=\mathbf{V}_{m}\mathbf{g}\}\) satisfies: \[\|\mathbf{A}(\mathbf{V}_{m}\mathbf{g})-\lambda(\mathbf{V}_{m}\mathbf{g})\|=\| \mathbf{A}\mathbf{V}_{m}\mathbf{g}-\mathbf{V}_{m}\mathbf{H}_{m}\mathbf{g}\|= \|\mathbf{V}_{m+1}\mathbf{\bar{H}}_{m}\mathbf{g}-\mathbf{V}_{m}\mathbf{H}_{m} \mathbf{g}\|=|h_{m+1,m}||\mathbf{e}_{m}^{T}\mathbf{g}|. \tag{22}\] From a small value of \(|h_{m+1,m}||\mathbf{e}_{m}^{T}\mathbf{g}|\), the Arnoldi method takes the Ritz pair as a good approximation of the eigenvalue-eigenvector pair of the operator \(\mathbf{A}\)[37, 38]. Indeed, neglecting the last row of the rectangular upper Hessenberg matrix \(\mathbf{\bar{H}}_{m}\) leads to: \[\mathbf{H}_{m}=\mathbf{V}_{m}^{T}\mathbf{A}\mathbf{V}_{m}. \tag{23}\] Therefore, the spectrum of \(\mathbf{H}_{m}\) naturally approximates a part of the spectrum of \(\mathbf{A}\). The idea of deflation techniques is to keep relevant spectral information from \(\mathbf{H}_{m}\) in the search space of the next cycle to expect a better convergence of the Krylov iterative methods. In [19], Giraud et al. take into account both smallest and largest eigenvalues to maximize the deflation effect. In contrast, Morgan [17] only deflates the smallest ones. This last strategy will be adopted for our numerical experiments. Actually, Ritz values of the operator \(\mathbf{A}\) give a good approximation of its exterior eigenvalues, i.e., of largest magnitude. Unfortunately, interior eigenvalues are of greater interest because they are generally responsible for the convergence stagnation. An alternative which does better at finding eigenvalues nearest zero was proposed by Morgan [39, 40] who introduced the harmonic Ritz values of \(\mathbf{A}\) with respect to \(\mathcal{K}_{m}(\mathbf{A},\mathbf{r}_{0})\), which are equivalent to the Ritz values of \(\mathbf{A}^{-1}\) with respect to \(\mathcal{AK}_{m}(\mathbf{A},\mathbf{r}_{0})\). In practice we will use the following definition **Definition 4.2** (harmonic Ritz pair).: Consider a subspace \(\mathcal{U}\) of \(\mathbb{R}^{N}\). Given an operator \(\mathbf{B}\in\mathbb{R}^{N\times N}\), a scalar \(\lambda\in\mathbb{R}\) and \(\mathbf{y}\in\mathbb{R}^{N}\), (\(\lambda\), \(\mathbf{y}\)) is a harmonic Ritz pair of \(\mathbf{B}\) with respect to \(\mathcal{U}\) if and only if the residual of the eigenvalue problem \(\mathbf{B}\mathbf{y}=\lambda\mathbf{y}\) satisfies the following orthogonality condition: \[(\mathbf{B}\mathbf{y}-\lambda\mathbf{y})\perp\mathbf{B}\mathcal{U}\quad \forall\mathbf{y}\in\mathcal{U}. \tag{24}\] Choosing \(\mathcal{U}\equiv\mathcal{K}_{m}(\mathbf{A},\mathbf{r}_{0})\) and \(\mathbf{B}\equiv\mathbf{A}\), we have, \[(\mathbf{A}\mathbf{y}-\lambda\mathbf{y})\perp\mathcal{AK}_{m}(\mathbf{A}, \mathbf{r}_{0})\quad\forall\mathbf{y}\in\mathcal{K}_{m}(\mathbf{A},\mathbf{r}_ {0}). 
\tag{25}\] Recalling \(\mathbf{V}_{m}\) is an orthonormal basis of \(\mathcal{K}_{m}(\mathbf{A},\mathbf{r}_{0})\), we can write \(\mathbf{y}=\mathbf{V}_{m}\mathbf{g}\), \(\mathbf{g}\in\mathbb{R}^{m}\), and using the flexible Arnoldi relation (18) in (25), we obtain the following generalized eigenvalue problem \[(\mathbf{A}\mathbf{Z}_{m})^{T}(\mathbf{A}\mathbf{Z}_{m}\mathbf{g} -\lambda\mathbf{V}_{m}\mathbf{g})=\mathbf{0} \tag{26}\] \[\Leftrightarrow \mathbf{\bar{H}}_{m}^{T}\mathbf{\bar{H}}_{m}\mathbf{g}=\lambda \mathbf{\bar{H}}_{m}^{T}\mathbf{V}_{m+1}^{T}\mathbf{V}_{m}\mathbf{g}\] (27) \[\Leftrightarrow \boxed{\mathbf{\bar{H}}_{m}^{T}\mathbf{\bar{H}}_{m}\mathbf{g}= \lambda\mathbf{H}_{m}^{T}\mathbf{g}}\] After some algebraic manipulations, (27) can be reformulated as a standard eigenvalue problem (see [36] for this formula): \[(\mathbf{H}_{m}+h_{m+1,m}^{2}\mathbf{H}_{m}^{-T}\mathbf{e}_{m}\mathbf{e}_{m}^ {T})\mathbf{g}=\lambda\mathbf{g}, \tag{28}\] where \(\lambda\) is a harmonic Ritz value and the corresponding harmonic Ritz vector is \(\mathbf{y}=\mathbf{V}_{m}\mathbf{g}\). To follow definition 4.2, we use the identity \(\mathbf{V}_{m}^{T}\mathbf{V}_{m}=\mathbf{I}\) into equation (26) which gives \[((\mathbf{A}\mathbf{Z}_{m}\mathbf{V}_{m}^{T})\mathbf{V}_{m})^{T}((\mathbf{A} \mathbf{Z}_{m}\mathbf{V}_{m}^{T})\mathbf{V}_{m}\mathbf{g}-\lambda\mathbf{V}_{ m}\mathbf{g})=\mathbf{0}. \tag{29}\] Thus, \(\mathbf{Y}_{m}=\{\mathbf{y}_{1},\cdots,\mathbf{y}_{m}\}\) corresponds to harmonic Ritz vectors of \(\mathbf{A}\mathbf{Z}_{m}\mathbf{V}_{m}^{T}\) with respect to \(\mathrm{range}(\mathbf{V}_{m})\). To underline the link with FGCRO-DR in section 6, we denote by _strategy B_ this deflation strategy. Obviously, in exact arithmetic solutions to (27) and (28) are identical. But it is not the case in finite precision since the operator \(\bar{\mathbf{H}}_{m}^{T}\bar{\mathbf{H}}_{m}\) is usually ill-conditioned in the fully linearized turbulence case. Therefore, the accurate estimation of the eigenvectors could be strongly altered and lead to stagnation of the relative true residual of the GMRES process. As mentioned by Giraud et al. [19], the flexible Arnoldi relation obtained at each restart within the FGMRES with deflated restarting (FGMRES-DR) framework given by \[\mathbf{AZ}_{k}=\mathbf{V}_{k+1}\bar{\mathbf{H}}_{k}, \tag{30}\] holds with \(\mathbf{Z}_{k}=\mathbf{Z}_{m}\mathbf{P}_{k}\), \(\mathbf{V}_{k+1}=\mathbf{V}_{m+1}\mathbf{P}_{k+1}\) and \(\bar{\mathbf{H}}_{k}=\mathbf{P}_{k+1}\bar{\mathbf{H}}_{m}\mathbf{P}_{k}\) where \(\mathbf{P}_{k}\in\mathbb{R}^{m\times k}\) corresponds to the orthonormal matrix whose columns are spanned by the set of the \(k\) retained eigenvectors of (28). Also, the right-hand side of the least-squares problem is computed at each restart as \(\mathbf{c}=\mathbf{V}_{m+1}^{T}\mathbf{r}_{m}\) which requires \((2N-1)(m+1)\) operations. Rollins and Fichtner [41] have proposed an efficient way to compute \(\mathbf{c}\) so that we can save some inner products. Indeed, they notice that the residual \(\mathbf{r}_{m}\) is a linear combination of the columns of the deflation subspace \(\mathbf{V}_{k+1}\). Consequently, we compute \(\mathbf{c}=[(\mathbf{r}_{m}^{T}\mathbf{V}_{k+1})^{T}\quad\mathbf{0}_{1\times( m-k)}]^{T}\) with \((2N-1)(k+1)\) operations only. Also, they improved the construction of \(\mathbf{P}_{k+1}\in\mathbb{R}^{(m+1)\times(k+1)}\) and in particular the last column of this matrix which is usually chosen as the vector \(\mathbf{c}-\bar{\mathbf{H}}_{m}\mathbf{y}_{m}\). 
More specifically, they demonstrated that the vector \(\mathbf{c}-\bar{\mathbf{H}}_{m}\mathbf{y}_{m}\) is colinear to the vector \([-\delta\mathbf{f}^{T}\;1]^{T}\) with \(\mathbf{f}=\mathbf{H}_{m}^{-T}\mathbf{e}_{m}\) and \(\delta=h_{m+1,m}\). The explicit relation is given below: \[\mathbf{c}-\bar{\mathbf{H}}_{m}\mathbf{y}_{m}=\begin{pmatrix}-\delta\mathbf{ f}\\ 1\end{pmatrix}\left(\frac{\omega-\delta\mathbf{f}^{T}\mathbf{v}}{1+\delta^{2} \mathbf{f}^{T}\mathbf{f}}\right), \tag{31}\] where \(\mathbf{v}\) and \(\omega\) are the first \(m\) rows of \(\mathbf{c}\) and the last element of \(\mathbf{c}\) respectively. This formulation reduces the propagation of rounding errors in the Arnoldi relation at restart. ### Alternative deflation strategy for FGMRES-DR From the expression of the solution \(\mathbf{x}_{m}=\mathbf{x}_{0}+\mathbf{Z}_{m}\mathbf{y}_{m}\), where \(\mathbf{Z}_{m}=[\mathbf{z}_{1},...,\mathbf{z}_{m}]\), it seems legitimate to write the Harmonic Ritz vectors as \(\mathbf{y}=\mathbf{Z}_{m}\mathbf{g}\), \(\mathbf{g}\in\mathbb{R}^{m}\). From (24) and (25) we have \[(\mathbf{A}\mathbf{y}-\lambda\mathbf{y})\perp\mathrm{span}( \mathbf{A}\mathbf{Z}_{m})\quad\forall\mathbf{y}\in\mathrm{span}(\mathbf{Z}_{m})\] \[\Leftrightarrow (\mathbf{A}\mathbf{Z}_{m})^{T}(\mathbf{A}\mathbf{Z}_{m}\mathbf{g} -\lambda\mathbf{Z}_{m}\mathbf{g})=\mathbf{0} \tag{32}\] \[\Leftrightarrow \boxed{\bar{\mathbf{H}}_{m}^{T}\bar{\mathbf{H}}_{m}\mathbf{g}= \lambda\bar{\mathbf{H}}_{m}^{T}\mathbf{V}_{m+1}^{T}\mathbf{Z}_{m}\mathbf{g}}\] Thus, \(\mathbf{Y}_{m}=\{\mathbf{y}_{1},\cdots,\mathbf{y}_{m}\}\) corresponds to harmonic Ritz vectors of \(\mathbf{A}\) or equivalently of \(\mathbf{A}\mathbf{Z}_{m}\mathbf{Z}_{m}^{\dagger}\) with respect to \(\mathrm{range}(\mathbf{Z}_{m})\). Noticing that the last row of \(\bar{\mathbf{H}}_{m}\) is null except the rightmost element \(h_{m+1,m}\), and defining \(\mathbf{f}=\mathbf{H}_{m}^{-T}\mathbf{e}_{m}\), we can reformulate (32) as \[[\mathbf{H}_{m}+h_{m+1,m}^{2}\mathbf{f}\,\mathbf{e}_{m}^{T}]\mathbf{g}=\lambda [\mathbf{I}_{m}\quad h_{m+1,m}\mathbf{f}]\mathbf{V}_{m+1}^{T}\mathbf{Z}_{m} \mathbf{g}. \tag{33}\] The product \(\mathbf{V}_{m+1}^{T}\mathbf{Z}_{m}\) brings an additional cost to the deflation process. A block form of this term can be found as \[\mathbf{V}_{m+1}^{T}\mathbf{Z}_{m}=\begin{bmatrix}\mathbf{V}_{k+1}^{T} \mathbf{Z}_{k}&\mathbf{V}_{k+1}^{T}\mathbf{Z}_{m-k}\\ \mathbf{V}_{m-k}^{T}\mathbf{Z}_{k}&\mathbf{V}_{m-k}^{T}\mathbf{Z}_{m-k}\\ \end{bmatrix}. \tag{34}\] Thanks to steps 10 and 11 in algorithm 17 the leading block \(\mathbf{V}_{k+1}^{T}\mathbf{Z}_{k}\) can be written as \[(\mathbf{V}_{k+1}^{T}\mathbf{Z}_{k})^{(i)}=\bar{\mathbf{P}}_{k+1}^{T}(\mathbf{V}_{ m+1}^{T}\mathbf{Z}_{m})^{(i-1)}\bar{\mathbf{P}}_{k}, \tag{35}\] where the superscript is related to the cycle index. Thus storing \(\mathbf{V}_{m+1}^{T}\mathbf{Z}_{m}\) at the end of each cycle allows us to compute at a cheap cost the \((k+1)\times k\) block \(\mathbf{V}_{k+1}^{T}\mathbf{Z}_{k}\) for the next cycle. To underline the link with FGCRO-DR in section 6, we denote by _strategy A_ this formulation. ### Application of GMRES-DR and FGMRES-DR to the solution of the fluid adjoint system In this section we apply the GMRES-DR Krylov solver and its flexible variant (variable preconditioning), to the solution of the fluid adjoint system of the M6 wing. 
We recall that the discretization of the fluid RANS equations relies on an upwind Roe scheme associated with a Van Albada limiter and that the Spalart-Allmaras turbulence model is fully linearized. Various stationary preconditioners are considered. First, we consider the legacy RAS+LU-SGS(nrelax, CFL), with nrelax = 6 and CFL = 100 selected as the best options, applied to the first-order approximate Jacobian operator. Then, the BILU(k) decomposition type preconditioners are used. More specifically, BILU(0) is applied to the first-order flux Jacobian operators (approximate and exact) and BILU(1) is applied to the first-order approximate Jacobian matrix only. We start with a GMRES-DR(\(m\), \(k\)) solver with a Krylov basis of size \(m\) = 120 and a deflation subspace of size \(k\) = 40. Figure 7a plots the convergence of the relative least-squares residual for the above preconditioning strategies. The least-squares residuals all converge up to a prescribed threshold of \(10^{-8}\). However, below \(10^{-6}\) some erratic behavior appears for the BILU preconditioner applied to the first-order approximate Jacobian operator. The square symbols correspond to the true residuals computed at the end of each GMRES cycle. Except for the preconditioning strategy based on the BILU(0) factorization of the first-order Jacobian matrix, some residual stagnation is observed. This is explained by the propagation of rounding errors in the Arnoldi relation during the restart procedure. A simple remedy for this is to perform a cold restart when the true and least-squares residuals differ too much. In this work, we set this criterion at \(\epsilon\) = 5% of relative discrepancy: \(\epsilon\) = \(|\mathbf{r}_{true}-\mathbf{r}_{leastsq}|/\mathbf{r}_{true}\) = 0.05. Convergence improvements are clearly illustrated in Figure 7b, where the true and least-squares residuals now match accurately. Also, it is not surprising to see a plateau appear when restarting. The second set of numerical experiments consisted of solving the adjoint linear system with a nested GMRES strategy, where the inner GMRES is right-preconditioned by the same variants of stationary preconditioners. Figure 8 shows the corresponding convergence histories of the relative and least-squares residuals in terms of number of iterations and matrix-vector products. The FGMRES-DR(\(m\), \(m_{i}\), \(k\)) solver is applied with a size of Krylov basis \(m\) = 70, a size \(m_{i}\) = 10 of the inner Krylov basis and a size \(k\) = 40 for the deflation subspace. This time, the true and least-squares residuals always match, which confirms the superior numerical stability of the flexible Krylov solver. For both standard and flexible GMRES-DR, the best convergence rate is obtained using the BILU(0) factorization of the exact first-order Jacobian matrix. FGMRES-DR reaches the tolerance threshold in 175 iterations compared to 1500 for GMRES-DR. However, the number of matrix-vector products is lower for the latter (1500 compared to 1920), illustrating a higher computational cost associated with FGMRES-DR for this specific numerical test.

Figure 7: Adjoint relative residual norm convergence history of GMRES-DR(120,40). Impact of various preconditioners. In 7a some stagnation of the true residual occurs due to propagation of rounding errors during the restarting process. In 7b a cold restart, corresponding to the convergence plateaus, allows to suppress this stagnation at the price of a higher computational cost.

Figure 8: Adjoint relative residual norm convergence history of FGMRES-DR(70,10,35). Impact of various preconditioners. The true and least-squares residuals match with numerical accuracy.

### Application of GMRES-DR and FGMRES-DR to the solution of the fluid-structure coupled-adjoint system

We are now interested in the application of GMRES-DR and FGMRES-DR to the solution of the coupled-adjoint system (13). The solution process is presented in algorithm 1.
The fluid-structure coupling is triggered when the ratio \(\rho=\mathbf{r}_{i}/\mathbf{r}_{i-1}\) between the residual \(\mathbf{r}_{i}\) at the end of the current cycle and the residual \(\mathbf{r}_{i-1}\) at the end of the previous cycle is less than a prescribed threshold. Relaxation of the structural adjoint vector (parameter \(\theta_{s}\) at step 9 in algorithm 1) can be activated to stabilize the convergence of the partitioned solver. A popular choice is to adapt \(\theta_{s}\) dynamically by using an autoregressive predictor such as the well-known Aitken strategy [42; 43]. In order to select appropriate values for \(\rho\) and \(\theta_{s}\), we carried out a parameter study with \(\rho\in\{0.4,0.5,0.6,0.7\}\) in combination with a constant relaxation parameter \(\theta_{s}=1.0\) on one hand, and a dynamic relaxation parameter \(\theta_{s}^{\star}\), with \(\theta_{s}^{0}=1.0\), on the other hand. The associated convergence curves are presented in Figure 9. For a constant relaxation parameter, the best convergence rate is achieved for a residual ratio \(\rho=0.6\) (plain black line). The impact of dynamic relaxation (dash-dotted lines) is systematically negative except for \(\rho=0.7\), where the efficiency of the combination of \(\rho=0.6\) with \(\theta_{s}=1.0\) is retrieved. In light of these results, it seems more appropriate to adopt a constant relaxation parameter strategy for our specific test case. Higher-order Aitken techniques could be applied to better capture the nonlinear profile of the convergence, but this is beyond the scope of this paper. Consequently, in the remainder of this document we will select \(\rho=0.6\) and \(\theta_{s}=1.0\) for the partitioned solver. In Figure 10 we plot the convergence of the fluid relative least-squares residual for GMRES-DR(120,40) associated with several preconditioners. The preconditioners RAS+LUSGS(6,100) and BILU(1), both applied to the approximate first-order Jacobian operator, perform similarly. As before, BILU(0) applied to the exact first-order Jacobian shows the best efficiency. Contrary to the purely fluid case, the cycles of the coupled solver are often shorter, which disadvantages the less efficient preconditioners. The same computation is repeated using a FGMRES-DR(70,10,35) Krylov solver. This time, BILU(1) applied to the approximate first-order Jacobian operator is not competitive and BILU(0) applied to the exact first-order Jacobian far outperforms the other strategies. Again, the nested solver requires roughly four times fewer iterations than its non-flexible counterpart, whereas the number of matrix-vector products is 10800 compared to 4400.

Figure 9: Coupled-adjoint relative residual norm convergence history of GMRES-DR(120,40) right-preconditioned by BILU(\(\mathbf{J}_{O1}^{EXA}\)). Impact of constant and dynamic relaxation parameter.

Figure 10: Coupled-adjoint relative residual norm convergence history of GMRES-DR(120,40). Impact of various preconditioners.

Figure 11: Coupled-adjoint relative residual norm convergence history of FGMRES-DR(70,10,35). Impact of various preconditioners.
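Before moving to subspace recycling, the deflated-restart compression step shared by GMRES-DR and FGMRES-DR, Eqs. (28)-(30), can be summarized by the following minimal dense NumPy sketch. It is only an illustration under simplifying assumptions (real arithmetic, no special handling of complex-conjugate harmonic Ritz pairs, dense storage) and is not the parallel solver used in elsA; for the non-flexible case one simply has \(\mathbf{Z}_{m}=\mathbf{V}_{m}\).

```python
import numpy as np

def deflated_restart(Vmp1, Zm, Hbar, c, y, k):
    """Compress A Z_m = V_{m+1} Hbar_m into A Z_k = V_{k+1} Hbar_k using k harmonic
    Ritz vectors, following the relations recalled around Eqs. (28)-(30).
    Vmp1: (N, m+1), Zm: (N, m), Hbar: (m+1, m), c: least-squares right-hand side,
    y: least-squares solution of the cycle that just finished."""
    m = Hbar.shape[1]
    Hm = Hbar[:m, :]                                  # square part of the Hessenberg matrix
    h = Hbar[m, m - 1]
    em = np.eye(m)[:, -1]
    f = np.linalg.solve(Hm.T, em)                     # f = H_m^{-T} e_m
    # standard eigenvalue problem (28): (H_m + h^2 f e_m^T) g = lambda g
    lam, G = np.linalg.eig(Hm + h**2 * np.outer(f, em))
    idx = np.argsort(np.abs(lam))[:k]                 # keep the k smallest harmonic Ritz values
    Pk, _ = np.linalg.qr(np.real(G[:, idx]))          # orthonormal m x k matrix P_k (real part only)
    # extend to P_{k+1} with the least-squares residual c - Hbar_m y_m as last column
    p_last = c - Hbar @ y
    Pk1 = np.zeros((m + 1, k + 1))
    Pk1[:m, :k] = Pk
    p_last -= Pk1[:, :k] @ (Pk1[:, :k].T @ p_last)    # orthogonalize and normalize
    Pk1[:, k] = p_last / np.linalg.norm(p_last)
    # compressed quantities carried over to the next cycle, cf. Eq. (30)
    return Vmp1 @ Pk1, Zm @ Pk, Pk1.T @ Hbar @ Pk
```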
## 5 Generalized Conjugate Residual with inner Orthogonalization and Deflated Restarting: GCRO-DR

As already explained, the current implementation of our partitioned solver does not take advantage of the spectral information produced during the Krylov solver instances applied to the sequence of preceding right-hand sides. An advanced inner-outer GMRES-DR solver is used for the approximate solution of the fluid block between consecutive fluid-structure couplings, but a simple restart is performed when the structural source term is updated. Unfortunately, deflated restarting based on subspace augmentation by appending approximate Ritz vectors to the Krylov subspace is not suitable for sequences of linear systems. Indeed, if the Ritz vectors are obtained from a previous linear system with another matrix or even another right-hand side, the concatenation of the recycled subspace with the current Krylov subspace does not form a Krylov subspace for the current problem. It is necessary to introduce a new Krylov solver that uses recycling of any given subspace without restriction. A famous one is the generalized conjugate residual with inner orthogonalization (GCRO). It belongs to the family of inner-outer methods [44] where the outer method is based on GCR, a minimum residual norm method proposed by Eisenstat, Elman and Schultz [45]. The inner solver is GMRES applied to a projected system matrix.

### The GCRO Krylov solver

As previously mentioned, we focus on the solution of a sequence of linear systems with a varying right-hand side only. This section briefly introduces the Generalized Conjugate Residual with inner Orthogonalization (GCRO) algorithm with deflated restarting. In this framework, deflation can re-use spectral information from a previous cycle or from a previous linear system. We start by recalling the original formulation of the generalized conjugate residual method (GCR) [45]. The idea is to introduce the concept of optimality for the solution residual. We want to solve \[\mathbf{A}\mathbf{x}^{(s)}=\mathbf{b}^{(s)},\quad s=1,2,... \tag{36}\] where \(\mathbf{A}\in\mathbb{R}^{n\times n}\) and the right-hand side \(\mathbf{b}^{(s)}\in\mathbb{R}^{n}\) changes from one system to the next. The GCR method relies on a given full-rank matrix \(\mathbf{U}_{k}\in\mathbb{R}^{n\times k}\) and an orthonormal matrix \(\mathbf{C}_{k}\in\mathbb{R}^{n\times k}\) as the image of \(\mathbf{U}_{k}\) by \(\mathbf{A}\), satisfying the relations \[\mathbf{A}\mathbf{U}_{k} =\mathbf{C}_{k}, \tag{37}\] \[\mathbf{C}_{k}^{T}\mathbf{C}_{k} =\mathbf{I}_{k}. \tag{38}\] For clarity, suppose that we want to solve the first system of the sequence of linear systems, i.e., \(s=1\). Given an initial guess \(\mathbf{x}_{0}^{(1)}\), the principle is to compute an approximation to the solution \(\mathbf{x}^{(1)}\in\mathbf{x}_{0}^{(1)}+\text{range}(\mathbf{U}_{k})\) that minimizes the corresponding residual norm over the approximation space \(\text{range}(\mathbf{U}_{k})\).
More precisely, GCR solves the following minimization problem \[\mathbf{x}^{(1)}=\underset{\mathbf{x}\in\mathbf{x}_{0}^{(1)}+\text{range}( \mathbf{U}_{k})}{\text{argmin}}\|\mathbf{b}^{(1)}-\mathbf{A}\mathbf{x}\|_{2}, \tag{39}\] The optimal solution of (39) over the subspace \(\mathbf{x}_{0}^{(1)}+\text{range}(\mathbf{U}_{k})\) is defined by \[\mathbf{x}^{(1)}=\mathbf{x}_{0}^{(1)}+\mathbf{U}_{k}\mathbf{C}_{k}^{T} \mathbf{r}_{0}^{(1)} \tag{40}\] where \(\mathbf{r}_{0}^{(1)}=\mathbf{b}^{(1)}-\mathbf{A}\mathbf{x}_{0}^{(1)}\). Consequently, the corresponding residual vector satisfies \[\mathbf{r}_{k}^{(1)}=\mathbf{b}^{(1)}-\mathbf{A}\mathbf{x}^{(1)}=(\mathbf{I}- \mathbf{A}\mathbf{U}_{k}\mathbf{C}_{k}^{T})\mathbf{r}_{0}^{(1)}=(\mathbf{I}- \mathbf{C}_{k}\mathbf{C}_{k}^{T})\mathbf{r}_{0}^{(1)},\quad\mathbf{r}_{k}^{(1) }\perp\text{range}(\mathbf{C}_{k}). \tag{41}\] The orthogonality of the residual \(\mathbf{r}_{k}^{(1)}\) to the subspace \(\mathcal{AK}_{k}(\mathbf{A},\mathbf{r}_{0}^{(1)})\) spanned by the columns of \(\mathbf{C}_{k}\) is known as the optimality property of the residual. In practice, GCR is not considered as a means to solve the linear system. Instead, for the first system or when no spectral information is available from a previous solve, it is replaced by GMRES which computes an implicit representation of the matrices \(\mathbf{U}_{k}\) and \(\mathbf{C}_{k}\)[10]. In order to simplify the notations, we temporarily omit the index of the current system in the equations. Given an orthonormal basis \(\mathbf{C}_{k}\) of an outer subspace and the corresponding residual \((\mathbf{I}-\mathbf{C}_{k}\mathbf{C}_{k}^{T})\mathbf{r}_{0}\) after \(k\) steps of GCR, GCRO(\(m\)) obtains the next iterates \(\mathbf{r}_{k+1}\) and \(\mathbf{x}_{k+1}\) by performing \(m\) steps of GMRES applied to the projected operator \(\mathbf{A}_{\mathbf{C}_{k}}=(\mathbf{I}-\mathbf{C}_{k}\mathbf{C}_{k}^{T}) \mathbf{A}\), thereby maintaining optimality of the inner residual with respect to the outer space. Let \(\mathbf{V}_{m+1}\in\mathbb{R}^{n\times(m+1)}\) be an orthonormal basis for \(\mathcal{K}_{m+1}(\mathbf{A}_{\mathbf{C}_{k}},\mathbf{r}_{k})\) with \(\mathbf{v}_{1}=\mathbf{r}_{k}/\|\mathbf{r}_{k}\|\), if we apply an inner GMRES we have the following Arnoldi relation: \[\mathbf{A}_{\mathbf{C}_{k}}\mathbf{V}_{m}=(\mathbf{I}-\mathbf{C}_{k}\mathbf{C }_{k}^{T})\mathbf{A}\mathbf{V}_{m}=\mathbf{V}_{m+1}\bar{\mathbf{H}}_{m}\quad \text{with}\quad\bar{\mathbf{H}}_{m}=\mathbf{V}_{m+1}^{T}\mathbf{A}_{\mathbf{ C}_{k}}\mathbf{V}_{m},\quad\bar{\mathbf{H}}_{m}\in\mathbb{R}^{(m+1)\times m} \tag{42}\] This equation can be expanded as \[\mathbf{A}\mathbf{V}_{m}=\mathbf{C}_{k}\mathbf{B}_{m}+\mathbf{V}_{m+1}\bar{ \mathbf{H}}_{m}\quad\text{with}\quad\mathbf{B}_{m}=\mathbf{C}_{k}^{T}\mathbf{ A}\mathbf{V}_{m},\quad\mathbf{B}_{m}\in\mathbb{R}^{k\times m} \tag{43}\] At the end of the inner GMRES cycle, using (42), the residual can be written as \[\mathbf{r}_{k+1}=\mathbf{r}_{k}-\mathbf{A}_{\mathbf{C}_{k}}\mathbf{V}_{m} \mathbf{y}_{m}=\mathbf{r}_{k}-\mathbf{V}_{m+1}\bar{\mathbf{H}}_{m}\mathbf{y}_ {m}, \tag{44}\] and the corresponding solution reads \[\mathbf{x}_{k+1}=\mathbf{x}_{k}+\mathbf{A}^{-1}\mathbf{A}_{\mathbf{C}_{k}} \mathbf{V}_{m}\mathbf{y}_{m}, \tag{45}\] where, using (37), we have \(\mathbf{A}^{-1}\mathbf{A}_{\mathbf{C}_{k}}=\mathbf{I}-\mathbf{U}_{k}\mathbf{C }_{k}^{T}\mathbf{A}\). 
Using (40) and (45) we obtain the expression for the solution at the end of the first cycle: \[\mathbf{x}_{k+1} =\mathbf{x}_{0}^{(1)}+\mathbf{U}_{k}\mathbf{C}_{k}^{T}\mathbf{r} _{0}^{(1)}+(\mathbf{I}-\mathbf{U}_{k}\mathbf{C}_{k}^{T}\mathbf{A})\mathbf{V}_ {m}\mathbf{y}_{m}\] \[=\mathbf{x}_{0}^{(1)}+\mathbf{U}_{k}\mathbf{C}_{k}^{T}\left( \mathbf{r}_{0}^{(1)}-\mathbf{A}\mathbf{V}_{m}\mathbf{y}_{m}\right)+\mathbf{V}_ {m}\mathbf{y}_{m}\] \[=\mathbf{x}_{0}^{(1)}+\mathbf{U}_{k}\mathbf{z}_{k}+\mathbf{V}_{m} \mathbf{y}_{m}, \tag{46}\] with \[\mathbf{z}_{k}=\mathbf{C}_{k}^{T}\mathbf{r}_{0}^{(1)}-\mathbf{B}_{m}\mathbf{y} _{m},\quad\mathbf{B}_{m}=\mathbf{C}_{k}^{T}\mathbf{A}\mathbf{V}_{m}. \tag{47}\] We easily show that the residual computed in the inner GMRES equals the true outer residual: \[\mathbf{r}_{k+1}=\mathbf{b}-\mathbf{A}\mathbf{x}_{k+1}=\mathbf{b}-\mathbf{A} \mathbf{x}_{k}-\mathbf{A}_{\mathbf{C}_{k}}\mathbf{V}_{m}\mathbf{y}_{m}= \mathbf{r}_{k}-\mathbf{V}_{m+1}\bar{\mathbf{H}}_{m}\mathbf{y}_{m}. \tag{48}\] Because \(\text{range}(\mathbf{C}_{k})\bot\text{range}(\mathbf{V}_{m+1})\) by construction, it follows that \(\mathbf{x}_{k+1}\in\mathbf{x}_{0}^{(1)}+\text{range}(\mathbf{U}_{k})\oplus \text{range}(\mathbf{V}_{m})\) is the unique solution of the global residual minimization problem over the subspace spanned by the complementary subspaces \(\text{range}(\mathbf{U}_{k})\) and \(\text{range}(\mathbf{V}_{m})\) (see Theorem 2.2 and its proof in [21]). Now, introducing the orthogonal decomposition \(\bar{\mathbf{H}}_{m}=\bar{\mathbf{Q}}_{m}\mathbf{R}_{m}\), the vector of reduced coordinates \(\mathbf{y}_{m}\) is such that \[\mathbf{y}_{m}=\underset{\mathbf{y}\in\mathbb{R}^{m}}{\operatorname{argmin}}\| \mathbf{r}_{k+1}\|_{2}=\underset{\mathbf{y}\in\mathbb{R}^{m}}{\operatorname{ argmin}}\|\mathbf{r}_{k}-\mathbf{A}_{\mathbf{C}_{k}}\mathbf{V}_{m}\mathbf{y}\|_{2}= \mathbf{R}_{m}^{-1}\bar{\mathbf{Q}}_{m}^{T}\|\mathbf{r}_{k}\|_{2}\mathbf{e}_{1} \tag{49}\] In the above Arnoldi process (43) of the inner GMRES(\(m\)), the vectors \(\mathbf{A}\mathbf{v}_{i}\) are first orthogonalized against \(\mathbf{C}_{k}\), thus constructing \(\mathbf{V}_{m+1}\) such that \(\mathbf{C}_{k}^{T}\mathbf{V}_{m+1}=\mathbf{0}\). ### The GCRO-DR Krylov solver So far, we have not yet explained how the matrix \(\mathbf{C}_{k}\) is built. A fundamental difference between GCRO and GCRO-DR is that GCRO is a nested Krylov solver where the outer spaces \(\mathbf{U}_{k}\) and \(\mathbf{C}_{k}\) keep growing and GMRES(\(m\)) is applied at each iteration \(k\), whereas GCRO-DR supplements given deflation subspaces derived from a previous cycle with \((m-k)\) iterations of GMRES. To comply to the GCRO-DR formalism, we first slightly change the notation in (42) and (43) to point out that we perform \(m-k\) steps (instead of \(m\)) of inner GMRES. This produces the Arnoldi relation \[\mathbf{A}_{\mathbf{C}_{k}}\mathbf{V}_{m-k}=\mathbf{V}_{m-k+1}\bar{\mathbf{H}}_{ m-k}\quad\Leftrightarrow\quad\bar{\mathbf{H}}_{m-k}=\mathbf{V}_{m-k+1}^{T} \mathbf{A}_{\mathbf{C}_{k}}\mathbf{V}_{m-k},\quad\bar{\mathbf{H}}_{m-k}\in \mathbb{R}^{(m-k+1)\times(m-k)} \tag{50}\] Combining (37) and (50) we have \[\mathbf{A}\left[\mathbf{U}_{k}\quad\mathbf{V}_{m-k}\right]=[\mathbf{C}_{k} \quad\mathbf{V}_{m-k+1}]\begin{bmatrix}\mathbf{I}_{k}&\mathbf{B}_{m-k}\\ \mathbf{0}&\bar{\mathbf{H}}_{m-k}\end{bmatrix}, \tag{51}\] with \(\mathbf{B}_{m-k}=\mathbf{C}_{k}^{T}\mathbf{AV}_{m-k},\mathbf{B}_{m-k}\in \mathbb{R}^{k\times(m-k)}\). 
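The inner Arnoldi process with the projected operator \(\mathbf{A}_{\mathbf{C}_{k}}\), Eqs. (50)-(51), can be sketched with dense NumPy arrays as follows. The sketch is purely illustrative (dense storage, no preconditioning, no breakdown handling) and simply accumulates \(\mathbf{V}_{m-k+1}\), \(\bar{\mathbf{H}}_{m-k}\) and \(\mathbf{B}_{m-k}\) by first orthogonalizing each new direction against \(\mathbf{C}_{k}\); the solution and residual updates then follow from the relations (44)-(47).

```python
import numpy as np

def projected_arnoldi(A, Ck, r, m_minus_k):
    """m-k Arnoldi steps with the projected operator (I - C_k C_k^T) A, Eq. (50),
    also returning B = C_k^T A V used in the coupled relation (51).
    A: (N, N), Ck: (N, k) orthonormal, r: current residual, orthogonal to range(C_k)."""
    N, k = Ck.shape
    V = np.zeros((N, m_minus_k + 1))
    H = np.zeros((m_minus_k + 1, m_minus_k))
    B = np.zeros((k, m_minus_k))
    V[:, 0] = r / np.linalg.norm(r)
    for j in range(m_minus_k):
        w = A @ V[:, j]
        B[:, j] = Ck.T @ w                 # component kept in the outer space
        w -= Ck @ B[:, j]                  # orthogonalize against C_k first
        for i in range(j + 1):             # then modified Gram-Schmidt against V
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H, B
```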
Numerical tests suggest that the rightmost matrix in (51) is ill-conditioned. To reduce unnecessary ill-conditioning we can compute the diagonal matrix \(\mathbf{D}_{k}=\text{diag}(\|\mathbf{u}_{1}\|^{-1},\|\mathbf{u}_{2}\|^{-1}, \ldots,\|\mathbf{u}_{k}\|^{-1})\) such that \(\bar{\mathbf{U}}_{k}=\mathbf{U}_{k}\mathbf{D}_{k}\), has unit columns. Defining \[\hat{\mathbf{V}}_{m}=[\tilde{\mathbf{U}}_{k}\quad\mathbf{V}_{m-k}],\quad\hat {\mathbf{W}}_{m+1}=[\mathbf{C}_{k}\quad\mathbf{V}_{m-k+1}],\quad\bar{\mathbf{ H}}_{m}=\begin{bmatrix}\mathbf{D}_{k}&\mathbf{B}_{m-k}\\ \mathbf{0}&\bar{\mathbf{H}}_{m-k}\end{bmatrix},\] the Arnoldi relation (51) can be cast into \[\mathbf{A}\hat{\mathbf{V}}_{m}=\hat{\mathbf{W}}_{m+1}\bar{\mathbf{H}}_{m}. \tag{52}\] From (46) we have \(\mathbf{x}_{1}=\mathbf{x}_{0}+\tilde{\mathbf{U}}_{k}\mathbf{z}_{k}+\mathbf{V} _{m-k}\mathbf{y}_{m-k}=\mathbf{x}_{0}+\hat{\mathbf{V}}_{m}\mathbf{y}_{m}\) with \(\mathbf{y}_{m}=\text{argmin}_{\mathbf{y}}\|\bar{\mathbf{H}}_{m}\mathbf{y}- \hat{\mathbf{W}}_{m+1}^{T}\mathbf{r}_{0}\|_{2}\). The expanded form of this least-squares problem reads \[\mathbf{y}_{m}=[\mathbf{z}_{k},\mathbf{y}_{m-k}]=\text{argmin}_{\mathbf{z}, \mathbf{y}}\left\|\begin{bmatrix}\mathbf{D}_{k}&\mathbf{B}_{m-k}\\ \mathbf{0}&\bar{\mathbf{H}}_{m-k}\end{bmatrix}\begin{bmatrix}\mathbf{z}\\ \mathbf{y}\end{bmatrix}-\begin{bmatrix}\mathbf{C}_{k}^{T}\mathbf{r}_{0}\\ \mathbf{V}_{m-k+1}^{T}\mathbf{r}_{0}\end{bmatrix}\right\|_{2} \tag{53}\] In [21] and [46] the authors suggest to solve (53) blockwise, first for \(\mathbf{y}_{m-k}\) and then set \(\mathbf{z}_{k}=\mathbf{D}_{k}^{-1}[\mathbf{C}_{k}^{T}\mathbf{r}_{0}-\mathbf{B} _{m-k}\mathbf{y}_{m-k}]\). Recall that the first Krylov vector in \(\mathbf{V}_{m-k+1}\) is \(\mathbf{v}_{k+1}=(\mathbf{I}-\mathbf{C}_{k}\mathbf{C}_{k}^{T})\mathbf{r}_{0}\), which simplifies the expression for the right-hand side of the least-squares sub-problem to \(\mathbf{V}_{m-k+1}^{T}\mathbf{r}_{0}=\mathbf{V}_{m-k+1}^{T}(\mathbf{I}- \mathbf{C}_{k}\mathbf{C}_{k}^{T})\mathbf{r}_{0}=\beta\mathbf{e}_{1}^{m-k+1}\), where \(\beta=\|(\mathbf{I}-\mathbf{C}_{k}\mathbf{C}_{k}^{T})\mathbf{r}_{0}\|_{2}\) and \(\mathbf{e}_{1}^{m-k+1}\) is the first unit vector of \(\mathbb{R}^{m-k+1}\). We have now completed the process for the first linear system which formulates the solution from the complementary subspaces \(\tilde{\mathbf{U}}_{k}\) and \(\mathbf{V}_{m-k}\). At this stage, we introduce deflated restarting within the GCRO framework to build the outer space \(\mathbf{C}_{k}\) for the next cycle. Like GMRES-DR, the deflation subspace is spanned by the \(k\) harmonic Ritz vectors corresponding to harmonic Ritz values of smallest magnitude. Using definition 4.2 with \(\mathcal{U}\equiv\text{span}\{\hat{\mathbf{V}}_{m}\}\) and \(\mathbf{B}\equiv\mathbf{A}\) we can write \(\mathbf{y}=\hat{\mathbf{V}}_{m}\mathbf{g}\) which gives the orthogonality relation \[(\mathbf{A}\hat{\mathbf{V}}_{m})^{T}(\mathbf{A}\hat{\mathbf{V}}_{m}\mathbf{g}- \theta\hat{\mathbf{V}}_{m}\mathbf{g})=\mathbf{0}. 
\tag{54}\] Now using (52) in (54) and recalling that \(\hat{\mathbf{W}}_{m+1}\) is orthonormal, we obtain the generalized eigenvalue problem \[\boxed{\bar{\mathbf{H}}_{m}^{T}\bar{\mathbf{H}}_{m}\mathbf{g}=\theta\bar{ \mathbf{H}}_{m}^{T}\hat{\mathbf{W}}_{m+1}^{T}\hat{\mathbf{V}}_{m}\mathbf{g}} \tag{55}\] In practice, for better numerical accuracy, instead of computing the smallest eigenvalues of (55) we prefer to compute the largest eigenvalues \(\theta^{-1}\) of \(\bar{\mathbf{H}}_{m}^{T}\hat{\mathbf{W}}_{m+1}^{T}\hat{\mathbf{V}}_{m}\mathbf{ g}=\theta^{-1}\bar{\mathbf{H}}_{m}^{T}\bar{\mathbf{H}}_{m}\mathbf{g}\). Since \(\hat{\mathbf{W}}_{m+1}=[\mathbf{C}_{k}\quad\mathbf{V}_{m-k+1}]\), the matrix product \(\hat{\mathbf{W}}_{m+1}^{T}\hat{\mathbf{V}}_{m}\) has the following block structure: \[\hat{\mathbf{W}}_{m+1}^{T}\hat{\mathbf{V}}_{m}=\begin{bmatrix}\mathbf{C}_{k}^ {T}\tilde{\mathbf{U}}_{k}&\mathbf{0}_{k\times(m-k)}\\ \mathbf{V}_{m-k+1}^{T}\tilde{\mathbf{U}}_{k}&\begin{bmatrix}\mathbf{I}_{m-k}\\ \mathbf{0}_{1\times(m-k)}\end{bmatrix}\end{bmatrix}\!. \tag{56}\] We can further simplify this expression. Let \(\mathbf{v}_{k+1}=\mathbf{r}_{k}/\|\mathbf{r}_{k}\|\), we have \(\hat{\mathbf{V}}_{m}=[\tilde{\mathbf{U}}_{k},\mathbf{v}_{k+1},\mathbf{V}_{m-k-1}]\) and \(\hat{\mathbf{W}}_{m+1}=[\mathbf{C}_{k},\mathbf{v}_{k+1},\mathbf{V}_{m-k}]\). It can also be shown that \(\mathrm{range}(\tilde{\mathbf{U}}_{k},\mathbf{v}_{k+1})=\mathrm{range}( \mathbf{C}_{k},\mathbf{v}_{k+1})=\mathrm{range}(\mathbf{Y}_{k},\mathbf{v}_{k+1})\), \(\mathbf{Y}_{k}=\hat{\mathbf{V}}_{m}\mathbf{P}_{k}\) being the deflation subspace spanned by the Ritz vectors (see [23]). It follows that the bottom left \((m-k)\times(k+1)\) block in (56) is null, which gives \[\hat{\mathbf{W}}_{m+1}^{T}\hat{\mathbf{V}}_{m}=\left[\begin{array}{c|c|c|c} \mathbf{C}_{k}^{T}\tilde{\mathbf{U}}_{k}&\mathbf{0}_{k\times 1}&\mathbf{0}_{k \times(m-k-1)}\\ \mathbf{v}_{1}^{T}\tilde{\mathbf{U}}_{k}&1&\mathbf{0}_{1\times(m-k-1)}\\ \hline\mathbf{V}_{m-k}^{T}[\tilde{\mathbf{U}}_{k},\mathbf{r}_{k}/\|\mathbf{r}_ {k}\|]&\left[\begin{array}{c}\mathbf{I}_{m-k-1}\\ \mathbf{0}_{1\times(m-k-1)}\end{array}\right]\end{array}\right]=\left[ \begin{array}{c|c|c}\mathbf{C}_{k}^{T}\tilde{\mathbf{U}}_{k}&\mathbf{0}_{k \times 1}&\mathbf{0}_{k\times(m-k-1)}\\ \mathbf{v}_{1}^{T}\tilde{\mathbf{U}}_{k}&1&\mathbf{0}_{1\times(m-k-1)}\\ \hline\mathbf{0}_{(m-k)\times(k+1)}&\left[\begin{array}{c}\mathbf{I}_{m-k-1 }\\ \mathbf{0}_{1\times(m-k-1)}\end{array}\right]\end{array}\right]. \tag{57}\] In [23] the authors suggest a number of improvements, mainly based on recursion formulas, to save computational cost in forming the block \(\mathbf{C}_{k}^{T}\tilde{\mathbf{U}}_{k}\) in (57) at a cost independent of the problem size. The deflation subspace spanned by the Ritz vectors \(\mathbf{Y}_{k}=\hat{\mathbf{V}}_{m}\mathbf{P}_{k}\) is then shifted by \(\mathbf{A}\) to give \[\mathbf{A}\mathbf{Y}_{k}=\hat{\mathbf{W}}_{m+1}\bar{\mathbf{H}}_{m}\mathbf{P}_ {k}. \tag{58}\] Introducing in (58) the reduced QR factorization \(\bar{\mathbf{H}}_{m}\mathbf{P}_{k}=\bar{\mathbf{Q}}_{m}\mathbf{R}\), \(\bar{\mathbf{Q}}_{m}\in\mathbb{R}^{(m+1)\times k}\), we obtain \(\mathbf{A}\mathbf{Y}_{k}=(\hat{\mathbf{W}}_{m+1}\bar{\mathbf{Q}}_{m})\mathbf{ R}=\mathbf{C}_{k}\mathbf{R}\). 
The outer residual subspace for the next cycle is then given by \[\mathbf{C}_{k}=\hat{\mathbf{W}}_{m+1}\bar{\mathbf{Q}}_{m}, \tag{59}\] and, from \(\mathbf{C}_{k}=\mathbf{A}\mathbf{Y}_{k}\mathbf{R}^{-1}=\mathbf{A}\mathbf{U}_{k}\), it follows directly the definition of the corresponding solution subspace \[\mathbf{U}_{k}=\mathbf{Y}_{k}\mathbf{R}^{-1}. \tag{60}\] We note that (59) implies that \(\mathbf{C}_{k}\) is an orthonormal matrix since \(\hat{\mathbf{W}}_{m+1}\) and \(\bar{\mathbf{Q}}_{m}\) are also orthonormal. We are now in a position to perform the next cycle \((i+1)\) and obtain the next iterates \(\mathbf{r}_{i+1}^{(s)}\) and \(\mathbf{x}_{i+1}^{(s)}\) for the current system \((s)\) by performing \(m-k\) steps of GMRES applied to the projected operator \(\mathbf{A}_{\mathbf{C}_{k}}=(\mathbf{I}-\mathbf{C}_{k}\mathbf{C}_{k}^{T}) \mathbf{A}\). We still have to detail the construction of the deflation subspaces \(\mathbf{U}_{k}\) and \(\mathbf{C}_{k}\) at the end of the initial GMRES cycle on one hand, and at the beginning of a subsequent linear system on the other hand. _First cycle of initial linear system_ After \(m\) iterations of GMRES we generate \(\mathbf{V}_{m+1}\) and \(\bar{\mathbf{H}}_{m}\) which are related by the Arnoldi relation \(\mathbf{A}\mathbf{V}_{m}=\mathbf{V}_{m+1}\bar{\mathbf{H}}_{m}\). We then solve the standard eigenvalue problem (28) to obtain the matrix \(\mathbf{P}_{k}\in\mathbb{R}^{m\times k}\) which columns correspond to the retained eigenvectors used for the construction of the harmonic Ritz vectors \(\mathbf{Y}_{k}=\mathbf{V}_{m}\mathbf{P}_{k}\). Defining \([\mathbf{Q},\mathbf{R}]\) as the reduced QR-factorization of \(\bar{\mathbf{H}}_{m}\mathbf{P}_{k}\), we have \[\mathbf{A}\mathbf{Y}_{k}=\mathbf{A}\mathbf{V}_{m}\mathbf{P}_{k}=\mathbf{V}_{m +1}\mathbf{\bar{H}}_{m}\mathbf{P}_{k}=(\mathbf{V}_{m+1}\mathbf{Q})\mathbf{R}. \tag{61}\] Since \(\mathbf{V}_{m+1}\) and \(\mathbf{Q}\) are orthonormal matrices and we are looking for a relation of the type (37), we directly deduce from (61) the following identities to be used for the next cycle: \[\mathbf{C}_{k} =\mathbf{V}_{m+1}\mathbf{Q} \tag{62}\] \[\mathbf{U}_{k} =\mathbf{Y}_{k}\mathbf{R}^{-1} \tag{63}\] At the end of the GMRES(\(m\)) cycle, the optimality property \(\mathbf{r}_{1}^{(1)}\perp\mathcal{AK}_{m}(\mathbf{A},\mathbf{r}_{0}^{(1)})\) of the residual \(\mathbf{r}_{1}^{(1)}=\mathbf{r}_{0}^{(1)}-\mathbf{A}\mathbf{V}_{m}\mathbf{y}_{m}\) can be easily verified. 
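To make the construction of the recycling pair concrete, the following is a minimal NumPy/SciPy sketch of equations (61)-(63) at the end of a first GMRES(\(m\)) cycle. It is only an illustration: the small eigenvalue problem used to select the deflation vectors is written here in the harmonic-Ritz form with the rank-one correction \(\mathbf{f}=\mathbf{H}_{m}^{-T}\mathbf{e}_{m}\) (the standard problem (28) recalled in section 6.2), complex-conjugate Ritz pairs are not handled, and all function names are ours rather than the solver's.

```python
import numpy as np
import scipy.linalg as sla

def build_recycle_space(V, H_bar, k):
    """Minimal sketch of equations (61)-(63): form C_k and U_k at the end of a
    first GMRES(m) cycle, given V = V_{m+1} (n x (m+1), orthonormal columns) and
    H_bar = Hbar_m ((m+1) x m) such that A V_m = V_{m+1} Hbar_m."""
    m = H_bar.shape[1]
    H = H_bar[:m, :]                     # square Hessenberg block H_m
    h = H_bar[m, m - 1]                  # subdiagonal entry h_{m+1,m}
    e_m = np.zeros(m); e_m[-1] = 1.0

    # Harmonic Ritz directions: eigenvectors of H_m + h^2 f e_m^T with f = H_m^{-T} e_m
    # (complex-conjugate Ritz pairs are ignored here for simplicity)
    f = sla.solve(H.T, e_m)
    theta, G = sla.eig(H + h**2 * np.outer(f, e_m))

    # Keep the k eigenvectors associated with the smallest |theta|
    P_k = G[:, np.argsort(np.abs(theta))[:k]]

    # Reduced QR factorization of Hbar_m P_k, then C_k = V_{m+1} Q  (eq. 62)
    Q, R = np.linalg.qr(H_bar @ P_k)
    C_k = V @ Q
    # Y_k = V_m P_k and U_k = Y_k R^{-1}  (eq. 63)
    U_k = V[:, :m] @ P_k @ np.linalg.inv(R)
    return C_k, U_k
```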
Recall that the reduced coordinates \(\mathbf{y}_{m}\) solve the least-squares problem \(\mathbf{y}_{m}=\underset{\mathbf{y}\in\mathbb{R}^{m}}{\mathrm{argmin}}\|\mathbf{V}_{m+1}^{T}\mathbf{r}_{0}^{(1)}-\bar{\mathbf{H}}_{m}\mathbf{y}\|_{2}\), which gives \(\mathbf{y}_{m}=(\bar{\mathbf{H}}_{m}^{T}\bar{\mathbf{H}}_{m})^{-1}\bar{\mathbf{H}}_{m}^{T}\mathbf{V}_{m+1}^{T}\mathbf{r}_{0}^{(1)}\).

_First cycle of a subsequent linear system_

For a subsequent linear system \((s)\) of the sequence, a cycle starts from the outer iterates \(\mathbf{x}_{k}^{(s)}\) and \(\mathbf{r}_{k}^{(s)}\) obtained after the initial projection onto the recycled subspace. Performing \(m-k\) Arnoldi steps with the projected operator \(\mathbf{A}_{\mathbf{C}_{k}^{(s)}}=(\mathbf{I}-\mathbf{C}_{k}^{(s)}\mathbf{C}_{k}^{(s)T})\mathbf{A}\) then yields the new iterates \[\mathbf{r}_{1}^{(s)} =\mathbf{r}_{k}^{(s)}-\mathbf{V}_{m-k+1}\bar{\mathbf{H}}_{m-k}\mathbf{y}_{m-k} \tag{66}\] \[\mathbf{x}_{1}^{(s)} =\mathbf{x}_{k}^{(s)}+\mathbf{A}^{-1}\mathbf{A}_{\mathbf{C}_{k}^{(s)}}\mathbf{V}_{m-k}\mathbf{y}_{m-k}\] \[=\mathbf{x}_{k}^{(s)}+(\mathbf{I}-\mathbf{U}_{k}^{(s)}\mathbf{C}_{k}^{(s)T}\mathbf{A})\mathbf{V}_{m-k}\mathbf{y}_{m-k}\] \[=\mathbf{x}_{k}^{(s)}+(\mathbf{V}_{m-k}-\mathbf{U}_{k}^{(s)}\mathbf{B}_{m-k})\mathbf{y}_{m-k}. \tag{67}\] The above expressions make the correction to the outer iterates \(\mathbf{x}_{k}^{(s)}\) and \(\mathbf{r}_{k}^{(s)}\) explicit. Equivalent formulas can be obtained by considering the generalized Arnoldi relation (52): \[\mathbf{r}_{1}^{(s)} =\mathbf{r}_{0}^{(s)}-\hat{\mathbf{W}}_{m+1}\bar{\mathbf{H}}_{m}\mathbf{y}_{m} \tag{68}\] \[\mathbf{x}_{1}^{(s)} =\mathbf{x}_{0}^{(s)}+\hat{\mathbf{V}}_{m}\mathbf{y}_{m} \tag{69}\] The vector of reduced coordinates \(\mathbf{y}_{m}\) is the solution of (53) and its partition \(\mathbf{y}_{m-k}\) is the solution of \(\operatorname{argmin}_{\mathbf{y}}\left\|\bar{\mathbf{H}}_{m-k}\mathbf{y}-\beta\mathbf{e}_{1}^{m-k+1}\right\|_{2}\), \(\beta=\|(\mathbf{I}-\mathbf{C}_{k}^{(s)}\mathbf{C}_{k}^{(s)T})\mathbf{r}_{0}^{(s)}\|_{2}\).

_Reformulation of the generalized eigenvalue problem_

At the end of each cycle of GCRO-DR we compute the Ritz eigenpairs from the following generalized eigenvalue problem: \[\bar{\mathbf{H}}_{m}^{T}\bar{\mathbf{H}}_{m}\mathbf{g}=\theta\bar{\mathbf{H}}_{m}^{T}\hat{\mathbf{W}}_{m+1}^{T}\hat{\mathbf{V}}_{m}\mathbf{g} \tag{70}\] However, the direct construction of both sides of this equation as products of matrices may lead to highly ill-conditioned operators. In the following we propose to reformulate (70) by exploiting the specific structure of \(\bar{\mathbf{H}}_{m}\) and \(\hat{\mathbf{W}}_{m+1}^{T}\hat{\mathbf{V}}_{m}\). First, noticing that the last row of \(\bar{\mathbf{H}}_{m}\) is null except for the last term \(h_{m+1,m}\), we can write \(\bar{\mathbf{H}}_{m}^{T}=[\mathbf{H}_{m}^{T}\quad h_{m+1,m}\mathbf{e}_{m}]\). Thus, we have \(\bar{\mathbf{H}}_{m}^{T}\bar{\mathbf{H}}_{m}=\mathbf{H}_{m}^{T}\mathbf{H}_{m}+h_{m+1,m}^{2}\mathbf{e}_{m}\mathbf{e}_{m}^{T}\).
Inserting these relations in (70) gives \[[\mathbf{H}_{m}^{T}\mathbf{H}_{m}+h_{m+1,m}^{2}\mathbf{e}_{m}\mathbf{e}_{m}^{T}]\mathbf{g}=\theta[\mathbf{H}_{m}^{T}\quad h_{m+1,m}\mathbf{e}_{m}]\hat{\mathbf{W}}_{m+1}^{T}\hat{\mathbf{V}}_{m}\mathbf{g}. \tag{71}\] Left multiplying by \(\mathbf{H}_{m}^{-T}\) and defining \(\mathbf{f}=\mathbf{H}_{m}^{-T}\mathbf{e}_{m}\), we obtain \[[\mathbf{H}_{m}+h_{m+1,m}^{2}\mathbf{f}\,\mathbf{e}_{m}^{T}]\mathbf{g}=\theta[\mathbf{I}_{m}\quad h_{m+1,m}\mathbf{f}]\hat{\mathbf{W}}_{m+1}^{T}\hat{\mathbf{V}}_{m}\mathbf{g}. \tag{72}\] From (57) we know that the last row of \(\hat{\mathbf{W}}_{m+1}^{T}\hat{\mathbf{V}}_{m}\) is null, which finally leads to the following eigenvalue problem \[\boxed{[\mathbf{H}_{m}+h_{m+1,m}^{2}\mathbf{f}\,\mathbf{e}_{m}^{T}]\mathbf{g}=\theta\left[\begin{array}{cc}\hat{\mathbf{W}}_{k+1}^{T}\hat{\mathbf{V}}_{k+1}&\mathbf{0}_{(k+1)\times(m-k-1)}\\ \mathbf{0}_{(m-k-1)\times(k+1)}&\mathbf{I}_{m-k-1}\end{array}\right]\mathbf{g}}, \tag{73}\] where the structure of the block \(\hat{\mathbf{W}}_{k+1}^{T}\hat{\mathbf{V}}_{k+1}\) has been highlighted in (57).

```
 1  Choose \(m\), the size of the Krylov subspace, and \(k\), the size of the deflated/recycled subspace.
    Let \(tol\) be the relative convergence threshold. Choose an initial guess \(\mathbf{x}_{0}\).
    Compute \(\mathbf{r}_{0}=\mathbf{b}-\mathbf{A}\mathbf{x}_{0}\), and set \(i=1\).
 2  if \(\mathbf{C}_{k}\) and \(\mathbf{U}_{k}\) are available (from a previous linear system) then
 3      \(\mathbf{x}_{1}=\mathbf{x}_{0}+\mathbf{U}_{k}\mathbf{C}_{k}^{T}\mathbf{r}_{0}\)   \(\triangleright\) Do not update \(\mathbf{U}_{k}\) and \(\mathbf{C}_{k}\) since only \(\mathbf{b}\) varies
 4      \(\mathbf{r}_{1}=\mathbf{r}_{0}-\mathbf{C}_{k}\mathbf{C}_{k}^{T}\mathbf{r}_{0}\)
 5  else
 6      \(\mathbf{v}_{1}=\mathbf{r}_{0}/\|\mathbf{r}_{0}\|\)
 7      \(\mathbf{c}=\|\mathbf{r}_{0}\|\mathbf{e}_{1}\)
 8      Apply \(m\) steps of GMRES to generate \(\mathbf{V}_{m+1}\) and \(\bar{\mathbf{H}}_{m}\).
 9      Solve \(\min_{\mathbf{y}}\|\mathbf{c}-\bar{\mathbf{H}}_{m}\mathbf{y}\|_{2}\) for \(\mathbf{y}_{m}\).
10      \(\mathbf{x}_{1}=\mathbf{x}_{0}+\mathbf{V}_{m}\mathbf{y}_{m}\)
11      \(\mathbf{r}_{1}=\mathbf{V}_{m+1}(\mathbf{c}-\bar{\mathbf{H}}_{m}\mathbf{y}_{m})\)
12      Select the \(k\) deflation eigenvectors of \(\mathbf{H}_{m}\mathbf{g}=\lambda\mathbf{g}\) associated to the smallest eigenvalues and concatenate them into \(\mathbf{P}_{k}\).
13      Let \([\mathbf{Q},\mathbf{R}]\) be the reduced QR factorization of \(\bar{\mathbf{H}}_{m}\mathbf{P}_{k}\).
14      \(\mathbf{C}_{k}=\mathbf{V}_{m+1}\mathbf{Q}\)
15      \(\mathbf{U}_{k}=\mathbf{V}_{m}\mathbf{P}_{k}\mathbf{R}^{-1}\)
16  end if
17  while \(\|\mathbf{r}_{i}\|_{2}>tol\) do
18      \(i=i+1\)
19      Apply \((m-k)\) steps of Arnoldi with the projected operator \((\mathbf{I}-\mathbf{C}_{k}\mathbf{C}_{k}^{T})\mathbf{A}\), starting with \(\mathbf{v}_{1}=\mathbf{r}_{i-1}/\|\mathbf{r}_{i-1}\|_{2}\), to generate \(\mathbf{V}_{m-k+1}\), \(\bar{\mathbf{H}}_{m-k}\), and \(\mathbf{B}_{m-k}\).
20      \(\hat{\mathbf{W}}_{m+1}=[\mathbf{C}_{k}\quad\mathbf{V}_{m-k+1}]\)
21      \(\hat{\mathbf{V}}_{m}=[\bar{\mathbf{U}}_{k}\quad\mathbf{V}_{m-k}]\)
22      \(\bar{\mathbf{H}}_{m}=\begin{bmatrix}\mathbf{D}_{k}&\mathbf{B}_{m-k}\\ \mathbf{0}&\bar{\mathbf{H}}_{m-k}\end{bmatrix}\)
23      \(\mathbf{y}_{m}=\operatorname*{argmin}_{\mathbf{y}}\|\bar{\mathbf{H}}_{m}\mathbf{y}-\hat{\mathbf{W}}_{m+1}^{T}\mathbf{r}_{i-1}\|_{2}\)
24      \(\mathbf{x}_{i}=\mathbf{x}_{i-1}+\hat{\mathbf{V}}_{m}\mathbf{y}_{m}\)
25      \(\mathbf{r}_{i}=\mathbf{r}_{i-1}-\hat{\mathbf{W}}_{m+1}\bar{\mathbf{H}}_{m}\mathbf{y}_{m}\)
26      Solve \(\mathbf{f}=\mathbf{H}_{m}^{-T}\mathbf{e}_{m}\).
27      Select the \(k\) deflated eigenvectors of \([\mathbf{H}_{m}+h_{m+1,m}^{2}\mathbf{f}\,\mathbf{e}_{m}^{T}]\mathbf{g}=\theta\begin{bmatrix}\hat{\mathbf{W}}_{k+1}^{T}\hat{\mathbf{V}}_{k+1}&\mathbf{0}_{(k+1)\times(m-k-1)}\\ \mathbf{0}_{(m-k-1)\times(k+1)}&\mathbf{I}_{m-k-1}\end{bmatrix}\mathbf{g}\) associated to the smallest eigenvalues and concatenate them into \(\mathbf{P}_{k}\).
28      \(\mathbf{Y}_{k}=\hat{\mathbf{V}}_{m}\mathbf{P}_{k}\)
29      Let \([\mathbf{Q},\mathbf{R}]\) be the reduced QR factorization of \(\bar{\mathbf{H}}_{m}\mathbf{P}_{k}\).
30      \(\mathbf{C}_{k}=\hat{\mathbf{W}}_{m+1}\mathbf{Q}\)
31      \(\mathbf{U}_{k}=\mathbf{Y}_{k}\mathbf{R}^{-1}\)
32  end while
33  Recycle \(\mathbf{U}_{k}\) and \(\mathbf{C}_{k}\) for the next linear system.
```
**Algorithm 4** GCRO-DR(\(m\),\(k\)) for a sequence of linear systems \(\mathbf{A}\mathbf{x}^{(s)}=\mathbf{b}^{(s)}\).

### Application of GCRO-DR to the solution of the fluid adjoint system

As for GMRES-DR in section 4.4, we first apply GCRO-DR(120,40) to the purely fluid adjoint system. The convergence curves are plotted in Figure 12; this figure should be compared to panel (a) of the corresponding figure for GMRES-DR. First, we note that GCRO-DR performs much better than GMRES-DR in terms of numerical robustness, since the true and least-squares relative residuals match perfectly except for the preconditioner BILU(0) applied to the approximate first-order Jacobian matrix. However, the discrepancy only appears below a threshold of \(5\times 10^{-7}\). Again, this loss of accuracy can be suppressed by resorting to a cold restart (see Figure 12b). In theory, GMRES-DR and GCRO-DR are algebraically equivalent [23], but in practice our numerical experiments show that this equivalence is only observed for relative residuals above \(1\times 10^{-5}\), except for the most accurate preconditioner, BILU(0) applied to the exact first-order Jacobian matrix. This perfect numerical equivalence is illustrated in Figure 13.

Figure 12: Adjoint relative residual norm convergence history of GCRO-DR(120,40). Impact of various preconditioners. In (a), an erratic convergence of the least-squares residual, associated with a stagnation of the true residual, occurs for the BILU(0) factorization applied to the first-order approximate Jacobian matrix. In (b), a cold restart restores the convergence.

### Application of GCRO-DR to the solution of the fluid-structure coupled-adjoint system

As explained in section 5, unlike GMRES-DR, which is based on an augmentation deflation strategy, GCRO-DR is able to accelerate the solution of a sequence of linear systems with a slowly varying system matrix and/or right-hand side. The strategy relies on recycling an approximate invariant subspace from one system to the next. In the context of coupled-adjoint linear systems, only the right-hand side of the fluid block of equations changes during the partitioned solution strategy.
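As a small illustration of how the recycled pair \((\mathbf{U}_{k},\mathbf{C}_{k})\) is reused when only the right-hand side changes between systems, the sketch below reproduces the initial projection step of Algorithm 4 (lines 3-4) and the projected operator used by the subsequent \((m-k)\)-step Arnoldi cycles. Function and variable names are illustrative; this is not the solver implementation used for the experiments.

```python
import numpy as np

def warm_start_with_recycling(A, b, x0, U_k, C_k, tol=1e-8):
    """Initial projection of GCRO-DR when the pair (U_k, C_k), with A U_k = C_k,
    is recycled from a previous system of the sequence and only b has changed."""
    r0 = b - A @ x0
    t = C_k.T @ r0
    x1 = x0 + U_k @ t              # x_1 = x_0 + U_k C_k^T r_0
    r1 = r0 - C_k @ t              # r_1 = (I - C_k C_k^T) r_0
    converged = np.linalg.norm(r1) <= tol * np.linalg.norm(b)
    return x1, r1, converged

def projected_matvec(A, C_k, v):
    """Operator (I - C_k C_k^T) A applied by the subsequent Arnoldi cycles."""
    w = A @ v
    return w - C_k @ (C_k.T @ w)
```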
In Figure 14 we illustrate the acceleration of convergence for different recycling strategies. The black plain line corresponds to the convergence of the standard GMRES-DR or GCRO-DR solver preconditioned by BILU(0) applied to the first-order exact Jacobian matrix. For each new fluid-structure cycle a cold restart is performed, discarding any previous spectral information. The corresponding plateaus are clearly observable at the beginning of each cycle. The remaining curves present the convergence for a subspace recycling activated from an increasing fluid-structure cycle index. As can be seen, even recycling starting from the second cycle improves convergence compared to the standard Krylov solver. The best option seems here to recycle from the second fluid-structure coupling. This is very promising because one would have expected that the quality of the spectral information to be recycled, i.e., the distance between the true invariant subspace and its approximation, would not be small enough to get such an acceleration. A maximum saving of 1780 matrix-vector products out of 4580 (\(\sim\)39%) is achieved. Figure 13: Illustration of the algebraic and numerical equivalence of GMRES-DR and GCRO-DR for the preconditioner BILU(0) applied to the exact first-order Jacobian matrix. We recall the solver parameters: \(m=120\) and \(k=40\). In an attempt to understand the convergence of the approximate invariant subspace, we propose to compute the distance between \(\mathbf{C}^{(i-1)}\) and \(\mathbf{C}^{(i)}\) which is the best estimation of the true distance to the invariant subspace of the system matrix that we can compute. Since the size of the recycling subspace varies (in this case 30% of the size of the Krylov space for the last fluid cycle) we use the Grassmann distance formula between vector spaces of different dimensions [47]: \[d_{p}(\mathbf{C}^{(i-1)},\mathbf{C}^{(i)})=\left(\sum_{i=1}^{p}\theta_{i}^{2} \right)^{1/2},\quad p=\min(k_{i-1},k_{i}), \tag{74}\] where \(\theta_{i}\), the principal angles between columns of \(\mathbf{C}^{(i-1)}\) and \(\mathbf{C}^{(i)}\), are computed via the singular value decomposition of \(\mathbf{C}^{(i-1)}{}^{T}\mathbf{C}^{(i)}\). In order to interpret this distance criterion during convergence, we plot a normalized version of (74): \(\tilde{d}_{p}=d_{p}/\sqrt{p}\). In Figures 15a and 15b we report the histories of the subspace distance \(d_{p}\) and of the smaller subspace dimension \(p\). In case of no recycling, the distances show higher values meaning that the Krylov solver continues to search after an optimal projection space up to convergence. When recycling is activated, the last distances are about three times lower which confirms that spectral information was actually propagated between cycles. Figure 14: Coupled-adjoint relative residual norm convergence history of GCRO-DR(120,40). Impact of recycling strategy. Recycling spectral information starting from cycle 2 and above always improves convergence. A maximum saving of 1780 matrix-vector products out of 4580 (\(\sim\)39%) is achieved. To conclude our numerical experiments, Figure 16 compares two instances of GCRO-DR with subspace recycling combined to the BILU(1) preconditioner applied to the first-order approximate Jacobian operator on one hand, and to BILU(0) applied to the first-order exact flux Jacobian operator on the other hand. Even for the simpler preconditioner a remarkable acceleration is observed. 
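For reference, the subspace distance (74) used to monitor the recycled subspaces can be evaluated directly from the principal angles between the two bases, which SciPy computes via a singular value decomposition equivalent, in exact arithmetic, to that of \(\mathbf{C}^{(i-1)T}\mathbf{C}^{(i)}\). The sketch below is only meant to illustrate the formula, including the normalization \(\tilde{d}_{p}=d_{p}/\sqrt{p}\) used in the plots.

```python
import numpy as np
from scipy.linalg import subspace_angles

def grassmann_distance(C_prev, C_curr, normalized=True):
    """Distance (74) between range(C^{(i-1)}) and range(C^{(i)}), possibly of
    different column dimensions; both bases are assumed to have orthonormal columns."""
    theta = subspace_angles(C_prev, C_curr)   # principal angles, p = min(k_{i-1}, k_i) of them
    d = np.sqrt(np.sum(theta**2))
    return d / np.sqrt(theta.size) if normalized else d
```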
Figure 16: Coupled-adjoint relative residual norm convergence history of GCRO-DR(120,40). Impact of recycling strategy for two BILU preconditioners.

Figure 15: Impact of subspace recycling for GCRO-DR(120,40). Monitoring of the distance between approximate invariant subspaces \(\mathbf{C}^{(i-1)}\) and \(\mathbf{C}^{(i)}\) is reported in (a). The number of vectors used to compute the subspace distance is also reported in (b).

## 6 Flexible Generalized Conjugate Residual with inner Orthogonalization and Deflated Restarting: FGCRO-DR

### The FGCRO-DR Krylov solver

Similar to GCRO-DR, FGCRO-DR relies on a given full-rank matrix \(\mathbf{Z}_{k}\in\mathbb{R}^{n\times k}\) and an orthonormal matrix \(\mathbf{C}_{k}\in\mathbb{R}^{n\times k}\), the image of \(\mathbf{Z}_{k}\) by \(\mathbf{A}\), satisfying the relations \[\mathbf{A}\mathbf{Z}_{k} =\mathbf{C}_{k}, \tag{75}\] \[\mathbf{C}_{k}^{T}\mathbf{C}_{k} =\mathbf{I}_{k}. \tag{76}\] The first cycle of FGCRO-DR for solving the initial linear system in the sequence consists of applying \(m\) steps of the flexible Arnoldi process to build \(\mathbf{V}_{m+1}\), \(\mathbf{Z}_{m}\) and \(\bar{\mathbf{H}}_{m}\) (see algorithm 2). At this point, we assume that the following flexible Arnoldi relation exists: \(\mathbf{A}\mathbf{Z}_{m}=\mathbf{V}_{m+1}\bar{\mathbf{H}}_{m}\). Then, we solve the least-squares problem with \(\mathbf{c}=\|\mathbf{r}_{0}\|\mathbf{e}_{1}\) and compute the solution \(\mathbf{x}_{m}=\mathbf{x}_{0}+\mathbf{Z}_{m}\mathbf{y}_{m}\). To initiate the next cycle, we solve the standard eigenvalue problem (28) and retain the eigenpairs associated to the \(k\) eigenvalues with smallest magnitudes. The corresponding eigenvectors form the columns of the matrix \(\mathbf{P}_{k}\). Following definition 4.2, we then define \(\mathbf{Y}_{k}\) as the set of \(k\) harmonic Ritz vectors of \(\mathbf{A}\mathbf{Z}_{m}\mathbf{V}_{m}^{T}\) with respect to \(\mathrm{range}(\mathbf{V}_{m})\). Defining \([\mathbf{Q},\mathbf{R}]\) as the reduced QR-factorization of \(\bar{\mathbf{H}}_{m}\mathbf{P}_{k}\), we have \[\mathbf{A}\mathbf{Z}_{m}\mathbf{P}_{k}=\mathbf{V}_{m+1}\overline{\mathbf{H}}_{m}\mathbf{P}_{k}=(\mathbf{V}_{m+1}\mathbf{Q})\mathbf{R}. \tag{77}\] We define the subspaces for the next cycle as \(\mathbf{C}_{k}=\mathbf{V}_{m+1}\mathbf{Q}\) and \(\mathbf{Z}_{k}=\mathbf{Z}_{m}\mathbf{P}_{k}\mathbf{R}^{-1}\), assuming that \(\mathbf{R}\) is nonsingular. Since \(\mathbf{V}_{m+1}\) and \(\mathbf{Q}\) are orthonormal matrices, so is \(\mathbf{C}_{k}\). At the end of the \(\mathrm{FGMRES}(m)\) cycle, the optimality property \(\mathbf{r}_{1}^{(1)}\perp\mathrm{span}\{\mathbf{A}\mathbf{Z}_{m}\}\) of the residual \(\mathbf{r}_{1}^{(1)}=\mathbf{r}_{0}^{(1)}-\mathbf{A}\mathbf{Z}_{m}\mathbf{y}_{m}\) can be easily verified. The following theorem holds:

**Theorem 6.1**.: _The vector \(\mathbf{z}_{i}\in\mathrm{span}\{\mathbf{Z}_{m}\}\) satisfies \(\mathbf{z}_{i}=\underset{\mathbf{z}}{\mathrm{argmin}}\|\mathbf{r}_{0}-\mathbf{A}\mathbf{z}\|_{2}\Leftrightarrow\mathbf{r}_{i}\perp\mathrm{span}\{\mathbf{A}\mathbf{Z}_{m}\}\)_

Combining this result with relation (77), we have \[\mathbf{P}_{k}^{T}(\mathbf{A}\mathbf{Z}_{m})^{T}\mathbf{r}_{1}^{(1)} =\mathbf{0}\] \[(\mathbf{V}_{m+1}\bar{\mathbf{H}}_{m}\mathbf{P}_{k})^{T}\mathbf{r}_{1}^{(1)} =\mathbf{0}\] \[\Rightarrow\mathbf{R}^{T}\mathbf{C}_{k}^{T}\mathbf{r}_{1}^{(1)} =\mathbf{0}, \tag{78}\] which shows that \(\mathbf{C}_{k}^{T}\mathbf{r}_{1}^{(1)}=\mathbf{0}\), with \(\mathbf{C}_{k}=\mathbf{V}_{m+1}\mathbf{Q}\).
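For completeness, the sketch below illustrates the flexible Arnoldi process invoked above (algorithm 2): the only difference with the standard process is that each preconditioned direction \(\mathbf{z}_{j}\) is stored in \(\mathbf{Z}_{m}\), so that \(\mathbf{A}\mathbf{Z}_{m}=\mathbf{V}_{m+1}\bar{\mathbf{H}}_{m}\) holds even when the preconditioner changes at every step. The inner preconditioner is passed as an arbitrary callable; this is an illustrative reimplementation, not the code used in this work, and breakdown is not handled.

```python
import numpy as np

def flexible_arnoldi(A, r0, m, inner_prec):
    """Flexible Arnoldi: builds V (n x (m+1)), Z (n x m), H_bar ((m+1) x m)
    such that A Z = V H_bar. `inner_prec(v)` may change at every step,
    e.g. a few inner GMRES iterations applied to v."""
    n = r0.size
    V = np.zeros((n, m + 1))
    Z = np.zeros((n, m))
    H = np.zeros((m + 1, m))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(m):
        Z[:, j] = inner_prec(V[:, j])          # variable preconditioning step
        w = A @ Z[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, Z, H
```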
To complete the next cycle \((i+1)\) and obtain the next iterates \(\mathbf{r}_{i+1}^{(s)}\) and \(\mathbf{x}_{i+1}^{(s)}\) for the current system \((s)\) we perform \(m-k\) steps of FGMRES applied to the projected operator \(\mathbf{A}_{\mathbf{C}_{k}}=(\mathbf{I}-\mathbf{C}_{k}\mathbf{C}_{k}^{T}) \mathbf{A}\), i.e., we solve the projected system \(\mathbf{P}\mathbf{A}\mathbf{x}=\mathbf{P}\mathbf{b}\) where \(\mathbf{P}=\mathbf{I}-\mathbf{C}_{k}\mathbf{C}_{k}^{T}\) is the orthogonal projector on \(\mathrm{range}(\mathbf{C}_{k})^{\perp}\). This results in the generalized Arnoldi relation \[\mathbf{A}\mathbf{Z}_{m}=\mathbf{V}_{m+1}\overline{\mathbf{H}}_{m}, \tag{79}\] where \[\mathbf{Z}_{m}=\begin{bmatrix}\mathbf{Z}_{k}&\mathbf{Z}_{m-k}\end{bmatrix}, \quad\mathbf{V}_{m+1}=\begin{bmatrix}\mathbf{C}_{k}&\mathbf{V}_{m-k+1}\end{bmatrix} \quad\text{and}\quad\overline{\mathbf{H}}_{m}=\begin{bmatrix}\mathbf{I}_{k}& \mathbf{B}_{m-k}\\ \mathbf{0}&\overline{\mathbf{H}}_{m-k}\end{bmatrix}\!, \tag{80}\] with \(\mathbf{B}_{m-k}=\mathbf{C}_{k}^{T}\mathbf{A}\mathbf{Z}_{m-k},\mathbf{B}_{m-k} \in\mathbb{R}^{k\times(m-k)}\). To build \(\mathbf{V}_{m+1}\) the Arnoldi recurrence restarts from \(\mathbf{v}_{k+1}=\mathbf{r}_{1}/\|\mathbf{r}_{1}\|\) with \(\mathbf{r}_{1}=(\mathbf{I}-\mathbf{C}_{k}\mathbf{C}_{k}^{T})\mathbf{r}_{0}\). To initiate the subsequent cycle or to solve the next linear system, we compute \(k\) eigenvectors of a generalized eigenvalue problem that we will specify in section 6.2 hereafter. These eigenvectors are concatenated in matrix \(\mathbf{P}_{k}\). Again, introducing the reduced QR-factorization of \(\bar{\mathbf{H}}_{m}\mathbf{P}_{k}\), we define the new residual and solution subspaces \[\mathbf{C}_{k} =\mathbf{V}_{m+1}\mathbf{Q} \tag{81}\] \[\mathbf{Z}_{k} =\mathbf{Z}_{m}\mathbf{P}_{k}\mathbf{R}^{-1}. \tag{82}\] To initiate the solving of the next linear system we compute the projected initial solution and residual with \[\mathbf{x}_{1} =\mathbf{x}_{0}+\mathbf{Z}_{k}\mathbf{C}_{k}^{T}\mathbf{r}_{0} \tag{83}\] \[\mathbf{r}_{1} =(\mathbf{I}-\mathbf{C}_{k}\mathbf{C}_{k}^{T})\mathbf{r}_{0} \tag{84}\] ### Flexible deflation strategies From the definition of harmonic Ritz vectors, there exits different ways of deriving the deflated subspace according to the choice of operator \(\mathbf{B}\) and vector space \(\mathcal{U}\). In the technical report [48], Carvalho et al. proposed three formulations (labeled A, B and C) for the projected generalized eigenvalue problem. We review these variants below and give additional insight for strategy B. **Strategy A**: This strategy has already been considered for FGMRES-DR in section 4.3, see relation (32). Defining \(\mathbf{B}\equiv\mathbf{A}\mathbf{Z}_{m}\mathbf{Z}_{m}^{\dagger}\) and \(\mathcal{U}\equiv\text{range}(\mathbf{Z}_{m})\), the harmonic Ritz pair \((\lambda,\mathbf{y}=\mathbf{Z}_{m}\mathbf{g})\) satisfies \[((\mathbf{A}\mathbf{Z}_{m}\mathbf{Z}_{m}^{\dagger})\mathbf{Z}_{m})^{T}(( \mathbf{A}\mathbf{Z}_{m}\mathbf{Z}_{m}^{\dagger})\mathbf{Z}_{m}\mathbf{g}- \lambda\mathbf{Z}_{m}\mathbf{g})=\mathbf{0}. \tag{85}\] Thus, \(\mathbf{Y}_{m}=\{\mathbf{y}_{1},\cdots,\mathbf{y}_{m}\}\) corresponds to the harmonic Ritz vectors of \(\mathbf{A}\mathbf{Z}_{m}\mathbf{Z}_{m}^{\dagger}\) with respect to \(\text{range}(\mathbf{Z}_{m})\). 
Also, inserting (79) in (85), the eigenpair \((\lambda,\mathbf{g})\) satisfies the generalized eigenvalue problem \[\boxed{\bar{\mathbf{H}}_{m}^{T}\bar{\mathbf{H}}_{m}\mathbf{g}=\lambda\bar{ \mathbf{H}}_{m}^{T}\mathbf{V}_{m+1}^{T}\mathbf{Z}_{m}\mathbf{g}}. \tag{86}\] **Strategy B**: This strategy has also already been considered for FGMRES-DR in section 4.2, see relations (26) and (27). Letting \(\mathbf{V}_{m}=[\mathbf{C}_{k}\quad\mathbf{V}_{m-k}]\) and choosing \(\mathbf{B}\equiv\mathbf{A}\mathbf{Z}_{m}\mathbf{V}_{m}^{T}\) and \(\mathcal{U}\equiv\text{range}(\mathbf{V}_{m})\), the harmonic Ritz pair \((\lambda,\mathbf{y}=\mathbf{V}_{m}\mathbf{g})\) satisfies \[((\mathbf{A}\mathbf{Z}_{m}\mathbf{V}_{m}^{T})\mathbf{V}_{m})^{T}((\mathbf{A} \mathbf{Z}_{m}\mathbf{V}_{m}^{T})\mathbf{V}_{m}\mathbf{g}-\lambda\mathbf{V}_{m }\mathbf{g})=\mathbf{0}. \tag{87}\] Thus, \(\mathbf{Y}_{m}=\{\mathbf{y}_{1},\cdots,\mathbf{y}_{m}\}\) corresponds to the harmonic Ritz vectors of \(\mathbf{A}\mathbf{Z}_{m}\mathbf{V}_{m}^{T}\) with respect to \(\text{range}(\mathbf{V}_{m})\). Also, the eigenpair \((\lambda,\mathbf{g})\) satisfies the generalized eigenvalue problem \[\boxed{\bar{\mathbf{H}}_{m}^{T}\bar{\mathbf{H}}_{m}\mathbf{g}=\lambda\mathbf{ H}_{m}^{T}\mathbf{g}}. \tag{88}\] This generalized eigenvalue problem can be reformulated as a standard one, see equation (28). We have \(\hat{\mathbf{H}}_{m}\mathbf{g}=\lambda\mathbf{g}\), with \(\hat{\mathbf{H}}_{m}=[\mathbf{H}_{m}+h^{2}\mathbf{f}\,\mathbf{e}_{m}^{T}]\) and \(\mathbf{H}_{m}=[\mathbf{I}_{m}\quad\mathbf{0}_{m\times 1}]\bar{\mathbf{H}}_{m}\). The block upper triangular structure of \(\bar{\mathbf{H}}_{m}\) is highlighted in (80) with a leading \(k\times k\) identity block. More specifically we have \[\hat{\mathbf{H}}_{m}=\begin{bmatrix}\mathbf{I}_{k}&\tilde{\mathbf{B}}_{m-k}\\ \mathbf{0}&\tilde{\mathbf{H}}_{m-k}\end{bmatrix} \tag{89}\] with \(\tilde{\mathbf{B}}_{m-k}=\mathbf{B}_{m-k}+h^{2}\mathbf{f}_{[1:k]}\mathbf{e}_{m-k}^ {T}\) and \(\tilde{\mathbf{H}}_{m-k}=\mathbf{H}_{m-k}+h^{2}\mathbf{f}_{[k+1:m]}\mathbf{e}_{m- k}^{T}\), \(\mathbf{e}_{m-k}=[0\cdots 0,1]^{T}\in\mathbb{R}^{m-k}\). We assume here that the Hessenberg sub-matrix \(\overline{\mathbf{H}}_{m-k}\) is unreduced, meaning that there are no zero elements on the subdiagonal (the problem of multiple eigenvalues is discussed in [49]). The eigenvalues of (89) are the combined eigenvalues of the diagonal blocks of \(\hat{\mathbf{H}}_{m}\) and therefore satisfy \(\det(\hat{\mathbf{H}}_{m}-\lambda\mathbf{I}_{m})=\det((1-\lambda)\mathbf{I}_{ k})\det(\tilde{\mathbf{H}}_{m-k}-\lambda\mathbf{I}_{m-k})=0\). Thus, we have a unit eigenvalue of algebraic multiplicity \(k\). Now, if \(\lambda=1\) is an eigenvalue of the upper diagonal block \(\mathbf{I}_{k}\), with associated eigenvectors \(\mathbf{g}_{k}=\mathbf{I}_{k}\), then it is also an eigenvalue of the full matrix \(\hat{\mathbf{H}}_{m}\), with the same eigenvectors augmented with zeros, which gives \(\mathbf{g}_{m}^{(\lambda=1)}=[\mathbf{g}_{k}\quad\mathbf{0}_{(m-k)\times k}]^ {T}=[\mathbf{I}_{k}\quad\mathbf{0}_{(m-k)\times k}]^{T}\). This shows that the \(k\) eigenvectors associated to the unit eigenvalues are linearly independent. We can also obtain an explicit expression for the complementary eigenvectors associated to the eigenvalues of the lower diagonal block of \(\hat{\mathbf{H}}_{m}\). Let \((\lambda_{m-k},\mathbf{g}_{m-k})\) be an eigenpair of \(\tilde{\mathbf{H}}_{m-k}\). 
The full eigenvalue problem reads \[\begin{bmatrix}\mathbf{I}_{k}&\tilde{\mathbf{B}}_{m-k}\\ \mathbf{0}&\tilde{\mathbf{H}}_{m-k}\end{bmatrix}\begin{pmatrix}\mathbf{x}_{k} \\ \mathbf{g}_{m-k}\end{pmatrix}=\begin{pmatrix}\mathbf{x}_{k}+\tilde{\mathbf{B}}_{ m-k}\mathbf{g}_{m-k}\\ \lambda_{m-k}\mathbf{g}_{m-k}\end{pmatrix}\!\!. \tag{90}\] We can make \(\mathbf{x}_{k}+\tilde{\mathbf{B}}_{m-k}\mathbf{g}_{m-k}=\lambda_{m-k}\mathbf{x }_{k}\) by choosing \(\mathbf{x}_{k}=-(1-\lambda_{m-k})^{-1}\tilde{\mathbf{B}}_{m-k}\mathbf{g}_{m-k}\). By assumption \(\lambda_{m-k}\neq 1\) since it is not an eigenvalue of the leading block of \(\hat{\mathbf{H}}_{m}\). Thanks to these relations, the deflated subspace associated to strategy B is obtained efficiently. In light of this attractive property, Jolivet and Tournier proposed a block version of GCRO-DR combined with this deflation strategy [50]. **Strategy C**: This third strategy was adopted in [23] where the FGCRO-DR algorithm maintains an additional set of vectors \(\mathbf{W}_{m}\) such that the deflated harmonic Ritz vectors read \(\mathbf{Y}_{k}=\mathbf{W}_{m}\mathbf{P}_{k}\). Similar to \(\mathbf{Z}_{k}\) in (82), a set \(\mathbf{W}_{k}=\mathbf{W}_{m}\mathbf{P}_{k}\mathbf{R}^{-1}\) is propagated between cycles. The complete subspace is built by appending the Krylov vectors generated during the Arnoldi process: \(\mathbf{W}_{m}=[\mathbf{W}_{k}\quad\mathbf{V}_{m-k}]\). This particular subspace was used to prove the algebraic equivalence of FGMRES-DR and FGCRO-DR under a specific colinearity constraint. As a consequence, strict algebraic equivalence of GMRES-DR and GCRO-DR was also demonstrated by the authors. Note that this property was stated in [6] without demonstration. Using definition 4.2 with \(\mathcal{U}\equiv\mathrm{range}(\mathbf{W}_{m})\) and \(\mathbf{B}\equiv\mathbf{A}\mathbf{Z}_{m}\mathbf{W}_{m}^{\dagger}\) we can write \(\mathbf{y}=\mathbf{W}_{m}\mathbf{g}\) which gives the orthogonality constraint \[((\mathbf{A}\mathbf{Z}_{m}\mathbf{W}_{m}^{\dagger})\mathbf{W}_{m})^{T}(( \mathbf{A}\mathbf{Z}_{m}\mathbf{W}_{m}^{\dagger})\mathbf{W}_{m}\mathbf{g}- \lambda\mathbf{W}_{m}\mathbf{g})=\mathbf{0}. \tag{91}\] Now, using (79) and recalling that \(\mathbf{V}_{m+1}\) is orthonormal, we obtain the generalized eigenvalue problem \[\boxed{\bar{\mathbf{H}}_{m}^{T}\bar{\mathbf{H}}_{m}\mathbf{g}=\lambda\bar{ \mathbf{H}}_{m}^{T}\mathbf{V}_{m+1}^{T}\mathbf{W}_{m}\mathbf{g}} \tag{92}\] To reduce ill-conditioning, an alternative formulation for this generalized eigenvalue problem has been proposed in equation (73). ### Application of FGCRO-DR to the fluid adjoint system Our first numerical experiments compare the three deflation strategies detailed in section 6.2 for the solving of the fluid adjoint linear system. We consider increasing sizes of Krylov bases of 30, 50 and 70 vectors. The deflated subspace size is half of the Krylov basis size except for the smaller basis where we keep only one third of the approximation subspace. The variable preconditioner consists in a non-restarted inner GMRES Krylov solver with a basis of size 10. The innermost stationary preconditioner is BILU(0) applied to the first-order exact Jacobian matrix. The corresponding convergence histories are presented in the left-hand side of Figure 17. Clearly, strategy C seems the most effective regardless the size of the Krylov space. However, when the number of vector increases, the three deflation strategies perform similarly. 
Indeed, the right-hand side of Figure 17 shows very close convergence profiles of FGCRO-DR(70,35) for the three deflation strategies. There is no theoretical evidence that one deflation strategy should perform better than another. As a consequence, these conclusions are likely to be case dependent.

Figure 17: Impact of the deflation strategy on the convergence of FGCRO-DR for a varying size of the external Krylov subspace. The size of the inner Krylov subspace is 10. Deflation strategy C outperforms strategies A and B. Strategy A seems the most sensitive to the size of the Krylov basis. For the larger Krylov subspace FGCRO-DR(70,35) performs equally well regardless of the chosen strategy.

### Application of FGCRO-DR to the solution of the fluid-structure coupled-adjoint system

In this section we study the impact of recycling spectral information between fluid-structure cycles in the context of GCRO-DR with variable preconditioning. From our previous numerical experiments with FGMRES-DR and FGCRO-DR applied to the fluid adjoint system, we choose to recycle a subspace of half the size of the Krylov subspace, i.e., the same size as the deflated subspace. In [48] the authors suggest that larger recycled subspaces give better performance in the context of a flexible Krylov solver used to solve linear systems arising from a specific class of problems in quantum chromodynamics. For instance, for an outer space of size \(m=20\), they carried out a parameter study by varying the number of deflated vectors from 1 to \(m-1\). Regardless of the selected recycling strategy, the lowest computational cost was achieved for a size of the recycled subspace higher than 50% of the size of the Krylov subspace. In another technical report [51] the same authors solve a sequence of twelve elliptic partial differential equation problems for an increasing grid size. Their numerical experiments show that a reduction in the range of 40% to 45% can be achieved by recycling an approximate invariant subspace of half the size of the Krylov vector space, compared to a standard FGMRES-DR applied to each system separately. These results seem to confirm the similar gains observed for the non-flexible GCRO-DR Krylov solver. We recall that in our context, the sequence of linear adjoint systems corresponds to a varying right-hand side arising from the update of the structural source term during the partitioned fluid-structure solution strategy. After a number of fluid-structure cycles, the right-hand side is expected to converge and we end up with a constant linear system to solve. We then apply FGCRO-DR(\(m=70\), \(m_{i}=10\), \(k=35\)), \(k\) being the number of deflated and recycled vectors, combined with the three deflation strategies A, B and C. Table 1 collects the corresponding numbers of matrix-vector products. Using recycling helps to improve the convergence rate of flexible GCRO-DR in this application, since a reduction of approximately 16% to 19% in terms of matrix-vector products is obtained. Figures 18, 20 and 21 show the convergence histories of FGCRO-DR(70,10,35) for strategies A, B and C, respectively. We have reported convergences for a varying starting recycling fluid-structure cycle. Regardless of the deflation strategy, the plateau following a fluid-structure coupling restart is always noticeably reduced. A comparison of distances between FGCRO-DR without recycling and FGCRO-DR with subspace recycling starting from cycle 5 is reported in Figure 19a.
Clearly, the subspace distance is much smaller when recycling is activated, whereas it remains constant for the solver with a cold restart after each fluid-structure coupling. The dimension \(p\) in (74) is plotted in Figure 19b. With recycling, the dimension of the approximation Krylov space is larger, which favors better convergence.

Figure 19: Impact of subspace recycling for FGCRO-DR(70,10,35). Monitoring of the distance between approximate invariant subspaces \(\mathbf{C}^{(i-1)}\) and \(\mathbf{C}^{(i)}\) is reported in (a). The number of vectors used to compute the subspace distance is also reported in (b).

Figure 20: Impact of approximate invariant subspace recycling strategy B on the relative residual convergence of the coupled-adjoint fluid block for FGCRO-DR(70,10,35), with innermost preconditioner BILU0(\(\mathbf{J}_{O1}^{EXA}\)). Recycling spectral information starting from cycle 5 and above always improves convergence. A maximum saving of 1765 matrix-vector products out of 10815 (\(\sim\)16%) is achieved.

Figure 21: Impact of approximate invariant subspace recycling strategy C on the relative residual convergence of the coupled-adjoint fluid block for FGCRO-DR(70,10,35), with innermost preconditioner BILU0(\(\mathbf{J}_{O1}^{EXA}\)). Recycling spectral information starting from cycle 5 and above always improves convergence, except when we start recycling from fluid-structure cycle 8. A maximum saving of 2050 matrix-vector products out of 10815 (\(\sim\)19%) is achieved.

## 7 Conclusion

In this paper we have accelerated the fluid-structure coupled-adjoint partitioned solver by considering techniques borrowed from approximate invariant subspace recycling strategies adapted to sequences of linear systems with varying right-hand sides. Indeed, in a partitioned framework, the structural source term attached to the fluid block of equations affects the right-hand side, with the nice property of converging quickly. The recycling and deflation strategies considered in this work were inspired by the theoretical developments related to GCRO-DR and its flexible variant detailed in [48; 23]. Our objective was to make this paper as self-contained as possible and, in that respect, we chose to recall the theoretical details of both GCRO-DR and FGCRO-DR. It also gave us the opportunity to point out practical implementation details, mainly to make the computation of harmonic Ritz vectors more stable. We demonstrate the benefit of these techniques by computing the coupled derivatives for an aeroelastic configuration of the ONERA-M6 fixed wing in transonic flow, representative of the numerical complexity of realistic industrial applications. For this exercise the fluid grid was coupled to a structural model specifically designed to exhibit high flexibility. All computations were performed using RANS flow modeling and a fully linearized one-equation Spalart-Allmaras turbulence model, which typically results in stiff linear systems. For the non-flexible GCRO-DR solver, subspace recycling between fluid-structure cycles achieved a reduction of 39% in terms of matrix-vector products with respect to the legacy solver. Even recycling early, from the second cycle, led to a reduction in computational cost of 37%, showing the robustness of the proposed strategy. The conclusions with respect to the flexible GCRO-DR are more nuanced, even if recycling almost systematically ended up with a reduction of the total number of matrix-vector products.
Indeed, gains were achieved only when recycling was activated from the fourth fluid-structure cycle onwards, regardless of the deflation strategy (A, B or C). Deflation strategy A seems to be the most consistent when comparing convergence profiles for various starting recycling cycles. For our application, a reduction of approximately 16% to 19% in terms of matrix-vector products was still obtained. However, the convergence seems less smooth, exhibiting some local stagnations. This suggests that the recycled subspace may not always be suitable, and a dynamic recycling approach would certainly be beneficial in this case. This will be part of future work.
2304.00043
Features of Gaia DR3 Spectroscopic Binaries I. Tidal circularization of Main-Sequence Stars
Previous studies pointed out that many observed samples of short-period binaries display a cutoff period, $P_{\rm cut}$, such that almost all binaries with periods shorter than $P_{\rm cut}$ have circular orbits. This feature is probably due to long-term circularization processes induced by tidal interaction between the two stars of each binary. It seemed as if coeval main-sequence (MS) samples of open clusters display $P_{\rm cut}$ that depends on the sample age. Using the unprecedentedly large sample of MS spectroscopic orbits recently released by $\textit{Gaia}$ we have found that the $P_{\rm cut}$ does not depend on the stellar age but, instead, varies with stellar temperature, decreasing linearly from $6.5$ day at $T_{\rm eff}\sim 5700$ K to $\sim 2.5$ day at $6800$ K. $P_{\rm cut}$ was derived by a new algorithm that relied on clear upper envelopes displayed in the period-eccentricity diagrams. Our $P_{\rm cut}$ determines both the border between the circular and eccentric binaries and the location of the upper envelope. The results are inconsistent with the theory which assumes circularization occurs during the stellar MS phase, a theory that was adopted by many studies. The circularization has probably taken place at the pre-main-sequence phase, as suggested already in 1989 by Zahn and Bouchet, and later by Khaluillin and Khaluillina in 2011. Our results suggest that the weak dependence of $P_{\rm cut}$ on the cluster age is not significant, and/or might be due to the different temperatures of the samples. If indeed true, this has far-reaching implications for the theory of binary and exoplanet circularization, synchronization, and alignment.
Dolev Bashi, Tsevi Mazeh, Simchon Faigler
2023-03-31T18:01:36Z
http://arxiv.org/abs/2304.00043v1
# Features of Gaia DR3 Spectroscopic Binaries ###### Abstract Previous studies pointed out that many observed samples of short-period binaries display a cutoff period, \(P_{\rm cut}\), such that almost all binaries with periods shorter than \(P_{\rm cut}\) have circular orbits. This feature is probably due to long-term circularization processes induced by tidal interaction between the two stars of each binary. It seemed as if coeval main-sequence (MS) samples of open clusters display \(P_{\rm cut}\) that depends on the sample age. Using the unprecedentedly large sample of MS spectroscopic orbits recently released by _Gaia_ we have found that the \(P_{\rm cut}\) does not depend on the stellar age but, instead, varies with stellar temperature, decreasing _linearly_ from 6.5 day at \(T_{\rm eff}\sim 5700\) K to \(\sim 2.5\) day at 6800 K. \(P_{\rm cut}\) was derived by a new algorithm that relied on clear upper envelopes displayed in the period-eccentricity diagrams. Our \(P_{\rm cut}\) determines both the border between the circular and eccentric binaries and the location of the upper envelope. The results are inconsistent with the theory which assumes circularization occurs during the stellar MS phase, a theory that was adopted by many studies. The circularization has probably taken place at the pre-main-sequence phase, as suggested already in 1989 by Zahn and Bouchet, and later by Khaluillin and Khaluillina in 2011. Our results suggest that the weak dependence of \(P_{\rm cut}\) on the cluster age is not significant, and/or might be due to the different temperatures of the samples. If indeed true, this has far-reaching implications for the theory of binary and exoplanet circularization, synchronization, and alignment. keywords: binaries: close - binaries: spectroscopic - methods: statistical ## 1 Introduction The _Gaia_ latest release of Non-Single Star catalogs (Gaia Collaboration et al., 2022, hereafter _NSS_) includes the orbits of 181 327 single-lined spectroscopic binaries (SB1), based on the radial velocities obtained by the space-mission RVS spectrograph (Recio-Blanco et al., 2022; Blomme et al., 2022; Katz et al., 2022). The _NSS_ SB1 catalog is substantially larger than any previously-known catalog, and therefore is a gold mine for investigating the statistical features of short-period binaries (e.g., Duquennoy and Mayor, 1991; Duchene and Kraus, 2013; Torres et al., 2021), like the frequency of binaries as a function of the primary mass (Raghavan et al., 2010; Troup et al., 2016; Moe and Di Stefano, 2017) and mass-ratio distributions (e.g., Mazeh and Goldberg, 1992; Boffin, 2012, 2015; Shahaf et al., 2017). This work utilizes the _NSS_ catalog to follow the tidal circularization of short-period binaries. Based on theoretical work, we expect short-period binaries (\(\lesssim 3-10\) day) to be circularized (e.g., Kopal, 1956; Mayor and Mermilliod, 1984; Giuricin et al., 1984; Zahn, 2008) by the tidal interaction between the two components, whose strength is a strong function of the binary separation and primary radius (e.g., Zahn, 1975, 1977, 1989). This expectation caused Gaia Collaboration et al. (2022) to question the validity of the very short-period binaries, with periods shorter than one day and large eccentricities. We, therefore, rely on the work of Bashi et al. (2022), who confronted the _NSS_ SB1 orbits with the LAMOST and GALAH RV databases, and constructed a clean sample of 91 740 _Gaia_ SB1 orbits. This sample is the basis of our period-eccentricity relation. 
Observational evidence for tidal circularization, based on small samples of spectroscopic binaries, was already pointed out by Mayor and Mermilliod (1984) and Mathieu and Mazeh (1988). They considered samples of coeval spectroscopic binaries and showed that each sample displays a circularization "cutoff period", \(P_{\rm cut}\), out to which most binaries have circular orbits, while most binaries of longer periods have eccentric orbits (see also Mathieu et al., 2004). The idea behind the cutoff period is the strong dependence of the tidal interaction on the binary separation and therefore on the binary period. Binaries with periods longer than \(P_{\rm cut}\) were not circularized, keeping their primordial eccentricity. In this model, coeval binary samples should display a discontinuity jump of the eccentricity at \(P_{\rm cut}\), which should depend on the sample age -- the older the sample the longer \(P_{\rm cut}\)(e.g., Mathieu and Mazeh, 1988; Mathieu et al., 2004; Geller et al., 2008, 2010; Geller and Mathieu, 2012; Nine et al., 2020; Geller et al., 2021; Zanazzi, 2022). Another avenue to observationally study the tidal circularization was taken by North and Zahn (2003) and Mazeh et al. (2006), and recently by the seminal works of Van Eylen et al. (2016) and Justesen and Albrecht (2021); see also Zanazzi (2022). They considered samples of eclipsing binaries (EB) -- those of OGLE LMC (Wyrzykowski et al., 2003), _Kepler_(Slawson et al., 2011) and _TESS_(Ricker et al., 2015), obtaining the projected eccentricity of those binaries from the timing of the secondary eclipse (Sterne, 1940). These studies used the fact that a light curve of an EB allows for deriving the stellar radii relative to the binary separation, and therefore the ratio between the sum of radii of the two components and the binary separation, a ratio that directly determines the tidal circularization effectiveness (North and Zahn, 2003). Thus, the analysis of EB samples provides a cutoff ratio that divides circular from eccentric orbits. Justesen and Albrecht (2021) derived the cutoff ratio for binaries with different temperatures. Such a ratio is not available for spectroscopic binaries, and therefore one can study only the cutoff period. When studying spectroscopic samples, Meibom and Mathieu (2005) suggested a more composite approach for determining the cutoff period of a binary sample. They considered the eccentricity as a piecewise function of the period, \(P\), with zero eccentricity for \(P<P_{\rm cut}\) and a smooth rising asymptotic function that goes up to eccentricity close to unity for \(P>P_{\rm cut}\). The idea behind the Meibom and Mathieu (2005) approach is that even binaries with \(P>P_{\rm cut}\) are subject to the circularization processes that decreased the primordial eccentricity, but were not strong enough to make the orbit circular. The reduced eccentricities of those binaries depend on the primordial orbital period and eccentricity and the strength of the tidal interaction. The asymptotic function reflects the present characteristics of the whole binary sample and not only the very short-period circularized binaries (see this approach applied, for example, by Nine et al. (2020) and with a different version by Zanazzi (2022)). Following Mazeh (2008) (see also, for example, Pourbaix et al., 2007) we consider here a piece-wise asymptotic function that presents an upper envelope for the eccentricity as a function of the period for a given sample. 
The function marks the edge of the populated part of the period-eccentricity plane. For a given period, there are almost no binaries with eccentricity larger than the asymptotic-function value at this period, while there is a high probability of finding binaries with eccentricities smaller than that value. In short, we view the period-eccentricity diagram as divided into two regimes, separated by an upper-envelope asymptotic function. We use Mazeh et al. (2016) algorithm to find the best upper envelope by a maximum-likelihood approach, given a sample of spectroscopic binaries. Similar to Meibom and Mathieu (2005), we define the cutoff period, \(P_{\rm cut}\), as the period at which the upper envelope cuts the eccentricity axis, i.e., \(e=0\). In this approach, \(P_{\rm cut}\) denotes both the border between the circular and eccentric binaries and the location of the upper-envelope base. The large cleaned _Gaia_ sample of Bashi et al. (2022) allows us to apply our technique to a few sub-samples of the _Gaia_ SB1s. We concentrate on the binaries with main-sequence (MS) A-, F- and G-primaries, identified by their position on the _Gaia_ Colour-Magnitude Diagram (CMD). Similar to the Justesen and Albrecht (2021) analysis, we divided the sample into temperature bins, and derived their corresponding \(P_{\rm cut}\) independently, finding a clear dependence on the stellar temperature of each bin. Section 2 details the _Gaia_ sample used, including removing binaries with unconstrained eccentricities. Section 3 presents our algorithm to obtain the best parameters of the upper envelope, given a binary sample, and Section 4 tests our algorithm for six simulated binary samples. Section 5 presents our main result -- the dependence of \(P_{\rm cut}\) on the primary temperature of each sub-sample. Section 6 shows that the results are inconsistent with the theory which assumes circularization occurs during the stellar MS phase. Section 7 summarizes our results and discusses their possible implications. ## 2 The sample We wish to study the eccentricity distribution of the _Gaia_ binaries, as a function of the stellar temperature in particular. We, therefore, consider a uniform sample of binaries with unevolved primaries, of derived effective temperature, and orbits with well-constrained eccentricity. Such a reduced sample is composed in this section. ### The main-sequence binaries We first cross-matched the clean sample of Bashi et al. (2022) with the TESS Input Catalog1(TIC; Stassun et al., 2018; Paegert et al., 2021), yielding 89 742 SB1s. Fig. 1 shows these binaries on the _Gaia_ Color-Magnitude Diagram (CMD), clearly displaying the difference between the MS and the evolved stars. To identify the MS binaries we used the limits Footnote 1: [https://tess.mit.edu/science/tess-input-catalogue/](https://tess.mit.edu/science/tess-input-catalogue/) \[{\rm MS:}\begin{cases}-5.5+10.5(G_{\rm BP}-G_{\rm RP})<G,&\text{if $G_{\rm BP}-G_{\rm RP}\leq 1$} \\ 3.1+2(G_{\rm BP}-G_{\rm RP})<G,&\text{if $G_{\rm BP}-G_{\rm RP}>1$}\end{cases}, \tag{1}\] which left us with 17 47 binaries. ### Removing orbits with unconstrained eccentricity Fig. 2 shows the period-eccentricity (\(P\)-\(e\)) diagram for the MS-reduced sample. Two features are clearly seen: * An upper envelope that rises from \(e\sim 0.1\) at period of \(P\sim 3\) day towards \(e\sim 0.8\) at period of \(P\sim 100\) day. * A high concentration of binaries centered at \(P\sim 5\) day and low eccentricity of \(\sim 0.03\). 
To check which of these low eccentricities are real (see also a discussion by Bashi et al., 2022), we used here the Lucy and Sweeney (1971) approach (see also Hara et al., 2019, in the context of the eccentricity of planetary orbits), who suggested that the actual eccentricity uncertainty is larger than the one derived by the regular Figure 1: Colour-Magnitude Diagram (CMD) of the clean _Gaia_ sample of Bashi et al. (2022) binaries. The colour scale represents the number of binaries falling into each hexagonal bin. The dashed line (see text) delineates our border of the MS binaries. analysis. Thus, Lucy & Sweeney (1971) suggested that many derived eccentricities of their time might be spurious. We, therefore, plot in Fig. 3 the ratio between the derived _Gaia_ eccentricities and their corresponding uncertainties as a function of the orbital periods. Two emerging clusters can be seen for binaries with orbital periods shorter than 100 days. Using K-means clustering (Pedregosa et al., 2011)2, we found the center of the bottom cluster at \((P,e/\Delta e)=(6.37,\,1.32)\), while the upper-cluster center is at \((19.38,\,10.15)\). Using the Lucy & Sweeney (1971) 5% level of significance for testing the hypothesis that \(e=0\), we found most binaries in the lower cluster to be under the threshold of \(e/\Delta e=2.45\). To define a sample of binaries with well-constrained eccentricities, we drew a line perpendicular to the line connecting the two cluster centres through the middle of the gap between the two clusters. This has left us with a sample of \(10\,325\) binaries with at least 95% confidence for non-circular eccentricity, and \(7\,149\) binaries with unconstrained eccentricities. Footnote 2: [https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html) Fig. 4 presents the two sub-samples. As can be clearly seen, the dense area of near-circular binaries is populated by the unconstrained eccentricity sources only. Next, we wish to test whether the clear distinction between the two sub-samples of Fig. 4 is a result of an age difference, as one might argue that the nearly circular binaries are old MS binaries that had already completed their tidal circularization. We, therefore, plot in Fig. 5 the age-effective temperature dependence of each of the two sub-sample. The left panel includes a sample of 4 530 binaries with unconstrained eccentricities for which _Gaia_ FLAME stellar effective temperature and age (age_flame) are available. Similarly, the right panel includes a sample of 6 389 sources with non-circular solutions. Overall, a strong correlation between effective temperature and age is evident in both panels, as expected for MS stars, with no apparent age difference. In what follows, we concentrate on analysing the sub-sample of 10 325 binaries with well-constrained eccentricities, deriving the dependence of \(P_{\rm cut}\) on the stellar temperature. We will also show that \(P_{\rm cut}\) does not display a dependence on stellar age. ## 3 Fitting an upper envelope to the period-eccentricity diagram The right panel of Fig. 4 clearly shows that the binary eccentricities do not present a clear cutoff period that separates the circular and the eccentric binaries, but, instead, show a gradual rise toward high values. Furthermore, the eccentricities are not concentrated around an upper envelope but are spread out between almost zero eccentricity and the upper bound. 
The figure suggests that the base of the envelope can serve as a better definition for the cutoff period. Any circularization analysis should try to account for this feature. To better characterize the \(P\)-\(e\) diagram we define an upper envelope as \[f(P)=\begin{cases}1-\left(\frac{P_{\rm cut}}{P}\right)^{1/\tau}&P_{\rm cut}< P\;,\\ 0&P_{\rm cut}\geq P\;,\end{cases} \tag{2}\] where \(P_{\rm cut}\) is the cutoff period (see also Zanazzi, 2022) and \(\tau\) is a dimensionless parameter that determines the slope of the envelope. The function gets a value of zero at \(P_{\rm cut}\) for any value of \(\tau\), and asymptotically rises to unity as \(P\) gets substantially larger than \(P_{\rm cut}\). Note that the function is simple with only two free parameters -- \(P_{\rm cut}\) and \(\tau\). \(P_{\rm cut}\) defines both the separation between the circular and eccentric orbits and the location of the upper envelope. To get a sense of the role of \(\tau\), one can plot (see the bottom panel of Fig. 6) the analysed sample in the \((\log P,\,\log(1-e))\) plane. In this plane, the upper envelope turns into a straight-line lower envelope with a slope of \(-1/\tau\). Another way to follow the meaning of \(\tau\) is to consider, for example, \(P_{0.8}(P_{\rm cut},\tau)\) -- the period for which the upper envelope gets the value of \(e=0.8\). It is easy to show that \(P_{0.8}=5^{\tau}P_{\rm cut}\). In other words, \(\tau\) measures by how much we have to Figure 3: Ratio between the derived eccentricities and their corresponding uncertainties as a function of the orbital period. Red dashed horizontal line marks the \(e/\Delta e=2.45\) threshold of Lucy & Sweeney (1971). Figure 2: Period-eccentricity diagram for the cleaned MS SB1 sample. increase \(P\), relative to \(P_{\rm cut}\), for the upper envelope to get a value of \(e=0.8\). We then follow Mazeh et al. (2016) and use a modified Fermi function to describe an assumed probability density function (PDF) of binaries above and below the upper envelope in the \((\log P,e)\) plane. The PDF converges to zero above the envelope, and to a positive constant below it. The transition region between the two parts is along the envelope, with a width characterized by a parameter \(\delta\). The probability density at a point \((\log P,e)\) is a function of its distance \(d(\log P,e;\,P_{\rm cut},\tau)\) to the envelope curve along the \(\log P\) axis and can be expressed as: \[d=\log P-\log f^{-1}(e)=\log\left[\frac{P}{P_{\rm cut}}(1+e)^{\tau}\right]\,, \tag{3}\] Figure 4: Period-eccentricity diagram for the cleaned SB1 sample of MS unconstrained eccentricities (left) and non-circular (right) binaries. Figure 5: Stellar Age-effective temperature scatter plot of the unconstrained eccentricities sample (left) and non-circular (right) samples. A similar trend between the two parameters is evident in both panels. where \(f^{-1}(e)\) is the inverse function of \(f(P)\) defined in equation 2. The distance \(d\) is positive for points to the right and below the envelope and negative on the other side. The PDF is \[\mathcal{F}_{\rm PDF}(P,e;\,P_{\rm cut},\tau,\delta)=\mathcal{A}\frac{1}{1+ \exp(-d/\delta)}\, \tag{4}\] where the constant \(\mathcal{A}\), in units of \(1/[\log(P/1\,{\rm day})\,e]\), is defined such that the 2D integral over the \((\log P,\,e)\) plane, using the three parameters -- \(P_{\rm cut},\tau,\delta\), equals unity. 
To get a sense of this function, one can see that when \(d=0\) the PDF value is simply \(\mathcal{A}/2\), while at \(d=\pm\delta\) it changes between \(0.27\mathcal{A}\) and \(0.73\mathcal{A}\). Obviously, the probability density function is of a statistical nature. One can find binary systems above the upper envelope, either due to erroneous measurements or invalid orbits or because of some special astrophysical circumstances, like a high primordial eccentricity or a distant faint companion that pumped eccentricity into the spectroscopic orbit (e.g., Mazeh & Shaham, 1979; Mazeh, 1990; Jha et al., 2000). The advantage of our approach is that for large enough samples the derivation of the envelope is only slightly sensitive to these 'outlier' binaries, as the fit is to the population as a whole. On the other hand, the results are not biased by the number of binaries in the sample. For each set of parameters of the PDF -- \(P_{\rm cut}\), \(\tau\) and \(\delta\), we derive the likelihood of a given sample of \(N\) binaries, as \[\mathcal{L}=\prod_{i=1}^{N}\mathcal{F}_{\rm PDF}(P_{i},e_{i};P_{\rm cut},\tau,\delta). \tag{5}\] where \(e_{i}\) and \(P_{i}\) are the eccentricity and period of the \(i\)-th binary. We applied our algorithm to the sample of the _Gaia_ well-constrained eccentricities discussed above, using only a reduced sub-sample of \(4\,430\) binaries with orbital periods shorter than \(30\) days. This was done because the density of the binaries with periods larger than \(30\) days is clearly not constant, contrary to the assumption of our algorithm. To find the upper envelope of the reduced MS sample of Fig. 4 we ran an MCMC routine with \(50\) walkers and \(10^{4}\) steps, with uninformative priors on the free parameters: \(\log P_{\rm cut}\sim\mathcal{U}(-1,1)\); \(\tau\sim\mathcal{U}(0,4)\) and \(\delta\sim\mathcal{U}(0.01,0.3)\). We used the Python emcee package (Foreman-Mackey et al., 2013) to find the parameter values (and their uncertainties) that maximize the sample likelihood. Fig. 6 shows our best-fit upper envelope with \(P_{\rm cut}=4.67\) day, \(\tau=1.70\) and \(\delta=0.09\). The bottom panel of the figure displays the analyzed sample in the \((\log P,\,\log(1-e))\) plane. In this plane, the upper envelope turns into a straight-line lower envelope with a slope of \(-1/\tau\), showing how \(\tau\) controls the slope of the upper (lower) envelope. ## 4 Testing our algorithm on simulated binary samples To test our upper-envelope algorithm, we performed a simplified simulation of six binary samples with different circularization efficacy coefficients, all starting with the same flat distribution of eccentricity and log-period. The binaries were numerically evolved with Hut (1981) (see also Terquem & Martin, 2021) equations (Zahn, 1975, 1977; Zahn & Bouchet, 1989; Zahn, 1989) for stars with convective envelopes: \[\frac{de}{dt} =-\frac{18}{7}B\Big{(}\frac{P}{1\,{\rm day}}\Big{)}^{-16/3}\frac{ e}{(1-e^{2})^{13/2}}\Bigg{[}1+\frac{15}{4}e^{2}+\frac{15}{8}e^{4}+\frac{5}{64} ^{6}\] \[\qquad-\frac{11}{18}(1-e^{2})^{3/2}\Big{(}1+\frac{3}{2}e^{2}+ \frac{1}{8}e^{4}\Big{)}\Bigg{]},\ \ {\rm and}\] \[\frac{d}{dt}\Big{(}\frac{P}{1\,{\rm day}}\Big{)} =-\frac{6}{7}B\Big{(}\frac{P}{1\,{\rm day}}\Big{)}^{-13/3}\frac{ 1}{(1-e^{2})^{15/2}}\Bigg{[}1+\frac{31}{2}e^{2}+\frac{255}{8}e^{4}\] \[\qquad+\frac{185}{16}e^{6}+\frac{25}{64}e^{8}-(1-e^{2})^{3/2} \Big{(}1+\frac{15}{2}e^{2}+\frac{45}{8}e^{4}+\frac{5}{16}e^{6}\Big{)}\Bigg{]}. 
\tag{6}\] We chose the constant \(B\) to control the circularization efficacy over the binary lifetime, which we define as going from \(t=0\) to \(t=1\), where \(t\) is unitless. Note that we ignore the changes in the stellar parameters, like radius and structure, and the stellar rotation, which must account for the varying angular momentum of the orbital motion. Furthermore, equations 6 are a simplified version of the actual tidal interaction, neglecting the dependence on the binary mass ratio, for example. Nevertheless, they are good enough for our purpose here (see, for example, the discussion of Terquem, 2021). We note that if we keep only the leading order of the eccentricity Figure 6: Top: Log period – eccentricity diagram for the cleaned SB1 sample of MS binaries, after removing orbits with unconstrained eccentricities (see text). The red dashed line marks our upper envelope best-fit model, based on binaries with an orbital period shorter than \(30\) days, marked by a vertical dotted line (see Section 3 for more details). Bottom: The same sample plotted on the \((\log P,\,\log(1-e))\) plane. The same upper envelope is now a straight-line lower envelope, with a slope of \(\tau\). in equations 6 (i.e., \(e<<1\)), we get: \[\frac{de}{dt}=-eB\Big{(}\frac{P}{1\;{\rm day}}\Big{)}^{-16/3},\;\;{\rm and} \tag{7}\] \[\frac{d}{dt}\Big{(}\frac{P}{1\;{\rm day}}\Big{)}=-\frac{57}{7}e^{2 }B\Big{(}\frac{P}{1\;{\rm day}}\Big{)}^{-13/3}.\] The approximated equations allow one to derive the eccentricity as a function of the period for a given binary \[\ln(P/P_{0})=\frac{57}{14}(e^{2}-e_{0}^{2})\;, \tag{8}\] where \(P_{0}\) and \(e_{0}\) are the period and eccentricity at the starting point of the circularization track. This equation does not predict the amount of change the eccentricity and period will go through in a given timespan but instead draws an evolutionary trajectory in the (\(\log P\), \(e\)) plane. Table 1, as well as the six panels of Fig. 7, present the results of the integration through the "lifetime" of the samples, for six different values of the circularization parameter, or alternatively for different lifetimes; these values span a range of three orders of magnitudes. Each sample contains 500 binaries of the same lifetime and circularization effectiveness \(B\). Other simulations, not shown in the figure, showed that the best-fit obtained parameters do not depend on the number of binaries in the sample; only the uncertainties of the parameters do. The panels clearly show that the separation between the circular and eccentric orbit is not a straight line but has a curved shape, supporting our approach that uses an upper envelope to characterize the period-eccentricity relation. Those envelopes are included in the figure, together with their corresponding cutoff periods. The relation between the cutoff period \(P_{\rm cut}\) and the circularization parameter \(B\) is presented in Fig. 8 by a tight linear relation, \(\log P_{\rm cut}=(0.198\pm 0.011)\,\log(B)-(0.231\pm 0.042)\), implying that \[P_{\rm cut}\simeq B^{3/16}\;. \tag{9}\] This can be traced back to equations 7 by noting that for small eccentricity the logarithmic derivative of \(e\) is approximated by \(\frac{1}{e}\,\frac{d}{dt}\propto BP^{-16/3}\). \(P_{\rm cut}\) is the period for which \(\frac{\Delta e}{e}\sim 1\) during the "lifetime" of the binary, so that \(BP_{\rm cut}^{-16/3}\sim 1\). 
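The simulated samples described above can be reproduced with a few lines of Python. The sketch below integrates equations 6 for a single binary over the unit lifetime with SciPy; evolving a few hundred binaries drawn from a flat \((e,\,\log P)\) distribution in the same way produces panels like those of Fig. 7. The function name, the initial condition and the value of \(B\) are illustrative choices only.

```python
from scipy.integrate import solve_ivp

def tidal_rhs(t, y, B):
    """Right-hand side of equations 6; y = (e, P), with P in days."""
    e, P = y
    e2 = e * e
    de = (-18.0 / 7.0 * B * P ** (-16.0 / 3.0) * e / (1.0 - e2) ** 6.5
          * (1.0 + 15.0 / 4.0 * e2 + 15.0 / 8.0 * e2 ** 2 + 5.0 / 64.0 * e2 ** 3
             - 11.0 / 18.0 * (1.0 - e2) ** 1.5 * (1.0 + 1.5 * e2 + 0.125 * e2 ** 2)))
    dP = (-6.0 / 7.0 * B * P ** (-13.0 / 3.0) / (1.0 - e2) ** 7.5
          * (1.0 + 31.0 / 2.0 * e2 + 255.0 / 8.0 * e2 ** 2 + 185.0 / 16.0 * e2 ** 3
             + 25.0 / 64.0 * e2 ** 4
             - (1.0 - e2) ** 1.5 * (1.0 + 7.5 * e2 + 45.0 / 8.0 * e2 ** 2 + 5.0 / 16.0 * e2 ** 3)))
    return [de, dP]

# One binary, evolved from t = 0 to t = 1 (illustrative starting point and B).
sol = solve_ivp(tidal_rhs, (0.0, 1.0), y0=[0.5, 5.0], args=(300.0,),
                rtol=1e-8, atol=1e-10)
e_final, P_final = sol.y[:, -1]
```

For \(e\ll 1\), replacing the right-hand side with the linearised equations 7 makes the integration follow the analytic trajectory of equation 8.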
\begin{table} \begin{tabular}{c c c c} \hline B & \(P_{\rm cut}\) [day] & \(\tau\) & \(\delta\) \\ \hline 100 & \(1.42\pm 0.05\) & \(1.61\pm 0.05\) & \(0.020\pm 0.007\) \\ 500 & \(2.04\pm 0.05\) & \(1.62\pm 0.04\) & \(0.020\pm 0.005\) \\ 1500 & \(2.57\pm 0.07\) & \(1.66\pm 0.04\) & \(0.022\pm 0.005\) \\ 6000 & \(3.24\pm 0.08\) & \(1.70\pm 0.05\) & \(0.018\pm 0.005\) \\ 25000 & \(4.35\pm 0.08\) & \(1.67\pm 0.04\) & \(0.013\pm 0.003\) \\ 100000 & \(5.75\pm 0.12\) & \(1.72\pm 0.05\) & \(0.015\pm 0.004\) \\ \hline \end{tabular} \end{table} Table 1: Best-fitted values for \(P_{\rm cut}\), \(\tau\), \(\delta\) of our simulated binary sample of 500 binaries for six bins of circularization parameter \(B\) Figure 7: Simulated binary samples of 500 binaries each that were integrated through the circularization process following equations 8. The red lines show the upper envelopes, while the red area marks the transition region of \(\pm\delta\) width along the envelope. ## 5 Cutoff period vs. effective temperature in MS binaries The large number of _Gaia_ MS binaries at hand enables us to look for the dependence of the cutoff period on the stellar temperature, which is readily available for most binaries. Fig. 9 shows a histogram of stellar temperatures of the 4 376 orbits we have at hand (see above). We proceeded with the 3 959 binaries in the range \(5300-7700\) K. We divided those binaries into 18 temperature bins and fitted an upper envelope to each sub-sample with the our algorithm. We show in Fig. 10 the upper envelope best-fit model (red curve) and transition region of \(\pm\delta\) width along the envelope (red area) of these binary samples. The results are summarized in Table 2 and Fig. 11, where the points and error bars mark our median and 16%, 84% percentiles. The figure and the table offer a few features. * \(P_{\rm cut}\) drops linearity from \(\sim 6.5\) day to \(\sim 2.5\) day when moving from \(T_{\rm eff}\) of 5500 to 6800. * The linear model for the 13 bins is \[\frac{P_{\rm cut}}{\rm day}=A+B\left(\frac{T_{\rm eff}}{1000K}\right)\,,\] (10) where \(A=27.6\pm 2.5\) and \(B=-3.69\pm 0.39\). * The linear fit is quite tight, suggesting that the uncertainties in \(P_{\rm cut}\) are overestimated by a factor of \(\sim 1.7\). This could be expected, as the typical uncertainty in \(T_{\rm eff}\) is of the same order as the bin width we used, inserting extra noise into the bins. * 6800 K range (\(\sim 3\) day), and might be also flat at 5600 - 5300 K. * A possible small jump of \(P_{\rm cut}\) might be seen at the Kraft (1967) break of 6100 K. * There is a strong correlation between \(P_{\rm cut}\) and \(\tau\) of the 18 bins -- the shorter \(P_{\rm cut}\) the larger \(\tau\) is. This reflects the fact that the shape of the different upper envelopes is such that all reach the same eccentricity at a period of \(\sim 30\) day. This can be explained by the fact that binaries with such relatively long periods retain their primordial eccentricities and therefore do not show any dependence on \(T_{\rm eff}\). \begin{table} \begin{tabular}{l c c c c} \hline \(T_{\rm eff}\) [k] & N & \(P_{\rm cut}\) [day] & \(\tau\) & \(\delta\) \\ \hline [MISSING_PAGE_POST] \hline \end{tabular} \end{table} Table 2: Best-fitted values of \(P_{\rm cut}\), \(\tau\), \(\delta\) for 18 temperature bins of the _Gaia_ MS spectroscopic binaries. Figure 8: \(P_{\rm cut}\) as function tidal dissipation effectiveness \(B\) for six simulated samples. 
Red-dotted line, \(\log P_{\rm cut}=(0.198\pm 0.011)\,\log(B)-(0.231\pm 0.042)\), marks the best-fit linear model. Figure 9: Stellar effective temperature histograms of our non-circular MS, \(P<30\) day, binary sample. Inside the range marked by red vertical dotted lines (\(T_{\rm eff}=5300-7700\)) there are 3 959 binaries. Figure 10: Period-eccentricity diagram with fitted upper envelope (see Table 2 for details) in red dashed line for the 18 sub-samples of the MS _Gaia_ binaries. The red area marks the transition region of \(\pm\delta\) along the envelope. ### Cutoff period dependence on stellar age? As shown in Fig. 5, the MS _Gaia_ binaries show a strong correlation between stellar temperature and age. In this sub-section we examine the dependence of \(P_{\rm cut}\) on both stellar parameters and show that the dependence on age, is weak if existing at all. We divided the age-temperature plane into evenly spaced bins of 250 K and 2 Gyr, and derived \(P_{\rm cut}\) for the 14 bins that contain more than 500 binaries. These \(P_{\rm cut}\) values are colour coded in Fig. 12. We fitted to the 14 bins a two-dimensional function of \[\frac{P_{\rm cut}}{\rm day}=A_{\rm 2D}+B_{\rm 2D}\left(\frac{T_{\rm eff}}{1000 K}\right)+C_{\rm 2D}\left(\frac{\rm Age}{\rm Gyr}\right)\,, \tag{11}\] and ran an MCMC procedure to find the best values. We obtained \(A_{\rm 2D}=25.5\pm 6.9\), \(B_{\rm 2D}=-3.46\pm 1.01\) and \(C_{\rm 2D}=0.08\pm 0.11\). This shows that \(P_{\rm cut}\) strongly depends on stellar temperature and does not depend on the stellar age. ## 6 Approximated scaling law of circularization In this section, we try to scale the tidal interaction with the stellar mass and radius and the orbital period, in order to derive the theoretical expectation for the dependence of \(P_{\rm cut}\) on the stellar temperature and compare it to our observational results. According to Zahn (1975, 1977); Zahn & Bouchet (1989); Zahn (1989) and Zahn (2008), the timescale of the circularization is determined by the turbulent dissipation, in cool stars with convective envelopes, and radiative damping, in hotter stars with radiative envelopes. We consider here the circularization of stars with radiative envelopes only, as at least half of the _Gaia_ binaries analyzed here are of that type. In a very simplified way, we consider only the eccentricity decay and ignore the period change associated with it. This can be close to reality only when the eccentricity is relatively small. Nevertheless, this approximation is probably good enough for the purpose of this section. The approximation of the eccentricity derivative, to first order in \(e\), was already presented in equation 7 \[-\frac{1}{e}\frac{de}{dt}=B_{\rm rad}P^{-7}, \tag{12}\] where we follow the convention of Section 4. As emphasized there, changing the lifetime of a binary while keeping the circularization effectiveness is equivalent in our formulation to varying instead the \(B_{\rm rad}\) parameter by the same factor. Using Claret & Cunha (1997) equation 18 (see also Van Eylen Figure 11: \(P_{\rm cut}\) of 18 _Gaia_ sub-samples as a function of their corresponding temperature. Red-dotted line, \(P_{\rm cut}=-3.69\) (\(\pm 0.39\)) (\(T_{\rm eff}/1000\)K)+27.6 (\(\pm 2.5\)) day, marks the best-fit linear model ignoring the rightmost and four leftmost bins. 
et al., 2016, equation 3), we write \[B_{\rm rad}\propto\frac{R_{\rm s}^{9}}{M_{\rm*}^{2}}\mathcal{F}(q)E_{2}\,T_{\rm MS }\,, \tag{12}\] where \(\mathcal{F}(q)=q/(1+q)^{5/3}\) is a function of the mass ratio, \(E_{2}\) is a tidal constant evaluated numerically and \(T_{\rm MS}\) is the MS lifetime. Adopting Van Eylen et al. (2016) scaling of \(R_{\rm*}\propto M_{\rm*}^{0.8}\) and \(T_{\rm MS}\propto M_{\rm*}^{-2.9}\), assuming the observed-stars age is a substantial fraction of their MS lifetime, we get \[B_{\rm rad}\propto M_{\rm*}^{1.3}\mathcal{F}(q)E_{2}\,, \tag{13}\] Using equation 43 of Hurley et al. (2002), \(E_{2}\propto M_{\rm*}^{2.8}\), we finally get \[B_{\rm rad}\propto M_{\rm*}^{4.1}\mathcal{F}(q)\,, \tag{14}\] indicating that \(B_{\rm rad}\) is a monotonic _increasing_ function of stellar mass, and consequently of stellar temperature. Following Fig. 8 and Equation 9, we conclude that for radiative-envelope primaries \(P_{\rm cut}\propto B_{\rm rad}^{1/7}\) and therefore \[P_{\rm cut}\propto M_{\rm*}^{0.6}\,. \tag{15}\] This implies that \(P_{\rm cut}\) should also increase with stellar temperature, clearly in contrast with our results that show that \(P_{\rm cut}\) is _decreasing with stellar temperature (or mass)_ in the range of 5500 - 6700 K. ## 7 Discussion We used the cleaned _Gaia_ sample of spectroscopic binaries to study the distribution of the orbital eccentricity as a function of the binary period. The distribution is characterized by an upper envelope (Pourbaix et al., 2007; Mazeh, 2008) that starts at \(P_{\rm cut}\) and eccentricity zero and rises monotonically toward high eccentricities for longer periods. The assumption is that circularization processes induced by tidal interaction between the two stars shaped the eccentricity distribution. The _Gaia_ sample is a new opportunity to confront the theory of circularization with fresh data. We model the upper envelope by a simple function of two parameters, one of which is \(P_{\rm cut}\), while the other, \(\tau\), determines the rise of the envelope. We then follow the Mazeh et al. (2016) approach and use a modified Fermi function to describe a probability density distribution of the binaries above and below the upper envelope in the (\(\log P\), \(e\)) plane. The probability density function converges to zero above the envelope, and to a positive constant below it, with a transition region of a derived width. We use an MCMC routine to find the best parameters of the distribution, given a sample of orbits. The unprecedentedly large sample of _Gaia_ orbits with MS A-, F- and G-type primaries enables us to derive the dependence of the envelope, and \(P_{\rm cut}\) in particular, on the stellar temperature in the range of 5500 - 7500 K. To do that we divide the binaries into sub-samples of different temperatures, and fit the envelope of each sub-sample independently. Our main finding is that \(P_{\rm cut}\) presents a _linearly_ decreasing function of stellar temperature for the G and F stars -- from 6.5 day for 5700 K to \(\sim 2.5\) day for \(T_{\rm eff}\sim 6800\) K, at a rate of \(-3.7\) days/1000 K. The linear slope is at more than \(9\sigma\) significance. The uncertainties of the different \(P_{\rm cut}\) values are larger than the scatter around the linear fit. 
We suggest these are due to "imperfections" of the samples, the result of different primordial eccentricity and period distributions, binaries with different ages and mass ratios, and stellar temperature spread, as the \(T_{\rm eff}\) uncertainties are probably on the order of 100 K. It is quite surprising that, nevertheless, the linear dependence of \(P_{\rm cut}\) is so pronounced and tight over a range of temperatures that correspond to stellar envelopes of convective and radiative nature alike. In addition, \(P_{\rm cut}\) is probably flat for \(T_{\rm eff}\) at 5500 - 5700 K, and 6800 - 7500 K. At the well-known stellar Kraft brake of \(\sim 6100\) K (see Kraft, 1967) we possibly see a small jump, yet this is still barely significant. We do not see any reason to believe our results are due to some _Gaia_ observational bias. For example, we could not find any dependence of the quality score of Bashi et al. (2022), which estimates the degree of validity of the orbits, on stellar temperatures or orbital periods and eccentricities. As we have shown, the \(P_{\rm cut}\) trend revealed for the G- and F-stars is inconsistent with Zahn's circularization theory, when we assume that circularization took place during the stellar MS lifetime. This inconsistency could indicate that * Zahn dynamical tide theory is not accurate enough to account for the circularization processes. An assertion of studies suggested different approaches (e.g., Alexander, 1973; Hut, 1981; Tassoul, 1987; Hut et al., 1992; Dolginov & Smeel-Chakova, 1992; Goldman & Mazeh, 1991, 1994; Goldman, 2008; Witte & Savonije, 1999, 2001, 2002; Duguid et al., 2020; Zanazzi & Wu, 2021; Terquem, & Martin, 2021; Koenigsberger et al., 2021; Zanazzi, 2022; Barker, 2022; Preece et al., 2022; Wei, 2022), in stars with radiative envelopes in particular; see, for example, a heated discussion between Tassoul & Tassoul (1997) and Rieutord (1992) and Rieutord & Zahn (1997), or -- The eccentricity distribution of the binaries was determined during the pre-main-sequence (PMS) phase of these stars, when the stars were much larger and therefore the circularization processes much faster, as suggested by Mayor & Mermilliod (1984), and worked out by Zahn & Bouchet (1989) and later by Khaliullin & Khaliullina (2011, KK11), using updated PMS models; see also Terquem & Martin (2021). Figure 12: 2D histogram in evenly spaced bins of effective temperature and age. Colour represents \(P_{\rm cut}\) of _Gaia_ binaries in bins with more than 50 sources (otherwise white). Grey crosses mark median and 16%, 84% percentiles in each bin. The number of sources in each bin is displayed on the bin’s top right side. The latter conjecture is interesting, especially because it is coming from Zahn himself. Zahn & Bouchet (1989) claimed that for masses of 0.5 - 1.25\(M_{\odot}\), the circularization took place during the PMS phase with expected \(P_{\rm cut}\) between 7.2 and 8.5 day (see some observational evidence by Mathieu et al. 1992; Mathieu 1994; Melo et al. 2001). In fact, assuming the circularization occurred during the PMS phase, KK11 published a series of expected \(P_{\rm cut}\) for different stellar masses based on updated PMS evolutionary tracks. Although the tracks are prone to a few uncertainties, the table of KK11 displays a trend that might be similar to our results -- \(P_{\rm cut}\)_decreases as a function of the primary mass_. 
Understanding the theory of tidal circularization is crucial for understanding the evolution of short-period binaries (e.g., Hurley et al. 2002; Fragos et al. 2023). This has deep impact on how we think short-period binaries (e.g., Fabryck & Tremaine 2007), close triple systems (Mazeh & Shaham 1979; Naoz 2016; Toonen et al. 2022), cataclysmic binaries (e.g., Patterson 1984) and X-ray binaries (e.g., Bildsten & Cutler 1992; Podsiadlowski et al. 2002) have evolved. Even some of the black-hole merger models depend on tidal interaction (e.g., Antonini et al. 2017; Belczynski et al. 2002). Finally, tidal interaction could have a crucial role in the formation and evolution of exoplanets (Gu et al. 2003; Ida & Lin 2004; Ogilvie & Lin 2004; Terquen & Papaloizou 2007; Jackson et al. 2008; Rao et al. 2018), hot Jupiters (e.g., Rasio et al. 1996; Wu 2005; Ferraz-Mello et al. 2008; Leconte et al. 2010; Dawson & Johnson 2018) and their orbit alignment with the stellar spin (Dobbs-Dixon et al. 2004; Winn & Holman 2005; Fabrycky & Winn 2009; Scherrer et al. 2010; Lai 2012; Albrecht et al. 2012; Dawson 2014; Mazeh et al. 2015; Albrecht et al. 2021, 2022) in particular. In the future, five observational avenues might be of use in order to deepen our understanding of the tidal interaction in binary systems. First, one can use other samples of spectroscopic binaries, to confirm our results and to find the dependence of \(P_{\rm cut}\) on other parameters, like the mass ratio, stellar age and metallicity. Such samples include, for example, the next planned _Gaia_ release,3 and results of multi-object spectrographs, like RAVE (Matijevic et al. 2011), LAMOST (Cui et al. 2012), APOGEE (Price-Whelan et al. 2020) and the near-future 4MOST (de Jong et al. 2019). Footnote 3: [https://www.cosmos.esa.int/web/gaia/release](https://www.cosmos.esa.int/web/gaia/release) The second avenue, analyzing samples of eclipsing binaries, was already taken by North & Zahn (2003); Mazeh et al. (2006); Van Eylen et al. (2016) and Justesen & Albrecht (2021), as detailed in the introduction. The advantage of using EBs is the capability to accurately derive small eccentricities, which is not possible for SB1s. It would be interesting to use additional samples, like that of _Gaia_ (Mowlavi et al. 2022) and derive \(P_{\rm cut}\) as a function of \(T_{\rm eff}\). The large photometric surveys at work, like ZTF (Chen et al. 2020) and ASAS-SN (Paczynski et al. 2006; Rowan et al. 2022) can yield large samples of additional eclipsing binaries. The third avenue has to do with the obvious realization that the long-term tidal interaction is not limited to circularization but acts to synchronize (see Khaliullin & Khaliullina 2010) and align the binary with the stellar rotation (e.g., Hurley et al. 2002; Naoz & Fabrycky 2014). Therefore, a sample of binaries should also display synchronization and alignment cutoff periods. It would be interesting if we could compare those periods with the circularization period and their dependence on stellar parameters. To follow the stellar rotation one can use the available large set of photometric light curves, like those of OGLE (e.g., Soszynski et al. 2016), _TESS_(Huang et al. 2020) and _Gaia_(Eyer et al. 2022), which can reveal the stellar rotation periods (McQuillan et al. 2013, 2014; Avallone et al. 2022). Synchronization can be assumed, for example, if one detects ellipsoidal variability, which is modulated with the binary period (Faigler & Mazeh 2011; Faigler et al. 2012; Green et al. 2022). 
The fourth avenue has to do with observed eccentric pseudosynchronized binaries (Hut 1981), discovered in the _Kepler_ lightcurves (e.g., Zimmerman et al. 2017; Saio & Kurtz 2022), which are going through strong tidal interaction. It would be interesting to compare their stellar rotation periods with the tidal theory expectation. In one of these systems, an orbital-period decay driven by tidal interaction has probably been observed (Ou et al. 2021). Finally, some observational evidence has recently been presented for the orbital decay of very close hot Jupiters, like WASP-12 (e.g., Yee et al. 2020; Turner et al. 2021; Wong et al. 2022); see also Harre et al. (2023), Yang & Wei (2022) and Rosario et al. (2022), based on the precise timings of transits that span more than a decade. Following the planetary period decay in real time (Yang et al. 2022) has the exciting potential to further constrain the theory of tidal interaction. When additional information is available, the observational evidence could be compared with a more realistic tidal model that might be combined with some conjecture about the primordial period-eccentricity distribution. Such a model may include other evolutionary mechanisms, like magnetic braking (e.g., Mestel 1968; Fleming et al. 2019) and interaction with accretion discs and/or third companions (e.g., Fabrycky & Tremaine 2007; Naoz 2016; Toonen et al. 2022), so it can account for the observed statistical features of the population of short-period binaries. ## Acknowledgements We are deeply indebted to the _NSS_ group and all the _Gaia_ team for producing a vast high-quality catalogue that enabled us to follow the tidal circularization of short-period binaries. We are extremely grateful to Josh N. Winn and Robert D. Mathieu for their illuminating comments and wise suggestions that have significantly improved this manuscript. The referee contributed very helpful comments on a previous version of the paper, helping us present our results in a clearer way. This research was supported by Grant No. 2016069 of the United States-Israel Binational Science Foundation (BSF) to TM, and Grant No. I-1498-303.7/2019 of the German-Israeli Foundation for Scientific Research and Development (GIF) to TM. This work has also made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC; [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. ## Data availability Data used in this study are available upon request from the corresponding author.
2309.00097
Erd\H os--Ko--Rado type results for partitions via spread approximations
In this paper, we address several Erd\H os--Ko--Rado type questions for families of partitions. Two partitions of $[n]$ are {\it $t$-intersecting} if they share at least $t$ parts, and are {\it partially $t$-intersecting} if some of their parts intersect in at least $t$ elements. The question of what is the largest family of pairwise $t$-intersecting partitions was studied for several classes of partitions: Peter Erd\H os and Sz\'ekely studied partitions of $[n]$ into $\ell$ parts of unrestricted size; Ku and Renshaw studied unrestricted partitions of $[n]$; Meagher and Moura, and then Godsil and Meagher studied partitions into $\ell$ parts of equal size. We improve and generalize the results proved by these authors. Meagher and Moura, following the work of Erd\H os and Sz\'ekely, introduced the notion of partially $t$-intersecting partitions, and conjectured, what should be the largest partially $t$-intersecting family of partitions into $\ell$ parts of equal size $k$. The main result of this paper is the proof of their conjecture for all $t, k$, provided $\ell$ is sufficiently large. All our results are applications of the spread approximation technique, introduced by Zakharov and the author. In order to use it, we need to refine some of the theorems from the original paper. As a byproduct, this makes the present paper a self-contained presentation of the spread approximation technique for $t$-intersecting problems.
Andrey Kupavskii
2023-08-31T19:35:27Z
http://arxiv.org/abs/2309.00097v2
# Erdos-Ko-Rado type results for partitions via spread approximations ###### Abstract. In this paper, we address several Erdos-Ko-Rado type questions for families of partitions. Two partitions of \([n]\) are \(t\)_-intersecting_ if they share at least \(t\) parts, and are _partially \(t\)-intersecting_ if some of their parts intersect in at least \(t\) elements. The question of what is the largest family of pairwise \(t\)-intersecting partitions was studied for several classes of partitions: Peter Erdos and Szekely studied partitions of \([n]\) into \(\ell\) parts of unrestricted size; Ku and Renshaw studied unrestricted partitions of \([n]\); Meagher and Moura, and then Godsil and Meagher studied partitions into \(\ell\) parts of equal size. We improve and generalize the results proved by these authors. Meagher and Moura, following the work of Erdos and Szekely, introduced the notion of partially \(t\)-intersecting partitions, and conjectured, what should be the largest partially \(t\)-intersecting family of partitions into \(\ell\) parts of equal size \(k\). In this paper, we prove their conjecture for all \(t,k\) and \(\ell\) sufficiently large. All our results are applications of the spread approximation technique, introduced by Zakharov and the author. In order to use it, we need to refine some of the theorems from their paper. As a byproduct, this makes the present paper a self-contained presentation of the spread approximation technique for \(t\)-intersecting problems. ## 1. Introduction For a positive integer \(n\), we use notation \([n]=\{1,\ldots,n\}\) and, more generally, \([a,b]=\{a,a+1,\ldots,b\}\). For a set \(X\), the notation \(2^{X}\) and \(\binom{X}{k}\) stand for the collection of all subsets and all \(k\)-element subsets of the set \(X\), respectively. A _family_ is a collection of subsets of \(2^{X}\) for some \(X\). This paper addresses some of the problems that lie in a big class of results in extremal combinatorics, which deal with intersection theorems. A family of sets is _intersecting_ if any two sets in the family have non-empty intersection, and is \(t\)_-intersecting_ if any two sets intersect in at least \(t\) elements. In their seminal paper, Erdos, Ko and Rado [6] determined the largest size of an intersecting family in \(2^{[n]}\) and \(\binom{[n]}{k}\) for all \(n,k\). Later, after a series of papers by Frankl [8], Wilson [17] and others, Ahlswede and Khachatrian [1] determined the size of the largest \(t\)-intersecting family in \(\binom{[n]}{k}\) for any \(n,k,t\). Intersecting questions were investigated for many other structures. See a great survey paper by Ellis [4] on the subject. The first result in this direction was due to Deza and Frankl [3], who determined the largest size of an intersecting family of permutations. We call two permutations \(\sigma,\pi\)_intersecting_ if for some element \(x\) we have \(\sigma(x)=\pi(x)\). Deza and Frankl also found the largest family of \(2\)- and \(3\)-intersecting permutations when and \(n-1\) is a prime number, respectively. They also made a conjecture concerning the size of the largest \(t\)-intersecting family of permutations. The progress on this problem was rather difficult. After a series of papers, Ellis, Friedgut and Pilpel [5] managed to solve it for any constant \(t\), provided \(n>n_{0}(t)\). Recently, the problem was resolved for any \(n\) and \(t\) that satisfy \(n>Ct\log^{2}t\) by Zakharov and the author [11]. This was later improved to \(n>Ct\) by Keller, Lifshitz, Minzer and Sheinfeld [10]. 
Early approaches to this question were algebraic, based on Hoffman-Delsarte type bounds and representation theory. The approach of [5] combines junta approximations, coming from Boolean Analysis, with representation theory. Zakharov and the author introduced a combinatorial technique of spread approximations that is based on the breakthrough in the Erdos-Rado sunflower problem due to Alweiss, Lovett, Wu and Zhang [2]. The approach of [10] is based on junta approximations and hypercontractivity, belonging to Boolean Analysis. Another class of EKR-type problems for objects of algebraic flavour deals with different classes of partitions. Until now, it was approached either using the Delta-system method or algebraic tools, based on the Delsarte-Hoffman bounds for suitable graphs. In this paper, we show that the technique of spread approximations, and thus an essentially combinatorial approach, allows to gain significant progress in these questions. Consider two partitions \(P=(P_{1},\ldots,P_{\ell_{1}})\) and \(Q=(Q_{1},\ldots,Q_{\ell_{2}})\) of \([n]\). We say that \(P\) and \(Q\)_\(t\)-intersect_ if they share at least \(t\) parts, that is, \(P_{i_{s}}=Q_{j_{s}}\) for two sets of \(t\) indices \(\{i_{1},\ldots,i_{t}\}\) and \(\{j_{1},\ldots,j_{t}\}\). We say that \(P\) and \(Q\)_partially \(t\)-intersect_ if there are parts \(P_{i},Q_{j}\), such that \(|P_{i}\cap Q_{j}|\geq t\). A family of partitions is _(partially) \(t\)-intersecting_ if any two partitions from the family _(partially) \(t\)-intersect_. ### General partitions Let \(\mathcal{B}_{n}\) be the family of all partitions of \([n]\). Recall that \(|\mathcal{B}_{n}|\) is the \(n\)-th Bell number \(B_{n}\). A natural example of \(t\)-intersecting family in \(\mathcal{B}_{n}\) is the family of all partitions that have \(\{1\},\{2\},\ldots,\{t\}\) as parts. The size of this family is \(B_{n-t}\). Ku and Renshaw [12] proved the following result. **Theorem 1** (Ku and Renshaw [12]).: _Let \(n\geq n_{0}(t)\) and assume that \(\mathcal{F}\subset\mathcal{B}_{n}\) is \(t\)-intersecting. Then \(|\mathcal{F}|\leq\mathcal{B}_{n-t}\), with equality only possible if \(\mathcal{F}\) is a family of all partitions with \(t\) fixed singletons._ In their proof, they require \(t\leq c\log n\). The theorem below gives a much better dependence between the parameters. **Theorem 2**.: _The conclusion of Theorem 1 is valid for \(n\geq Ct\log^{3}t\) with some absolute constant \(C\)._ ### Partitions with \(k\) parts Let \(\mathcal{P}_{n}^{\ell}\) be the family of all partitions of \([n]\) into \(\ell\) parts. Recall that \(|\mathcal{P}_{n}^{\ell}|=\genfrac{[}{]}{0.0pt}{}{n}{\ell}\), where the expression on the right is the Stirling number of the second kind. Peter Erdos and Laszlo Szekely [7] proved the following result. **Theorem 3** (Erdos and Szekely [7]).: _Let \(n\geq n_{0}(\ell)\) be large enough. Assume that \(\mathcal{F}\subset\mathcal{P}_{n}^{\ell}\) is \(t\)-intersecting. Then \(|\mathcal{F}|\leq\genfrac{[}{]}{0.0pt}{}{n-t}{\ell-t}\)._ Here, we improve their result as follows. **Theorem 4**.: _Let \(n,\ell,t\) be integers such that \(t\leq\ell-2\), \(n\geq 2\ell\log_{2}n\) and \(n>n_{0}\) is large enough. Assume that \(\mathcal{F}\subset\mathcal{P}_{n}^{\ell}\) is \(t\)-intersecting. Then \(|\mathcal{F}|\leq\begin{cases}n-t\\ \ell-t\end{cases}\). 
Moreover, if \(\mathcal{F}\) is not contained in a family of all partitions with \(t\) fixed singletons, then \(|\mathcal{F}|\leq\frac{1}{2}\begin{cases}n-t\\ \ell-t\end{cases}\)._ Note that the question is trivial for \(t=\ell-1\), since if two \(k\)-partitions have \(\ell-1\) common parts then the last part is also common, so they coincide. ### Partitions with fixed profile Let \(P=(k_{1},\ldots,k_{\ell})\) be a non-decreasing sequence of positive integers, called a _profile_, and put \(n=\sum_{i=1}^{\ell}k_{i}\). Consider the family of partitions \(\mathcal{U}_{P}\) of \(n\) into \(\ell\) parts, where the \(i\)-th part has size \(k_{i}\) (_partitions with profile \(P\)_). A _canonical \(t\)-intersecting family_ of such partitions is \(\mathcal{A}_{P,t}^{X}\), defined by a \(t\)-tuple \(X\) of disjoint sets \(X_{1},\ldots,X_{t}\), where \(|X_{i}|=k_{i}\), and consists of all partitions that contain each \(X_{i}\) as one of its parts. A \((k,\ell)\)-partition is a particular type of profiled partition, arising when \(k_{1}=\ldots=k_{\ell}=:k\). In other words, it is a partition of \([k\ell]\) into exactly \(\ell\) blocks, each of size \(k\). Let \(\mathcal{U}_{k,\ell}\) stand for the family of all \((k,\ell)\)-partitions, and put \(u_{k,\ell}:=|\mathcal{U}_{k,\ell}|\). Meagher and Moura [13] proved the following theorems. **Theorem 5** ([13]).: _Fix positive integers \(k,\ell\). Let \(\mathcal{F}\subset\mathcal{U}_{k,\ell}\) be an intersecting family of \((k,\ell)\)-partitions. Then \(|\mathcal{F}|\leq u_{k,\ell-1}\), and the equality is only possible for canonical intersecting families._ **Theorem 6** ([13]).: _Fix positive integers \(k,\ell,t\). Suppose that either \(k\geq k_{0}(\ell,t)\), or \(k\geq t+2\) and \(\ell\geq\ell_{0}(k,t)\). Let \(\mathcal{F}\subset\mathcal{U}_{k,\ell}\) be a \(t\)-intersecting family of \((k,\ell)\)-partitions. Then \(|\mathcal{F}|\leq u_{k,\ell-t}\), and the equality is only possible for canonical \(t\)-intersecting families._ In the theorem below, we extend their results to profiled partitions from a very large class of profiles, significantly improve the dependence between the parameters, and give a strong stability result. **Theorem 7**.: _Let \(t,\ell\) be positive integers and \(P=(k_{1},\ldots,k_{\ell})\) be a profile. Assume that the following holds: \(\ell\geq\ell_{0}\), where \(\ell_{0}\) is some absolute constant; \(t\leq\ell/2\); \(k_{t+1}\geq 2\). If \(\mathcal{F}\subset\mathcal{U}_{P}\) is \(t\)-intersecting then \(|\mathcal{F}|\leq|\mathcal{A}_{P,t}^{X}|\). Moreover, if \(\mathcal{F}\) is not contained in \(\mathcal{A}_{P,t}^{X}\) for some \(X\) then \(|\mathcal{F}|\leq\frac{1}{2}|\mathcal{A}_{P,t}^{X}|\)._ ### Partially \(t\)-intersecting partitions In the notation of the previous subsection, let \(P=(k_{1},\ldots,k_{\ell})\) be a profile that additionally satisfies \(k:=k_{\ell}\geq t\). Put \(n=\sum_{i}k_{i}\). What is the largest partially \(t\)-intersecting subfamily of \(\mathcal{U}_{P}\)? A natural candidate is a _canonically partially \(t\)-intersecting family_ \(\mathcal{C}_{P}^{T}\subset\mathcal{U}_{P}\), where \(T\) is a set of size \(t\), and \(\mathcal{C}_{P}^{T}\) consists of all partitions from \(\mathcal{U}_{P}\) that fully contain \(T\) in one of its parts. For the case of \((k,\ell)\)-partitions, the _canonically partially \(t\)-intersecting family_ \(\mathcal{C}_{k,\ell}^{T}\subset\mathcal{U}_{k,\ell}\) consists of all partitions that have a part containing a fixed set \(T\), \(|T|=t\).
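For concreteness, the two intersection notions for partitions can be checked directly. The short Python sketch below uses our own function names and represents a partition as a list of sets; the toy pair of \((3,2)\)-partitions is chosen only to show that two partitions may partially \(2\)-intersect without sharing a single part.

```python
def t_intersecting(P, Q, t):
    """True if the two partitions share at least t parts."""
    return len({frozenset(p) for p in P} & {frozenset(q) for q in Q}) >= t

def partially_t_intersecting(P, Q, t):
    """True if some part of P meets some part of Q in at least t elements."""
    return any(len(set(p) & set(q)) >= t for p in P for q in Q)

# Toy example: two (3, 2)-partitions of {1, ..., 6}.
P = [{1, 2, 3}, {4, 5, 6}]
Q = [{1, 2, 4}, {3, 5, 6}]
assert not t_intersecting(P, Q, 1)        # no common part
assert partially_t_intersecting(P, Q, 2)  # {1, 2, 3} and {1, 2, 4} share {1, 2}
```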
Meagher and Moura [13] conjectured the following. **Conjecture 1** (Meagher and Moura, [13]).: _Let \(\ell,k,t\) be integers and \(\mathcal{F}\subset\mathcal{U}_{k,\ell}\) be a partially \(t\)-intersecting family. Then, for a \(t\)-element \(T\subset[k\ell]\),_ \[|\mathcal{F}|\leq|\mathcal{C}_{k,\ell}^{T}|.\] Note that any two partitions are partially \(1\)-intersecting, and thus the largest partially \(1\)-intersecting family is \(\mathcal{U}_{k,\ell}\). Meagher-Moura conjecture was proved by Godsil and Meagher [9] for \(k=t=2\) and all \(\ell\). Meagher, Shirazi and Stevens [14] proved it for \(t=2\) and any \(k\), provided \(\ell\) is sufficiently large. The approaches in both papers were algebraic. We prove the following theorem. **Theorem 8**.: _The Meagher-Moura conjecture is true for any \(k,t\), provided \(\ell\) is sufficiently large. Moreover, the only extremal examples have the form \(\mathcal{C}_{k,\ell}^{T}\) for some \(T\)._ This could be generalized to a much larger class of profiled partitions. **Theorem 9**.: _For any \(k,t\) there exists \(\ell\) such that for any \(P\) as above every largest partially \(t\)-intersecting family of \(\mathcal{U}_{P}\) is of the form \(\mathcal{C}_{P}^{T}\) for some \(T\)._ The proof that we present for Theorem 8 goes almost verbatim for Theorem 9, except for the more complicated notation and some slightly different estimates. Thus, we decided to omit it. We also note that in both settings the proofs give a stability result that, informally speaking, tells that any family that is within constant factor of the size of the extremal family must be very close to one of the extremal families. ### The structure of the remainder of the paper In order to prove the theorems, we use the spread approximation method. However, we need to make some improvements in the main ingredients of the method, relaxing the conditions that we impose on the families, as compared to [11]. For this reason, we present the proofs of the results from [11], taking into account the requirements needed for this paper. The proofs, however, stay essentially the same, which makes this paper a self-contained presentation of the version of the spread approximation technique for \(t\)-intersecting problems. In the next section we give an outline of the proofs of our theorems. In Section 3, we present the parts of the spread approximation method that we use in this paper. In Section 4, we give the proofs of our main results. We note that in Section 4 we present certain estimates for Bell numbers, such as bounding the ratio of consecutive Bell numbers, or the lower bound for the number of partitions with parts of sizes at least \(2\), which may be of some use. ## 2. Sketches of the proofs As we have already mentioned, the proofs of our Theorems are based on the spread approximation method, proposed by Zakharov and the author [11]. The steps of the proof are summarized below. 1. Reformulate the problem in terms of families of sets. Since the problems we deal with are for families of partitions, and the method of spread approximation is for families of sets, we need to give an adequate set interpretation to the problems. Theorems 2, 4, and 7 are based on a more straightforward interpretation: each possible part in a partition is treated as a singleton, and thus partitions become sets of size at most \(n\) or exactly \(\ell\), depending on the question. Theorems 8 and 9 require a more complicated interpretation. 
There, each pair of elements from the original ground set becomes a singleton in the new ground set, and each partition is turned into a set of all pairs of singletons that lie in the same part. 2. Prove that the set family, corresponding to \(\mathcal{U}_{k,\ell}\), is sufficiently spread (this is a certain quasirandomness notion that is crucial for the spread approximations method and that we introduce in the next section). This is again more or less complicated for different settings. For example, in the proof of Theorem 2 to lower bound spreadness, we need to lower bound the ratio \(B(n)/B(n-1)\). 3. Take the extremal intersecting family \(\mathcal{F}\) (depending on the setting) and apply the spread approximation theorem. Get a lower-uniformity family \(\mathcal{S}\), which encodes a family of "partial" partitions and that covers most of our family, as well as a small remainder \(\mathcal{F}^{\prime}\subset\mathcal{F}\) that is not covered by \(\mathcal{S}\). The approximation is given by Theorem 12. 4. Show that this lower-uniformity family \(\mathcal{S}\) is trivial for the extremal family: it consists of a single \(t\)-element set \(T\) (corresponding to different "partial" partitions depending on the setting). This is done using Theorem 13. 5. Show that, for an extremal family \(\mathcal{F}\), the remainder \(\mathcal{F}^{\prime}\) must be empty. This is more of an ad-hoc argument, which requires a significant amount of effort in some cases. In particular, for the case of \(\mathcal{B}(n)\), we need to lower bound the number of partitions with parts of sizes at least 2, and for the case of partially \(t\)-intersecting families, we need to lower bound the probability that two random partitions of the type as in Theorems 8, 9, have a not so small probability to not partially \(t\)-intersect. Such bounds allow us to say that what we "lose" because of having some partitions in \(\mathcal{F}^{\prime}\) cannot be compensated by what we "gain" by adding sets from \(\mathcal{F}^{\prime}\) (keeping in mind that we have upper bounds on the size of \(\mathcal{F}^{\prime}\)). For Theorems 4 and 7, steps 3 and 5 are omitted, since these families are sufficiently spread so that we can apply Theorem 13 directly to \(\mathcal{F}\) instead of \(\mathcal{S}\). Thanks to that, we get the strongest stability results in these cases. We note that Theorem 13 alone can be seen as a strengthening of one of the important parts of the Delta-system method. In the third and fourth step we need to do some refinement of the method, proposed by Kupavskii and Zakharov. ## 3. Spread approximations For a set \(X\subset[n]\) and a family \(\mathcal{G}\subset 2^{[n]}\) we use the following standard notation: \[\mathcal{G}(X):= \{A\setminus X:G\in\mathcal{G},X\subset A\},\] \[\mathcal{G}(\bar{X}):= \{A:A\in\mathcal{G},A\cap X=\emptyset\}.\] We think of \(\mathcal{G}(X),\mathcal{G}(\bar{X})\) as of a subfamilies of \(2^{[n]\setminus X}\). Let \(r>1\) be some real number. We say that a family \(\mathcal{F}\) is \(r\)_-spread_ if, for any set \(X\), \(|\mathcal{F}(X)|\leq r^{-|X|}|\mathcal{F}|\). We denote by \(\|\mathcal{F}\|\) the average size of a set sampled from \(\mathcal{F}\): \(\|\mathcal{F}\|:=\frac{1}{|\mathcal{F}|}\sum_{S\in\mathcal{F}}|S|\). Note that \(\|\mathcal{F}\|\) is at most the size of the largest set in \(\mathcal{F}\). We say that \(W\) is a \(p\)_-random subset_ of \([n]\) if each element of \([n]\) is included in \(W\) with probability \(p\) and independently of others. 
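To make the notation concrete, the following brute-force Python sketch (our own helper names, only sensible for toy examples) implements the two set encodings of a partition mentioned in step 1 of Section 2, the restriction \(\mathcal{G}(X)\), and the largest \(r\) for which a given family is \(r\)-spread.

```python
from itertools import combinations

def parts_encoding(partition):
    """Encode a partition as the set of its parts (used for t-intersection)."""
    return frozenset(frozenset(part) for part in partition)

def pairs_encoding(partition):
    """Encode a partition as the set of all pairs lying in a common part
    (used for partial t-intersection)."""
    return frozenset(frozenset(pair)
                     for part in partition
                     for pair in combinations(sorted(part), 2))

def restriction(family, X):
    """The family G(X): the sets A minus X, over all A in the family containing X."""
    X = frozenset(X)
    return {frozenset(A) - X for A in family if X <= frozenset(A)}

def spread_factor(family):
    """Largest r such that the family is r-spread, by brute force.

    Only sets X contained in some member matter: for any other X the
    restriction G(X) is empty and the defining inequality holds trivially.
    """
    family = [frozenset(A) for A in family]
    best = float("inf")
    for A in family:
        for size in range(1, len(A) + 1):
            for X in combinations(A, size):
                ratio = len(family) / len(restriction(family, X))
                best = min(best, ratio ** (1.0 / size))
    return best
```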
The following statement is a variant due to Tao [15] of the breakthrough result that was proved by Alweiss, Lowett, Wu and Zhang [2]. **Theorem 10** ([2], a sharpening due to [15]).: _If for some \(n,r\geq 1\) a family \(\mathcal{F}\subset 2^{[n]}\) is \(r\)-spread and \(W\) is an \((m\delta)\)-random subset of \([n]\), then_ \[\Pr[\exists F\in\mathcal{F}\ :\ F\subset W]\geq 1-\Big{(}\frac{5}{\log_{2}(r \delta)}\Big{)}^{m}\|\mathcal{F}\|.\] Recall that an \(\ell\)_-sunflower_ is a collection of \(\ell\) sets \(F_{1},\ldots,F_{\ell}\) such that for any \(i\neq j\) we have \(F_{i}\cap F_{j}=\cap_{i=1}^{\ell}F_{i}\). (In particular, \(\ell\) pairwise disjoint sets form an \(\ell\)-sunflower.) The set \(\cap F_{i}\) is called the _core_. The theorem implies an important strengthening on the size of the family that guarantees the existence of an \(\ell\)-sunflower. Namely, it implies that any family \(\mathcal{F}\) of \(k\)-sets with \[|\mathcal{F}|>\big{(}C\ell\log_{2}(k\ell)\big{)}^{k} \tag{1}\] contains an \(\ell\)-sunflower, where \(C\) is an absolute constant and can be taken to be \(2^{10}\). To construct spread approximations, we will need the following easy observation. **Observation 11**.: _Given \(r>1\) and a family \(\mathcal{F}\subset 2^{[n]}\), let \(X\) be a maximal set that satisfies \(|\mathcal{F}(X)|\geq r^{-|X|}|\mathcal{F}|\). Then \(\mathcal{F}(X)\) is \(r\)-spread as a family in \(2^{[n]\setminus X}\)._ Proof.: Indeed, for any \(B\supsetneq X\) of size \(b\) we have \[|\mathcal{F}(B)|\leq r^{-b}|\mathcal{F}|\leq r^{-b+|X|}|\mathcal{F}(X)|.\] The next theorem allows to construct low-uniformity approximations for sufficiently spread families. **Theorem 12**.: _Let \(n,k,t\geq 1\) be some integers and \(\mathcal{A}\subset 2^{[n]}\) be a family. Consider a family \(\mathcal{F}\subset\mathcal{A}\cap\binom{[n]}{\leq k}\) that is \(t\)-intersecting. Let \(q,r,r_{0}\geq 1\) satisfy the following: \(r>2^{12}\log_{2}(2k)\), \(r\geq 2q\) and \(r_{0}>r\). Assume that \(\mathcal{A}\) is \(r_{0}\)-spread._ _Then there exists a \(t\)-intersecting family \(\mathcal{S}\) of sets of size at most \(q\) (a spread approximation of \(\mathcal{F}\)) and a'remainder' \(\mathcal{F}^{\prime}\subset\mathcal{F}\) such that_ * _We have_ \(\mathcal{F}\setminus\mathcal{F}^{\prime}\subset\mathcal{A}[\mathcal{S}]\)_;_ * _for any_ \(B\in\mathcal{S}\) _there is a family_ \(\mathcal{F}_{B}\subset\mathcal{F}\) _such that_ \(\mathcal{F}_{B}(B)\) _is_ \(r\)_-spread;_ * \(|\mathcal{F}^{\prime}|\leq(r_{0}/r)^{-q-1}|\mathcal{A}|\)_._ The crucial difference with [11, Theorem 11] is that we only require \(r\)-spreadness from \(\mathcal{A}\), instead of \((r,t)\)-sreadness (which means that any subfamily of the form \(\mathcal{A}(T)\) with \(|T|\leq t\) must be \(r\)-spread). This is crucial in our application to Theorem 8, because the set interpretation of \(\mathcal{U}_{k,\ell}\) is an \(r\)-spread, but not an \((r,t)\)-spread family. The proof of this theorem is essentially the same as in [11]. We present it here for completeness. Proof.: The theorem is obtained using the following procedure. For \(i=1,2,\ldots\) with \(\mathcal{F}^{1}:=\mathcal{F}\) we do the following steps. 1. Find an inclusion-maximal set \(S_{i}\) such that \(|\mathcal{F}^{i}(S_{i})|\geq r^{-|S_{i}||}|\mathcal{F}^{i}|\); 2. If \(|S_{i}|>q\) or \(\mathcal{F}^{i}=\emptyset\) then stop. Otherwise, put \(\mathcal{F}^{i+1}:=\mathcal{F}^{i}\setminus\mathcal{F}^{i}[S_{i}]\). The family \(\mathcal{F}^{i}(S_{i})\) is \(r\)-spread by Observation 11. 
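The procedure above can be run verbatim on small examples. The sketch below is a brute-force Python rendering of steps 1 and 2 (our own function names; the search over candidate sets is exponential, so it is only meant for toy families): it returns the collection \(S_{1},\ldots,S_{m-1}\) together with the remaining sets that play the role of \(\mathcal{F}^{\prime}\).

```python
from itertools import combinations

def max_witness(family, r):
    """A maximum-size set S with |F(S)| >= r^{-|S|} |F|; such a set is in
    particular inclusion-maximal.  The empty set always qualifies."""
    candidates = {frozenset()}
    for A in family:
        for size in range(1, len(A) + 1):
            candidates.update(frozenset(X) for X in combinations(A, size))
    good = [S for S in candidates
            if sum(S <= A for A in family) >= r ** (-len(S)) * len(family)]
    return max(good, key=len)

def spread_approximation(family, r, q):
    """The peeling procedure from the proof: extract witnesses until one is
    larger than q or nothing is left."""
    remaining = [frozenset(A) for A in family]
    approximation = []
    while remaining:
        S = max_witness(remaining, r)
        if len(S) > q:
            break                                            # the rest is the remainder F'
        approximation.append(S)
        remaining = [A for A in remaining if not S <= A]     # F^{i+1} = F^i minus F^i[S_i]
    return approximation, remaining
```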
Let \(m\) be the step of the procedure for \(\mathcal{F}\) at which we stop. Put \(\mathcal{S}:=\{S_{1},\ldots,S_{m-1}\}\). Clearly, \(|S_{i}|\leq q\) for each \(i\in[m-1]\). The family \(\mathcal{F}_{B}\) promised in (ii) is defined to be \(\mathcal{F}^{i}[S_{i}]\) for \(B=S_{i}\). Next, note that if \(\mathcal{F}^{m}\) is non-empty, then \[|\mathcal{F}^{m}|\leq r^{|S_{m}|}|\mathcal{F}^{m}(S_{m})|\leq r^{|S_{m}|}| \mathcal{A}(S_{m})|\leq(r/r_{0})^{|S_{m}|}|\mathcal{A}|,\] where in the last inequality we used the \(r_{0}\)-spreadness of \(\mathcal{A}\). We put \(\mathcal{F}^{\prime}:=\mathcal{F}^{m}\). Since either \(|S_{m}|>q\) or \(\mathcal{F}^{\prime}=\emptyset\), we have \(|\mathcal{F}^{\prime}|\leq(r_{0}/r)^{-q-1}|\mathcal{A}|\). The only thing is left to verify is the \(t\)-intersection property. Take any (not necessarily distinct) \(S_{i},S_{j}\in\mathcal{S}\) and assume that \(|S_{i}\cap S_{j}|<t\). Recall that \(\mathcal{G}_{i}:=\mathcal{F}^{i}(S_{i})\) and \(\mathcal{G}_{j}:=\mathcal{F}^{j}(S_{j})\) are both \(r\)-spread. \[|\mathcal{G}_{j}(\bar{S}_{i})|\geq|\mathcal{G}_{j}|-\sum_{x\in S_{i}\setminus S _{j}}|\mathcal{G}_{j}[\{x\}]|\geq\Big{(}1-\frac{|S_{i}|}{r}\Big{)}|\mathcal{G} _{j}|\geq\frac{1}{2}|\mathcal{G}_{j}|.\] In the last inequality we used that \(|S_{i}|\leq q\) and that \(r\geq 2q.\) The same is valid for \(\mathcal{G}_{i}(\bar{S}_{j})\). Note that both \(\mathcal{G}_{j}^{\prime}:=\mathcal{G}_{j}(\bar{S}_{i})\) and \(\mathcal{G}_{i}^{\prime}:=\mathcal{G}_{i}(\bar{S}_{j})\) are subfamilies of \(2^{[n]\setminus(S_{i}\cup S_{j})}.\) Because of the last displayed inequality and the trivial inclusion \(\mathcal{G}_{j}^{\prime}(Y)\subset\mathcal{G}_{j}(Y)\), valid for any \(Y\), both \(\mathcal{G}_{i}^{\prime}\) and \(\mathcal{G}_{j}^{\prime}\) are \(\frac{r}{2}\)-spread, where \(\frac{r}{2}>2^{11}\log_{2}(2k)\). We are about to apply Theorem 10. Let us put \(m=\log_{2}(2k)\), \(r/2\) playing the role of \(r\), and \(\delta=(2\log_{2}(2k))^{-1}\). Note that \(m\delta=\frac{1}{2}\) and \(\frac{r}{2}\delta>2^{10}\) by our choice of \(r\). Theorem 10 implies that a \(\frac{1}{2}\)-random subset \(W\) of \([n]\setminus(S_{i}\cup S_{j})\) contains a set from \(\mathcal{G}_{j}^{\prime}\) with probability strictly bigger than \[1-\Big{(}\frac{5}{\log_{2}2^{10}}\Big{)}^{\log_{2}2k}k=1-2^{-\log_{2}2k}k= \frac{1}{2}.\] Consider a random partition of \([n]\setminus(S_{i}\cup S_{j})\) into \(2\) parts \(U_{i},U_{j}\), including each element with probability \(1/2\) in each of the parts. Then both \(U_{\ell}\), \(\ell\in\{i,j\}\), are distributed as \(W\) above. Thus, the probability that there is \(F_{\ell}\in\mathcal{G}_{\ell}^{\prime}\) such that \(F_{\ell}\subset U_{\ell}\) is strictly bigger than \(\frac{1}{2}\). Using the union bound, we conclude that, with positive probability, it holds that there are such \(F_{\ell}\), \(F_{\ell}\subset U_{\ell}\), for each \(\ell\in\{i,j\}\). Fix such choices of \(U_{\ell}\) and \(F_{\ell}\), \(\ell\in\{i,j\}\). Then, on the one hand, both \(F_{i}\cup S_{i}\) and \(F_{j}\cup S_{j}\) belong to \(\mathcal{F}\) and, on the other hand, \(|(F_{i}\cup S_{i})\cap(F_{j}\cup S_{j})|=|S_{i}\cap S_{j}|<t\), a contradiction with \(\mathcal{F}\) being \(t\)-intersecting. An important second step is to show that the approximation family \(\mathcal{S}\) is trivial for an extremal family. 
In [11, Theorem 12], the authors worked with \((r,t)\)-spread families, which we cannot afford in the proof of Theorem 8 for the family \(\mathcal{U}_{k,\ell}\), and thus have to find ways around it. Let us say that a family \(\mathcal{A}\subset 2^{[n]}\) is _weakly \((r,t)\)-spread_ if there exists a set \(T\subset[n]\) of size \(t\) such that for any nonnegative integer \(s\) and a set \(U\subset[n]\) of size \(t+s\) we have \(|\mathcal{A}(U)|\leq r^{-s}|\mathcal{A}(T)|\). Informally speaking, this akin to \(r\)-spreadness for the family \(\mathcal{A}(T)\), where \(T\) of size \(t\) is chosen so that \(|\mathcal{A}(T)|\) is maximal. Recall that a \(t\)-intersecting family \(\mathcal{S}\) is _non-trivial_ if \(|\cap_{F\in\mathcal{S}}F|<t\). **Theorem 13**.: _Let \(\varepsilon\in(0,1]\), \(n,r,q,t\geq 1\) be such that \(\varepsilon r\geq 2^{17}q\log_{2}q\). Let \(\mathcal{A}\subset 2^{[n]}\) be a weakly \((r,t)\)-spread family and let \(\mathcal{S}\subset\binom{[n]}{\leq q}\) be a non-trivial \(t\)-intersecting family. Then there exists a \(t\)-element set \(T\) such that \(|\mathcal{A}[\mathcal{S}]|\leq\varepsilon|\mathcal{A}[T]|\)._ The proof again follows closely the proof of [11, Theorem 12], with a few changes. Again, we present it here in full for completeness. In what follows, assume that \(T\) is a set of size \(t\) that maximizes \(|\mathcal{A}(T)|\). For the proof, we will need the following simple observation. **Observation 14**.: _For any positive integers \(n,p\), a family \(\mathcal{A}\subset 2^{[n]}\) and a \(t\)-intersecting family \(\mathcal{S}\subset\binom{[n]}{\leq p}\) there exists a \(t\)-intersecting family \(\mathcal{T}\subset\binom{[n]}{\leq p}\) such that \(\mathcal{A}[\mathcal{S}]\subset\mathcal{A}[\mathcal{T}]\) and for any \(T\in\mathcal{T}\) and any proper subset \(X\subsetneq T\) there exists \(T^{\prime}\in\mathcal{T}\) such that \(|X\cap T^{\prime}|<t\)._ One natural way to choose such \(\mathcal{T}\) is to repeatedly replace sets in \(\mathcal{S}\) by their proper subsets while preserving the \(t\)-intersecting property. In terms of Theorem 13, let us iteratively define the following series of families. 1. Let \(\mathcal{T}_{0}\) be a family given by Observation 14 when applied to \(\mathcal{A}\) and \(\mathcal{S}\) with \(p=q\). 2. For \(i=0,\ldots,q-t\) we put \(\mathcal{W}_{i}=\mathcal{T}_{i}\cap\binom{[n]}{q-i}\) and let \(\mathcal{T}_{i+1}\) be the family given by Observation 14 when applied to the families \(\mathcal{A}\) (playing the role of \(\mathcal{A}\)) and \(\mathcal{T}_{i}\setminus\mathcal{W}_{i}\) playing the role of \(\mathcal{S}\) with \(p=q-i-1\). Remark that \(\mathcal{T}_{i}\) is \(t\)-intersecting for each \(i=0,\ldots,q-t\) by definition. We summarize the properties of these series of families in the following lemma. **Lemma 15**.: _The following properties hold for each \(i=0,\ldots,q-t\)._ 1. _All sets in_ \(\mathcal{T}_{i}\) _have size at most_ \(q-i\)_._ 2. _We have_ \(\mathcal{A}[\mathcal{T}_{i-1}]\subset\mathcal{A}[\mathcal{T}_{i}]\cup \mathcal{A}[\mathcal{W}_{i-1}]\)_._ 3. _The family_ \(\mathcal{T}_{i}\) _does not have a sunflower with_ \(q-i-t+2\) _petals._ 4. _We have_ \(|\mathcal{W}_{i}|\leq(C_{0}q\log_{2}q)^{q-i-t}\) _with some absolute constant_ \(C_{0}<2^{15}\)_._ 5. 
_If_ \(\mathcal{T}_{i}\) _consists of a single_ \(t\)_-element set_ \(X\) _and this is not the case for_ \(\mathcal{T}_{i-1}\) _then_ \(|\mathcal{A}[\mathcal{T}_{i-1}\setminus\mathcal{W}_{i-1}]|\leq\frac{q}{r}| \mathcal{A}[T]|\)_._ The lemma is essentially identical to [11, Lemma 14], with a difference in (v), where we replace a concrete \(X\) with a "universal" \(T\) on the right-hand side. Proof.: (i) This easily follows by induction on \(i\) from the fact that all sets in \(\mathcal{S}\) have size at most \(q\) and the definition of \(\mathcal{T}_{i}\). (ii) We have \(\mathcal{A}[\mathcal{T}_{i-1}]=\mathcal{A}[\mathcal{T}_{i-1}\setminus\mathcal{ W}_{i-1}]\cup\mathcal{A}[\mathcal{W}_{i-1}]\) and, by the definition of \(\mathcal{T}_{i}\), we have \(\mathcal{A}[\mathcal{T}_{i}]\supset\mathcal{A}[\mathcal{T}_{i-1}\setminus \mathcal{W}_{i-1}]\). (iii) Assume there is a sunflower \(T_{1},\ldots,T_{q-i-t+2}\in\mathcal{T}_{i}\) with core \(X\). Assume that a set \(T^{\prime}\in\mathcal{T}_{i}\) intersects \(X\) in \(t-j\) elements, \(j>0\). Then \(T^{\prime}\) intersects each \(T_{\ell}\) in at least \(j\) elements, implying \(|T^{\prime}|\geq t-j+(q-i-t+2)j=t+(q-i-t+1)j\geq q-i+1\), a contradiction with the fact that all sets in \(\mathcal{T}_{i}\) have size at most \(q-i\). So any \(T^{\prime}\in\mathcal{T}_{i}\) must intersect \(X\) in at least \(t\) elements. This, however, contradicts the property of \(\mathcal{T}_{i}\) guaranteed by Observation 14: the set \(X\subsetneq T_{j}\) intersects all sets from \(\mathcal{T}_{i}\) in at least \(t\) elements. (iv) This is trivial for \(i=q-t\) since \(\mathcal{T}_{q-t}\) contains at most \(1\) set. In what follows, we assume that \(i<q-t\). Take any set \(Y\in\mathcal{W}_{i}\). Since \(\mathcal{T}_{i}\) is \(t\)-intersecting, there is a \(t\)-element subset \(X\subset Y\) such that \(|\mathcal{W}_{i}|\leq\binom{q-i}{t}|\mathcal{W}_{i}(X)|=\binom{q-i}{q-i-t}| \mathcal{W}_{i}(X)|\). Next, \(\mathcal{W}_{i}(X)\) is \((q-i-t)\)-uniform and does not contain sunflowers with \((q-i-t+2)\) petals by (iii). From (1) we conclude that, for an absolute constant \(C=2^{10}\), \[|\mathcal{W}_{i}|\leq \binom{q-i}{q-i-t}|\mathcal{W}_{i}(X)|\leq\binom{q-i}{q-i-t} \big{(}C(q-i-t+2)\log_{2}\big{(}(q-i-t+2)(q-i-t)\big{)}\big{)}^{q-i-t}\] \[\leq \Big{(}\frac{e(q-i)}{q-i-t}\Big{)}^{q-i-t}\big{(}6C(q-i-t)\log_{2 }q\big{)}^{q-i-t}\leq(20Cq\log_{2}q)^{q-i-t}.\] (v) Let us assume that \(\mathcal{T}_{i}=\{X\}\) for some \(t\)-element set \(X\). Note that all sets in \(\mathcal{T}_{i-1}\) have size at least \(t+1\). Otherwise, if there is \(T\in\mathcal{T}_{i-1}\) of size \(t\) then \(T\) is a proper subset of all other sets from \(\mathcal{T}_{i-1}\), which contradicts the property of \(\mathcal{T}_{i-1}\) guaranteed by Observation 14. Thus, the sets in \(\mathcal{T}^{\prime}:=\mathcal{T}_{i-1}\setminus\mathcal{W}_{i-1}\), if any, have size at least \(t+1\) and all contain \(X\). Recall that, for a family \(\mathcal{F}\), \(\tau(\mathcal{F})\) is the size of the smallest set \(Y\) such that \(Y\cap F\neq\emptyset\) for each \(F\in\mathcal{F}\). Assume that \(\tau(\mathcal{T}^{\prime}(X))>q\). Each set in \(\mathcal{W}_{i-1}\) either contains \(X\) or intersects every set from \(\mathcal{T}^{\prime}(X)\). In the latter case, it has size at least \(\tau(\mathcal{T}^{\prime}(X))\), which is impossible because each set in \(W_{i-1}\) has size at most \(q\). 
Thus, all sets from \(\mathcal{W}_{i-1}\) contain \(X\), implying that all sets from \(T_{i-1}\) contain \(X\), a contradiction. Therefore, \(\tau(\mathcal{T}^{\prime}(X))\leq q\). If \(\{x_{1},\ldots,x_{q}\}\) is a covering of \(\mathcal{T}^{\prime}(X)\) then we have \[|\mathcal{A}[\mathcal{T}^{\prime}]|\leq|\mathcal{A}[X\cup\{x_{1}\}]|+\ldots+| \mathcal{A}[X\cup\{x_{q}\}]|\leq\frac{q}{r}|\mathcal{A}[T]|,\] where in the last inequality we used the definition of \(T\) and the weak \((r,t)\)-spreadness. Proof of Theorem 13.: Fix \(i\) as in Lemma 15 (v). Note that by (i) such a choice always exists. Let \(T\) be a \(t\)-element set such that \(|\mathcal{A}[T]|\) is the largest possible. By weak \((r,t)\)-spreadness, for any \(j<i\) and any \(W\in\mathcal{W}_{j}\) we have \(|\mathcal{A}[W]|\leq r^{-(q-j-t)}|\mathcal{A}[T]|\). By (iv) and a union bound, we get \(|\mathcal{A}[\mathcal{W}_{j}]|\leq r^{-(q-j-t)}(C_{0}q\log_{2}q)^{(q-j-t)}| \mathcal{A}[T]|\). Using this and (v) we obtain \[|\mathcal{A}[\mathcal{S}]|\stackrel{{(ii)}}{{\leq}}| \mathcal{A}[\mathcal{T}_{i-1}]|+\sum_{j=0}^{i-1}|\mathcal{A}[\mathcal{W}_{j}]| \stackrel{{(iv),(v)}}{{\leq}}\Big{(}\frac{q}{r}+\sum_{j=1}^{\infty }r^{-j}(C_{0}q\log_{2}q)^{j}\Big{)}|\mathcal{A}[T]|\] \[\leq \Big{(}\frac{\epsilon}{2}+\sum_{j=1}^{\infty}\big{(}\frac{ \epsilon}{4}\big{)}^{j}\Big{)}|\mathcal{A}[T]|\leq\varepsilon|\mathcal{A}[T]|,\] where in the third inequality we used the condition on \(r\) and the bound on \(C_{0}\). ## 4. Proofs ### General partitions. Proof of Theorem 2 **Lemma 16**.: _For any \(n\geq 2\) we have \(\frac{B_{n+1}}{B_{n}}\geq\frac{n}{2\log_{\varepsilon}n}\)._ Proof.: We use the following remarkable explicit formula for \(B_{n}\) (see [16]): \[B_{n}=\frac{1}{e}\sum_{s=0}^{\infty}\frac{s^{n}}{s!}. \tag{2}\] Let us compare the \((s-1)\)-th and \(s\)-th terms in the summation. We have \[\frac{\frac{(s-1)^{n}}{(s-1)!}}{\frac{s^{n}}{s!}}=s\Big{(}1-\frac{1}{s}\Big{)}^ {n}\leq se^{-n/s}<e^{-1}, \tag{3}\] provided \(\frac{n}{s}\geq\log_{e}s+1\), that is, \(n\geq s+s\log_{e}s\). Let us denote \(s_{0}(n)\) the largest integer that satisfies this inequality, and note that \(s_{0}(n)\geq\frac{n}{\log_{e}n}\) for \(n\geq 16\). (It is easy to verify by direct substitution, using that \(e^{e}<16\).) Using (3), we conclude that the terms in (2) grow faster than a geometric progression with base \(e\) until at least \(s_{0}\), and so, using the formula for the sum of a geometric progression, we get \(\sum_{s=0}^{s_{0}-1}\frac{s^{n}}{s!}\leq\frac{1}{e-1}\frac{s_{0}^{n}}{s_{0}!}.\) We can thus bound \(B_{n}\) as follows: \[B_{n}\leq\frac{e}{e-1}\sum_{s=s_{0}}^{\infty}\frac{s^{n}}{s!}.\] (We note that, of course, with a bit more care and for relatively large \(n\), the fraction in front should be essentially \(1\).) Let us now bound the ratio. \[\frac{B_{n+1}}{B_{n}}\geq\frac{\sum_{s=s_{0}(n)}^{\infty}\frac{s^{n+1}}{s!}}{ \frac{e}{e-1}\sum_{s=s_{0}(n)}^{\infty}\frac{s^{n}}{s!}}\geq\frac{s_{0}(e-1)} {e}\geq\frac{n}{2\log_{e}n}.\] This completes the proof for \(n\geq 16\). We have \(B_{n}\geq 2B_{n-1}\) since, for each partition \(P\) of \([n-1]\), element \(n\) can be either made a separate part or adjoined to one of the parts in \(P\). Similarly, \(B_{n}\geq 3B_{n-1}\) for \(n\geq 5\) because for each partition \(P\) but the single partition with \(1\) part it can be extended by \(n\) in at least \(3\) ways and, moreover, there are partitions that can be extended in at least \(4\) ways. 
We are left to note that \(n/(2\log_{e}n)\leq 2\) for \(2\leq n\leq 8\) and \(n/(2\log_{e}n)\leq 3\) for \(9\leq n\leq 15\). Given a partition \(P=(P_{1},\ldots,P_{\ell})\in\mathcal{P}_{n}\), we denote by \(\mathcal{D}_{n}\langle P\rangle\subset\mathcal{B}_{n}\) the family of all partitions from \(\mathcal{B}_{n}\) that do not share any part with \(P\) (\(P\)-derangements), that is, if \(P_{i}\) is not a part in any partition from \(\mathcal{D}_{n}\langle P\rangle\) for any \(i\). **Lemma 17**.: _For \(n\geq 18\) and any partition \(P\in\mathcal{B}_{n}\) we have \(|\mathcal{D}_{n}\langle P\rangle|\geq\frac{1}{9}n^{-3}|\mathcal{B}_{n}|\)._ Following our approach with more care, the bound can be improved to \(|\mathcal{D}_{n}\langle P\rangle|\geq Cn^{-1-\epsilon}|\mathcal{B}_{n}|\) for any \(\epsilon>0\), provided \(n\geq n_{0}(\epsilon)\) is large enough. Proof.: The first step, inspired by a compression operation from Ku and Renshaw [12], is to show that \(|\mathcal{D}_{n}\langle P\rangle|\) is minimized for a partition that consists of \(n\) singletons. To do so, for any given partition \(P\) that has a part \(P_{i}\) with \(|P_{i}|\geq 2\), we introduce a new partition \(P^{\prime}\) that is the same as \(P\) except it replaces part \(P_{i}\) with two parts \(Q_{i},Q_{i}^{\prime}\), where these are non-empty disjoint sets such that \(Q_{i}\sqcup Q_{i}^{\prime}=P_{i}\). Take any partition \(U\in\mathcal{D}_{n}\langle P^{\prime}\rangle\setminus\mathcal{D}_{n}\langle P\rangle\). Partitions \(P\) and \(P^{\prime}\) mostly coincide, and the only part that the former has and the latter does not is \(P_{i}\). Thus, \(U\) must contain \(P_{i}\) as a part. Define \(f(U)\) to be a partition that coincides with \(U\) except it replaces \(P_{i}\) with \(Q_{i},Q^{\prime}_{i}\). Then \(f(U)\in\mathcal{D}_{n}\langle P\rangle\,\backslash\,\mathcal{D}_{n}\langle P^{ \prime}\rangle\). Moreover, \(f\) is an injection. Thus, \(|\mathcal{D}_{n}\langle P^{\prime}\rangle|\leq|\mathcal{D}_{n}\langle P\rangle|\). Repeatedly applying the splitting operation, we arrive at the all-singleton partition \(S\), for which the number of derangements is thus minimized. Following [12], we denote \[\tilde{B_{n}}=|\mathcal{D}_{n}\langle S\rangle|.\] Note that \(\mathcal{D}_{n}\langle S\rangle\) is the family of all partitions with parts of size \(\geq 2\). We have the following recurrence relations: \[B_{n+1}= \sum_{i=0}^{n}\binom{n}{i}B_{i}\] \[\tilde{B}_{n+1}= \sum_{i=0}^{n-1}\binom{n}{i}\tilde{B}_{i}.\] Let us prove by induction on \(s\geq 2\) that for any \(n\geq s\) we have \[\tilde{B}_{s}\geq\Big{(}1-\frac{2\log_{e}n}{n}\Big{)}^{s}\prod_{i=2}^{s-1} \frac{1}{2}\Big{(}1-\frac{2i+1}{3^{i}}\Big{)}\cdot B_{s}. \tag{4}\] In what follows, we will use that \[\frac{\sum_{i=0}^{n}\binom{n}{i}B_{i}}{\sum_{i=2}^{n}\binom{n}{i}B_{i}}\geq \frac{3^{n}}{3^{n}-2n-1},\] because \(B_{i}\geq 2B_{i^{\prime}}\) if \(i>i^{\prime}\). Fix some \(n\geq 2\). Returning to (4), it is true for \(s=2\) because \(\tilde{B}_{2}=1\) and \(B_{2}=2\). Next, for any \(n\geq s+1\) by induction and using Lemma 16 we have \[\tilde{B}_{s+1}=\sum_{i=0}^{s-1}\binom{s}{i}\tilde{B}_{i}\geq\sum_{i=2}^{s-1} \binom{s}{i}\Big{(}1-\frac{2\log_{e}n}{n}\Big{)}^{i}\frac{1}{2}\prod_{j=2}^{i -1}\Big{(}1-\frac{2j+1}{3^{j}}\Big{)}B_{i}\geq\] This proves (4). Let us now obtain a bound on \(\tilde{B}_{n}\). 
First, we note that \(\prod_{i=2}^{s}\Big{(}1-\frac{2i+1}{3^{i}}\Big{)}\geq\frac{4}{9}\big{(}1-\sum _{i=3}^{s}\frac{2i+1}{3^{i}}\big{)}\geq\frac{4}{9}(1-\frac{7}{27}-2\cdot\frac{ 9}{81})\geq\frac{2}{9}\). Thus, we get \[\tilde{B}_{n}\geq\frac{1}{9}\Big{(}1-\frac{2\log_{e}n}{n}\Big{)}^{n}B_{n}\geq \frac{1}{9}e^{-\frac{2n\log_{e}n}{n-2\log_{e}n}}B_{n}\geq\frac{1}{9}e^{-3\log _{e}n}B_{n}\geq Cn^{-3}B_{n},\] where in the second to last inequality holds provided \(n\geq 6\log_{e}n\), which is true for \(n\geq 18\). We used the bound \((1-\frac{1}{x})=(1+\frac{1}{x-1})^{-1}\geq e^{-\frac{1}{x-1}}\) Proof of Theorem 2.: We give the following set interpretation to \(\mathcal{B}_{n}\). Consider the ground set \(2^{[n]}\), and let \(X\in\mathcal{B}_{n}\) be mapped into a \(\leq n\)-element set \(A\) on \(2^{[n]}\), where each element of \(A\) corresponds to a part from the partition \(X\). In what follows, we think of \(\mathcal{B}_{n}\) as a family of sets. Note that, using Lemma 16, for any set \(X\in{2^{[n]}\choose s}\), \(s\leq n\), such that \(\mathcal{B}_{n}(X)\) is non-empty, we get \[\Big{(}\frac{|\mathcal{B}_{n}(X)|}{|\mathcal{B}_{n}|}\Big{)}^{1/s}\geq\Big{(} \frac{B_{n-s}}{B_{n}}\Big{)}^{1/s}\geq\Big{(}\prod_{i=n-s}^{n-1}\frac{i}{2\log _{e}i}\Big{)}^{1/s}\geq\Big{(}\frac{(n-1)!}{(2\log_{e}(n-1))^{n-1}}\Big{)}^{1/ (n-1)}\geq\] \[\frac{n/e}{2\log_{e}(n-1)}\geq\frac{n}{6\log_{e}n},\] where for the last inequality we need \(n\) to be somewhat large (e.g., \(n\geq 50\) is sufficent). From the above, we get that the family \(\mathcal{B}_{n}\) is \(r_{0}=\frac{n}{6\log_{e}n}\) spread. Moreover, it is weakly \((\frac{n}{12\log_{e}n},t)\)-spread for any \(t\leq n/2\). Indeed, it is sufficient to consider the number of all partitions that fix \(t\) singletons (there are \(B_{n-t}\) of them) and compare that with the number of partitions that fix some \(t+s\) parts, \(s\geq 1\) (there are at most \(B_{n-t-s}\) of those for any choice of the parts to fix). To bound the ratio, we reuse the bounds displayed above. We are now ready to prove the theorem. Take a \(t\)-intersecting family \(\mathcal{F}\subset\mathcal{B}_{n}\). First, we apply Theorem 12 with \(r=r_{0}/2\) and \(q=2^{-23}\frac{n}{\log_{2}^{2}n}\). We get a \(t\)-intersecting family \(\mathcal{S}\) of sets of size at most \(q\) (corresponding to collections of \(q\) pairwise disjoint sets) and a remainder family \(\mathcal{F}^{\prime}\) such that \[|\mathcal{F}^{\prime}|\leq 2^{-q-1}B_{n}\leq n^{-q/\log_{2}n}B_{n}\leq n^{-t-4}B_ {n}\leq n^{-4}B_{n-t}.\] Here we used a bound \(B_{n}/B_{n-1}\leq n\), which is valid since any partition of \(n-1\) elements can be prolonged in at most \(n\) ways to a partition on \(n\) elements, as well as the bound \(t\leq\frac{q}{2\log_{2}n}\), valid for our choice of parameters. Next, we apply Theorem 13 to the family \(\mathcal{S}\) with \(\epsilon=1/2\). Due to the weak \((r_{0}/2,t)\)-spreadness of \(\mathcal{B}_{n}\), it is easy to check that the inequality on the parameters is satisfied. We conclude that either \(|\mathcal{F}|\leq\frac{1}{2}B_{n-t}+|\mathcal{F}^{\prime}|\leq 0.6B_{n-t}\), or \(\mathcal{S}\) consists of one \(t\)-element set \(S\). Moreover, in order for \(\mathcal{F}\) to be maximal this set in the partition language must clearly correspond to a collection of \(t\) singletons. Finally, let us show that \(\mathcal{F}^{\prime}\setminus\mathcal{B}_{n}[S]\) must be empty. Indeed, assume \(P\in\mathcal{F}^{\prime}\setminus\mathcal{B}_{n}[S]\). 
Then, by Lemma 17, the number of partitions in \(\mathcal{B}_{n}(S)\) that are derangements with respect to \(P\) (induced on the complement of \(S\) in the partition language), is at least \(\frac{1}{9}n^{-3}B_{n-t}\), which is larger than \(|\mathcal{F}^{\prime}|\) by the last displayed formula. We get that if \(\mathcal{F}^{\prime}\) is non-empty, then \(\mathcal{F}\) cannot be extremal. This completes the proof of the theorem. ### Partitions with \(k\) parts Proof of Theorem 4.: We will need the following relation between Stirling coefficients of the second kind. **Lemma 18**.: _For each \(n\geq 1+2\ell\log_{2}n\), \(\ell\geq 2\) we have \(\begin{cases}n\\ \ell\end{cases}\geq n^{2}\begin{cases}n-1\\ \ell-1\end{cases}\)._ Proof.: Stirling numbers obey the following recurrence relation: \[\genfrac{\{}{\}{0.0pt}{}{n}{\ell}}{\ell}=\genfrac{\{}{\}{0.0pt}{}{n-1}{\ell-1}}{ \ell}+\ell\genfrac{\{}{\}{0.0pt}{}{n-1}{\ell}}{n-1}.\] From here, we see that \(\genfrac{\{}{\}{0.0pt}{}{n}{\ell}}{\ell}\geq\ell\genfrac{\{}{\}{0.0pt}{}{n-1}{ \ell}}{n-1}\). Next, let us compare \(\genfrac{\{}{\}{0.0pt}{}{n-1}{\ell}}{\ell}\) and \(\genfrac{\{}{\}{0.0pt}{}{n-1}{\ell-1}}{\ell-1}\). Consider a bipartite graph between \(\ell\)-partitions and \((\ell-1)\)-partitions of \([n-1]\), where two partitions are connected by an edge if one is a refinement of the other. Let us count the degrees in this graph. The neighbors of a \(\ell\)-partition \(X\) are the \((\ell-1)\)-partitions obtained by merging two parts of \(X\). Thus, the degree of \(X\) is \(\genfrac{\ell}{{\}{0.0pt}{}{\ell}}{2}\). The neighbors of an \((\ell-1)\)-partition \(Y\) are those that can be obtained by subdividing one of the parts of \(Y\). Assume that the parts \(Y_{1},\ldots,Y_{\ell}\) have sizes \(k_{1},\ldots,k_{\ell}\), \(\sum_{i=1}^{\ell}k_{i}=n\). There are \(2^{\ell_{i}-1}\) ways to subdivide \(Y_{i}\). Thus, the degree of \(Y\) is \(\sum_{i=1}^{\ell}2^{k_{i}-1}\geq\ell 2^{-1+(n-1)/\ell}\) by convexity. The last expression is at least \(\ell n^{2}/2\) by our assumption on \(n\). Concluding, the degree of any element in the \(\ell\)-partitions part is \(\geq n^{2}/\ell\) times smaller than the degree of any element in the \((\ell-1)\)-partitions part. Double counting the number of edges, we get that \(\genfrac{\{}{\}{0.0pt}{}{n-1}{\ell}}{\ell}\geq n^{2}/\ell\genfrac{\{}{\}{0.0pt }{}{n-1}{\ell-1}}{\ell-1}\) in our assumptions. Combining it with the displayed formula, we get that \(\genfrac{\{}{\}{0.0pt}{}{n}{\ell}}{\ell}\geq n^{2}\genfrac{\{}{\}{0.0pt}{}{n-1 }{\ell-1}}{\ell-1}\). Proof of Theorem 4.: We interpret \(\mathcal{P}_{n}^{\ell}\) as a subfamily of \(\mathcal{B}_{n}\). That is, the ground set is \(2^{[n]}\), and each \(X\in\mathcal{P}_{n}^{\ell}\) is mapped into an \(\ell\)-element set on \(2^{[n]}\), where each element of the set corresponds to a part from the partition \(X\). In what follows, we think of \(\mathcal{P}_{n}^{\ell}\) as a family of \(\ell\)-element sets. Let us derive that \(\mathcal{P}_{n}^{\ell}\) is weakly \((\frac{n^{2}}{2},t)\)-spread for each \(t\leq\ell-2\). For the set \(T\) in the definition of a weakly spread partition take a collection of \(t\) distinct singletons. The number of \(\ell\)-partitions extending it is \(\genfrac{\{}{\}{0.0pt}{}{n-t}{\ell-t}}{\ell-t}\). The number of partitions with any \(t+s\) fixed parts is at most \(\genfrac{\{}{\}{0.0pt}{}{n-t-s}{\ell-t-s}}{\ell-t-s}\). 
Using Lemma 18, we have \[\genfrac{\{}{\}{0.0pt}{}{n-t}{\ell-t}}{\ell-t}/\genfrac{\{}{\}{0.0pt}{}{n-t-s }{\ell-t-s}}{\ell-t-s}\geq(n-t-s)^{2s}\geq(n^{2}/2)^{s},\] provided \(\ell-t-s\geq 1\). If \(\ell-t-s=0\) then we combine it with the bound \(\genfrac{\{}{\}{0.0pt}{}{n-t-s+2}}{2}=2^{n-t-s+1}\geq 2^{n/2}\geq n^{4}=n^{4} \genfrac{\{}{\}{0.0pt}{}{n-t-s}{\ell-t-s}}{\ell-t-s}\) (the last inequality is valid due to our choice of \(n\)). Thus, the last displayed inequality is always true, and we conclude that \(\mathcal{P}_{n}^{\ell}\) is weakly \((\frac{n^{2}}{2},t)\)-spread for each \(t\leq\ell-2\). Apply Theorem 13 to \(\mathcal{F}\) with \(\varepsilon=1/2\), \(\ell\) playing the role of \(q\) and \(r=n^{2}/2\). The inequality clearly holds for large enough \(n\) satisfying \(n\geq 2\ell\log_{2}n\) (in particular, \(n>2^{18}\) is enough). We conclude that either \(\mathcal{F}\) is a family of partitions extending a fixed set of \(t\) singletons, or \(|\mathcal{F}|\leq\frac{1}{2}\!\left\{\!\!\!\begin{array}{c}n-t\\ \ell-t\end{array}\!\!\!\right\}\). This concludes the proof of the theorem. ### Profiled \(t\)-intersecting partitions Proof of Theorem 7.: For this proof, we also interpret \(\mathcal{U}_{P}\) as a subfamily of the family corresponding to \(\mathcal{B}_{n}\), that is, as an \(\ell\)-uniform family on the ground set \(2^{[\ell]}\). (We have \(\mathcal{U}_{P}\subset{2^{[\ell]}\choose\ell}\).) In this setup, we also can directly apply Theorem 13 with \(\epsilon=1/2\) and get the desired conclusion, provided that we can show that \(\mathcal{U}_{k,\ell}\) is a weakly \((r,t)\)-spread family with \(r\geq 2^{17}\ell\log_{2}\ell\). We will show this below. Let \(a_{t}\) be the number of partitions that contain fixed parts of sizes \(k_{1},\ldots,k_{t}\), and let \(b_{U}\) be the number of partitions that contain fixed parts of sizes \(k_{i},i\in U\). Note that, for any \(w\) and \(U\) with \(|U|=w\) we have \(a_{w}\geq b_{U}\). Let \(U\) be a subset of \([\ell]\) of size \(t+s\). For shorthand, let us denote \(n_{j}=\sum_{i=j+1}^{\ell}k_{i}\) and note that for \(j>j^{\prime}\) we have \(\frac{n_{j^{\prime}}}{n_{j}}\geq\frac{\ell-j^{\prime}}{\ell-j}\). Also note that in our assumptions we have \(n_{t}\geq k_{t+1}(\ell-t)\geq\max\{\ell,2s\}\). \[\frac{a_{t}}{b_{U}}\geq\frac{a_{t}}{a_{t+s}}=\frac{\frac{n_{t}!}{\prod_{i=t+1}^ {\ell}k_{i}!}}{\frac{n_{t+s}!}{\prod_{i=t+s+1}^{t}k_{i}!}}=\frac{n_{t}!}{n_{t+s }!\prod_{i=t+1}^{t+s}k_{i}!}\geq\frac{\prod_{i=n_{t+s}+1}^{n_{t}}i}{2^{s-1}((n_ {t}-n_{t+s}-2(s-1))!}.\] In the last inequality, to bound the product of factorials in the denominator we used that for \(k\geq k^{\prime}\) we have \(k!k^{\prime}!\leq(k+1)!(k-1)!\), as well as that \(k_{i}\geq 2\) for \(i\geq t+1\). We iteratively applied it to show that, in the conditions \(k_{i}\geq 2\) and \(\sum_{i=t+1}^{t+s}k_{i}=n_{t}-n_{t+s}\), the denominator is maximized when the first \(s-1\)\(k_{i}\)'s are equal to \(2\). Below, we use that \(\prod_{i=n_{t}-2s+3}^{n_{t}}i=(n_{t})!/(n_{t}-2s+2)!\geq((n_{t})!)^{(2s+2)/n_ {t}}\geq(n_{t}/e)^{2s-2}\) and that \(n_{t}\geq\ell\). The last displayed expression is equal to \[\frac{\prod_{i=n_{t}-2s+3}^{n_{t}}i}{2^{s-1}}\frac{\prod_{i=n_{t+s}+1}^{n_{t}- 2s+2}i}{((n_{t}-n_{t+s}-2(s-1))!}\geq\frac{(n_{t}/e)^{2s-2}}{2^{s-1}}{n_{t}-2s +2\choose n_{t+s}}\geq(\ell/6)^{2s-2}{n_{t}-2s+2\choose n_{t+s}}.\] If \(s\geq\ell/4\), then \((\ell/6)^{2s-2}\geq(c\ell)^{2s}\) for some constant \(c>0\). If \(s<\ell/4\) then \({n_{t}-2s+2\choose n_{t+s}}>{\ell/2\choose 2}\). 
In any case, we can conclude that the RHS of the last displayed inequality is at least \((c\ell)^{2s}\) for some constant \(c>0\). We get that the family \(\mathcal{U}_{P}\) is weakly \((c\ell^{2},t)\)-spread. Since \(c\ell^{2}>2^{17}\ell\log_{2}\ell\) for large enough \(\ell\), this is sufficient for our application of Theorem 13. ### Partially \(t\)-intersecting partitions Proof of Theorem 8.: In what follows, we assume that \(k\geq 3\). First, we interpret \(\mathcal{U}_{k,\ell}\) as a family of sets. Put \(N:={k\ell\choose 2}\) and correspond to each partition \(P\) in \(\mathcal{U}_{k,\ell}\) the set of all pairs of elements \(x_{1},x_{2}\in[k\ell]\) such that \(x_{1}\) and \(x_{2}\) belong to the same part in \(P\). Note that, in graph terms, \(P\) is a collection of \(\ell\)\(k\)-cliques. As a set, \(P\) has size \(\ell{k\choose 2}\). In what follows, we work with partitions in this set form. In particular, we think of \(\mathcal{U}_{k,\ell}\) as a family in \({|N|\choose k}\). We are next going to show that \(\mathcal{U}_{k,\ell}\) is sufficiently spread, provided \(\ell\) is large enough. Consider a collection \(X=\{X_{1},\ldots,X_{r}\}\) of disjoint sets in \([k\ell]\), such that \(2\leq|X_{i}|\leq k\). Put \(m(X)=\sum_{i}(|X_{i}|-1)\). We say that a \((k,\ell)\)-partition \(P=(P_{1},\ldots,P_{\ell})\)_extends_\(X\) if for each \(i\) there is \(j\) such that \(X_{i}\subset P_{j}\). Assume first that \(m(X)\leq k\ell/3\). We claim that the number \(u_{k,\ell}(X)\) of partitions that extend \(X\) is at most \(\left(\frac{9}{\ell}\right)^{m}u_{k,\ell}.\) Indeed, we have the following bound: \[\frac{u_{k,\ell}(X)}{u_{k,\ell}}=\frac{\frac{(k\ell-m(X)-r)!}{(\ell-r)!(k!)^{ \ell-r}\prod_{i=1}^{r}(k-|X_{i}|)!}}{\frac{(k\ell)!}{\ell!(k!)^{\ell}}}\leq\] \[\frac{k^{m(X)+r}\ell^{r}}{(k\ell-m(X)-r)^{m(X)+r}}\leq\frac{k^{m(X)+r}\ell^{r }}{(k\ell/3)^{m(X)+r}}\leq\frac{3^{m(X)+r}}{\ell^{m(X)}}\leq\frac{9^{m(X)}}{ \ell^{m(X)}},\] where we used twice that \(m(X)\geq r\). Note that this holds for subpartitions \(X\) that fix at most \(k\ell/3\) elements and fix at least \(2\) elements in each "active" block. If a partition fixes more elements (but also has at least \(2\) elements in each block), then we can simply take a subpartition of size \(k\ell/3\) and apply the bound to that subpartition. Thus, we get the following bound for larger subpartitions. \[\frac{u_{k,\ell}(X)}{u_{k,\ell}}\leq\frac{9^{k\ell/3}}{\ell^{k\ell/3}}\leq \frac{9^{m(X)/3}}{\ell^{m(X)/3}}. \tag{5}\] Note that the last bound is valid for any partition that fixes at least \(2\) elements in each block. Let us now analyze, how is restricting to partitions that extend \(X\) is expressed in set terms. The family of partitions that extend \(X\) can be expressed in different ways. (In what follows, we refer to elements of \([N]\) as edges in the graph sense. It is natural since they correspond to pairs of elements of the original ground set.) Actually, it is necessary and sufficient for each \(i\in[r]\) to fix a subgraph on \(X_{i}\) that is connected, and add no other edges. Thus, the largest number of edges we can fix is \(\sum_{i=1}^{r}\binom{|X_{i}|}{2}=\sum_{i=1}^{r}\frac{|X_{i}|(|X_{i}|-1)}{2} \leq\frac{k}{2}\sum_{i=1}^{r}(|X_{i}|-1)=\frac{km(X)}{2}\). Put differently, if we have fixed \(x\) edges, then the corresponding \(X\) satisfies \(m(X)\geq\lceil\frac{2x}{k}\rceil\). 
We also note that, in the set interpretation, the corresponding partition cannot fix exactly \(1\) element in a part, and thus the bound (5) is valid for all subpartitions that may arise this way. From here, we get that the family \(\mathcal{U}_{k,\ell}\) is \((\frac{\ell}{9})^{2/3k}\)-spread. If \(\ell\) is sufficiently large (say, \(\ell>k^{Ck}\) with some large constant \(C\)), this spreadness is sufficient to apply the spread approximation machinery. But before we do so, let us analyze, what happens with the \(t\)-intersecting property when passing to the set interpretation. If two \((k,\ell)\)-partitions partially \(t\)-intersect, then they have \(\binom{t}{2}\) edges in common. In what follows, we will be working with this \(\binom{t}{2}\)-intersection property for set families. However, there is a complication that we have to overcome: there are obviously many other ways for two sets to have intersection \(\binom{t}{2}\), without the corresponding partitions necessary being partially \(t\)-intersecting. Luckily, being partially \(t\)-intersecting is the most "economical" way, which allows us to overcome this complication. Consider a \(\binom{t}{2}\)-intersecting (in the set sense) family \(\mathcal{F}\subset\mathcal{U}_{k,\ell}\). Apply Theorem 12 with \(r=(\frac{\ell}{9})^{1/3k}\), \(r_{0}=(\frac{\ell}{9})^{2/3k}\), \(q=k^{10}\). The uniformity \(\ell\binom{k}{2}\) plays the role of \(k\) from the theorem. Note that both inequalities on \(r\) are valid, provided, say, \(\ell>k^{Ck}\) with a large enough \(C\). Provided \(\ell\) is large enough, we get that \((\frac{\ell}{9})^{2/k}>2^{12}\log_{2}(2\ell{k\choose 2}).\) This allows us to apply Theorem 12 and get a family \(\mathcal{S}\) of sets of size at most \(q\) that cover most partitions in \(\mathcal{F}\), and a remainder family \(\mathcal{F}^{\prime}\subset\mathcal{F}\) satisfying \[|\mathcal{F}^{\prime}|\leq(r/r_{0})^{q+1}u_{k,\ell}\leq\Big{(}\Big{(}\frac{9}{ \ell}\Big{)}^{1/3k}\Big{)}^{k^{10}}u_{k,\ell}\leq\ell^{-k^{7}}u_{k,\ell}. \tag{6}\] The next step is to apply Theorem 13 to our approximation \(\mathcal{S}\) with \({t\choose 2}\) playing the role of \(t\). Before we do so, we need to show that \(\mathcal{U}_{k,\ell}\) possess the weak \((r^{\prime},{t\choose 2})\)-spreadness property with a sufficiently large \(r^{\prime}\). In order to do so, we need to return to the analysis of the subpartitions. Let \(E\) be a collection of \({t\choose 2}\) edges and let \(X=(X_{1},\ldots,X_{r})\) be the corresponding subpartition. Then \({t\choose 2}=\sum_{i}{|X_{i}|\choose 2}\). Recall that \(m(X)=\sum_{i}(X_{i}-1)\). It is easy to see that \(m(X)=t-1\) if and only if \(r=1\), and the subpartition consists of just \(1\) set \(X_{1}\) of size \(t\). Moreover, if \(E\) is a collection of \({t\choose 2}+s\) edges, then the corresponding partition \(X^{\prime}\) satisfies \(m(X^{\prime})\geq t+\frac{s}{k}\). We can easily observe the last inequality by taking out vertices in parts of \(X^{\prime}\) one by one and keeping track how does \(m\) and the number of edges change. Choose a collection \(T\) of \({t\choose 2}\) edges that corresponds to a subpartition of \(1\)\(t\)-element set (in other words, a \(t\)-clique). 
Once we are equipped with the property that for any collection of edges \(E\) of size \({t\choose 2}+s\) the corresponding partition \(X\) satisfies \(m(X)\geq t+\frac{s}{k}\), it is easy to do similar calculations as for \(u_{k,\ell}\) and \(u_{k,\ell}(X)\) above, and obtain that \(\mathcal{U}_{n,k}(T)\) is \(r^{\prime}\)-spread with \(r^{\prime}=\big{(}\frac{9}{\ell-1}\big{)}^{1/3k}\). We note that this weak \((r,t^{\prime})\)-spreadness property is subtle in this application: the family \(\mathcal{U}_{k,\ell}\) only possesses it for \(t^{\prime}={t\choose 2}\) for integer \(t\). The next step is to apply Theorem 13 to our approximation \(\mathcal{S}\). Let \({t\choose 2}\) play the role of \(t\), \(\epsilon=1/2\), \(r^{\prime}\) playing the role of \(r\), and \(q=k^{10}\), as above. Again, we can see that the inequality on \(r^{\prime}\) is valid, provided \(\ell\) is large enough (again \(\ell>k^{Ck}\) is sufficient). The conclusion is that \(\mathcal{S}\) must consist of a single set \(T\) of size \({t\choose 2}\), otherwise the size of \(\mathcal{U}_{k,\ell}[S]\) is at least twice smaller than \(\mathcal{U}_{k,\ell}[T]\) for the largest \(T\). Moreover, \(T\) must correspond to a subpartition consisting of one \(t\)-element set. At this point, we have proved a rough version of the conjecture, along with stability: if the size of a partially \(t\)-intersecting family of \((k,\ell)\)-partitions is at least \(0.51|\mathcal{C}_{k,\ell}^{T}|\), then it is contained in some \(\mathcal{C}_{k,\ell}^{T}\), with an exception of at most \(\ell^{-k^{7}}u_{k,\ell}\) sets. These exceptional sets form the family \(\mathcal{F}^{\prime}\) from above, and we aim to show that this family is empty. (In what follows, we assume that \(\mathcal{F}^{\prime}\cap\mathcal{C}_{k,\ell}^{T}=\emptyset\), since otherwise we could move these sets to \(\mathcal{F}\setminus\mathcal{F}^{\prime}\).) In order to show that \(\mathcal{F}^{\prime}=\emptyset\) for an extremal \(\mathcal{F}\), we need to get some understanding on how often do random \((k,\ell)\)-partitions partially \(t\)-intersect. **Lemma 19**.: _Let \(t\geq 2\). For a given set \(T\) of size \(t\) and a \((k,\ell)\)-partition \(Y=(Y_{1},\ldots,Y_{\ell})\notin\mathcal{C}_{k,\ell}^{T}\), the number of other \((k,\ell)\)-partitions from \(\mathcal{C}_{k,\ell}^{T}\) that do not partially \(t\)-intersect it is at least \(\ell^{-2k^{2}}u_{k,\ell}\)._ Proof.: Fix some partition \(X=(X_{1},\ldots,X_{\ell})\in\mathcal{C}^{T}_{k,\ell}\) and consider a random permutation \(\sigma:[k\ell]\setminus T\to[k\ell]\setminus T\). We will prove that \[\Pr[\sigma(X)\text{ partially $t$-intersects $Y$}]\leq\ell^{-k^{2}}.\] Given that \(|\mathcal{C}^{T}_{k,\ell}|\geq\ell^{-k^{2}}u_{k,\ell}\) and that a uniform random permutation \(\sigma\) generates a uniformly random element of \(\mathcal{C}^{T}_{k,\ell}\), the statement of the lemma follows from the displayed formula. We shall expose \(\sigma\) block by block, where the \(i\)-th block describes where is \(X_{i}\) mapped. Let us denote by \(A_{i}\) the event that \(|\sigma(X_{i})\cap Y_{j}|\geq t\) for some \(j\). We suppose that \(T\subset X_{1}\). Let us first deal with the first "exceptional" event \(A_{1}\). There are two possible ways for it to occur. First, it is possible that for some \(j\) we have \(|Y_{j}\cap T|=t-1\) and \(\sigma(X_{1}\setminus T)\cap Y_{j}\neq\emptyset\). Note, that there are at most \(2\) such \(j\) (at most \(1\) for \(t\geq 3\)). 
To bound this probability, we simply look at the probability that these sets intersect. The second possibility is covered by the event that \(|\sigma(X_{1}\setminus T)\cap Y_{j}|\geq 2\) for some \(j\). This is the event that the sets of edges of \(X_{1}\setminus T\) and of the partition \(Y\) intersect. Thus, we can provide the following simple bound: \[\Pr[|\sigma(X_{1})\cap Y_{j}|\geq t\text{ for some }j]\leq\frac{2k|X_{1}\setminus T|}{k\ell-t}+\frac{\binom{k}{2}\ell\binom{|X_{1}\setminus T|}{2}}{\binom{k\ell-t}{2}}\leq\frac{1}{2}. \tag{7}\] Next, we are going to bound the probability that \(A_{i}\) happens, given that none of the \(A_{1},\ldots,A_{i-1}\) happened. Each of these events is included in the event that \(|\sigma(X_{i})\cap Y_{j}|\geq 2\) for some \(j\). This is the event that the sets of edges of \(\sigma(X_{i})\) and of the partition \(Y\) intersect. At this point, we are working with the partition that \(Y\) induces on \([k\ell]\setminus\sigma(X_{1}\cup\ldots\cup X_{i-1})\). The latter set has size \(k(\ell-i+1)\). Restricted to it, \(Y\) is a partition into at most \(\ell\) parts, each of size at most \(k\). Thus, the number of edges in \(Y\) is at most \(\binom{k}{2}(\ell-i+1)\), and we have the bound \[\Pr[A_{i}|\bar{A}_{1},\ldots,\bar{A}_{i-1}]\leq\frac{\binom{k}{2}\sum_{j=1}^{\ell}\binom{|Y_{j}\setminus(\sigma(X_{1}\cup\ldots\cup X_{i-1}))|}{2}}{\binom{k(\ell-i+1)}{2}}\leq\frac{\binom{k}{2}^{2}(\ell-i+1)}{\binom{k(\ell-i+1)}{2}}\leq\frac{k^{2}}{2(\ell-i)}. \tag{8}\] We will use this bound up to \(i=\ell-k^{2}\). The remaining parts, that is, \(R:=\sigma(X_{\ell-k^{2}+1}\cup\ldots\cup X_{\ell})\), form a set of size \(k^{3}\). The partition \(Y\) induced on \(R\) again consists of sets of size at most \(k\), and clearly there is at least \(1\) choice of \(\sigma\) so that each \(X_{i}\) does not intersect any part of \(Y\) induced on \(R\) in more than \(1\) element. At the same time, the number of different partitions of \(R\) into \(k^{2}\) parts of size \(k\) is at most \(k^{3k^{3}}\), and thus \[\Pr[\bar{A}_{\ell-k^{2}+1}\cap\ldots\cap\bar{A}_{\ell}\ |\ \bar{A}_{1},\ldots,\bar{A}_{\ell-k^{2}}]\geq k^{-3k^{3}}. \tag{9}\] Combining the bounds (7), (8), (9), we get \[\Pr[\cap_{i=1}^{\ell}\bar{A}_{i}]\geq\frac{1}{2}k^{-3k^{3}}\prod_{i=1}^{\ell-k^{2}}\Big{(}1+\frac{k^{2}}{2(\ell-i)-k^{2}}\Big{)}^{-1}\geq\frac{1}{2}k^{-3k^{3}}e^{-\sum_{i=1}^{\ell-k^{2}}\frac{k^{2}}{2(\ell-i)-k^{2}}}\geq\frac{1}{2}k^{-3k^{3}}e^{-\frac{k^{2}}{2}\log_{e}\ell}=\frac{1}{2}k^{-3k^{3}}\ell^{-\frac{k^{2}}{2}}\geq\ell^{-k^{2}}.\] This completes the proof of the lemma. Returning to the extremal family \(\mathcal{F}\), assume that \(\mathcal{F}^{\prime}\) is non-empty and thus contains some partition \(Y\). Lemma 19 implies that \(|\mathcal{C}_{k,\ell}^{T}\setminus\mathcal{F}|\geq\ell^{-2k^{2}}u_{k,\ell}\). But (6) implies that \(|\mathcal{F}^{\prime}|\leq\ell^{-k^{7}}u_{k,\ell}\). We conclude that \(|\mathcal{F}^{\prime}|\ll|\mathcal{C}_{k,\ell}^{T}\setminus\mathcal{F}|\), and thus \(\mathcal{F}\) cannot be extremal unless \(\mathcal{F}^{\prime}\) is empty. This concludes the proof of the theorem.
2309.13955
Deep Reinforcement Learning for the Heat Transfer Control of Pulsating Impinging Jets
This research study explores the applicability of Deep Reinforcement Learning (DRL) for thermal control based on Computational Fluid Dynamics. To accomplish that, the forced convection on a hot plate prone to a pulsating cooling jet with variable velocity has been investigated. We begin with evaluating the efficiency and viability of a vanilla Deep Q-Network (DQN) method for thermal control. Subsequently, a comprehensive comparison between different variants of DRL is conducted. Soft Double and Duel DQN achieved better thermal control performance among all the variants due to their efficient learning and action prioritization capabilities. Results demonstrate that the soft Double DQN outperforms the hard Double DQN. Moreover, soft Double and Duel can maintain the temperature in the desired threshold for more than 98% of the control cycle. These findings demonstrate the promising potential of DRL in effectively addressing thermal control systems.
Sajad Salavatidezfouli, Giovanni Stabile, Gianluigi Rozza
2023-09-25T08:41:50Z
http://arxiv.org/abs/2309.13955v1
# Deep Reinforcement Learning for the Heat Transfer Control of Pulsating Impinging Jets ###### Abstract This research study explores the applicability of Deep Reinforcement Learning (DRL) for thermal control based on Computational Fluid Dynamics. To accomplish that, the forced convection on a hot plate prone to a pulsating cooling jet with variable velocity has been investigated. We begin with evaluating the efficiency and viability of a _vanilla_ Deep Q-Network (DQN) method for thermal control. Subsequently, a comprehensive comparison between different variants of DRL is conducted. Soft Double and Duel DQN achieved better thermal control performance among all the variants due to their efficient learning and action prioritization capabilities. Results demonstrate that the soft Double DQN outperforms the hard Double DQN. Moreover, soft Double and Duel can maintain the temperature in the desired threshold for more than 98% of the control cycle. These findings demonstrate the promising potential of DRL in effectively addressing thermal control systems. **Highlights:** * DRL technique demonstrates a successful thermal control performance. * Hard Double DQN is not useful for thermal control tasks. * Soft Double and Duel DQN demonstrate more temperature uniformity along the surface. **Keywords: Thermal Control; Reinforcement Learning; Impinging Jet; DQN** ## 1 Introduction Thermal control is an essential practice with a profound impact on diverse applications, spanning HVAC systems, electronics cooling, medical devices, food and beverage production, and data centers. Achieving optimal operation in these fields often depends on the maintenance of the temperature within a stable and narrow threshold. This can be addressed by the manipulation of heat transfer mechanisms, i.e. conduction, convection, and radiation [1, 2]. Among these mechanisms, convection holds a prominent position, where the transfer of heat is facilitated through the movement of surrounding fluid [3]. In recent years, convection control gained a lot of concentration [4, 5, 6]. Notably, forced convection control has emerged as a focal point, owing to its inherent advantages. Forced convection accelerates the rate of heat transfer, making it a more effective method that can better meet temperature requirements. Pioneering studies have shed light on the potential of this approach. For instance, Davalath and Bayazitoglu examined the forced convection between parallel plates consisting of finite block heat sources [7]. They could regulate the flow and temperature fields through the spacing of blocks, Reynolds and Prandtl numbers. Al-Sarkhi and E. Abu-Nada focused on thermal control optimization by modifying the height and number of fins in a tube [8], pinpointing an optimal combination of these control parameters. In a computational study undertaken by Yilmaz and Oztop, turbulent forced convection in a double forward-facing step was investigated [9]. Their findings underscored the significance of step size as a passive control element in heat transfer scenarios. Extending this pursuit of enhanced heat transfer, Kim et al. employed a control strategy by means of a synthetic jet in a channel [10]. Their results proved the superiority of higher frequencies and direct impingement in achieving enhanced cooling performance. All of these studies exclusively employed passive thermal control techniques. Numerous studies have explored active thermal control to enhance the forced convection mechanism [11, 12, 13]. 
Active control systems, which rely on external power for operation, have shown particular effectiveness in achieving a precise temperature range. Notably, these investigations have highlighted the potential of active methods to deliver highly targeted thermal control systems [14]. However, it is essential to acknowledge that many of these studies require intensive numerical resources to derive their control strategies [15]. This reliance on computational resources has raised concerns about the practicality and efficiency of these approaches in modern industrial settings. Furthermore, while active control systems are promising, there remains a significant challenge in directly applying optimal control algorithms due to the inherent complexity of the governing partial differential equations (PDEs). Despite ongoing research, no universally applicable and efficient method for convection control has emerged as of yet [14].

Over the past few years, the expansion of high-performance computing resources has paved the way for the integration of data-driven and non-intrusive machine learning (ML) methods into the field of fluid dynamics [16, 17, 18, 19]. ML's adaptability offers an innovative modelling framework capable of effectively addressing numerous challenges within fluid mechanics. These challenges encompass reduced-order modeling [20, 21, 22, 23], shape optimization [24], and turbulence closure modeling [25]. Moreover, ML-based approaches have demonstrated significant promise in heat transfer problems, including contact heat transfer [26], critical heat flux [27], and thermal resistance [28] prediction and optimization.

Active thermal control systems encounter a multitude of challenges, ranging from achieving precision and energy efficiency to addressing environmental concerns and controlling heat dissipation. However, one important challenge that ties these together is the time delay. Central to any thermal control system is the time required for heat transfer to occur, allowing the controller to make informed decisions. Fortunately, deep reinforcement learning (DRL), a novel branch of machine learning, has showcased its capacity to tackle both nonlinear challenges and time delay issues in flow control [29]. In DRL problems, the choice of environment plays a crucial role in determining the success of control applications. An environment in DRL represents the simulated or real-world system with which an agent interacts to learn optimal control strategies. Various environments can be employed in DRL, ranging from simple, 0-dimensional models to more complex, 3-dimensional simulations or even experimental data [30, 31, 32]. When it comes to controlling forced convection systems, the choice of environment becomes particularly critical. Many previous studies have explored DRL in conjunction with simplified, analytical models, which provide rough estimations of fluid behaviour. However, the precision of these simplified models may not always meet the demands of highly accurate thermal control. In contrast, Computational Fluid Dynamics (CFD) stands out as a superior choice for the environment. CFD is known for its ability to solve the complete form of the fluid flow equations, providing a highly accurate representation of fluid behaviour. While DRL-CFD studies remain relatively scarce in comparison to other DRL environments, they have demonstrated significant promise.
These studies have progressed from computationally efficient investigations into laminar flow [33, 34, 35, 36] to experimental research involving high Reynolds flow [37, 38, 39]. Furthermore, DRL-CFD has been successfully applied in areas such as flow separation suppression [40] and the enhancement of vortex-induced vibration [41]. This emerging field holds great potential for effectively controlling forced convection systems, thanks to the precision and accuracy of CFD as the chosen environment. While DRL-CFD studies remain relatively rare, their potential applications in forced convection control are highly promising. Notably, this study explores active thermal control by means of impingement cooling, as there is no similar study in the literature, to the best of our knowledge. Moreover, it is worth emphasizing that there is a notable absence of comparable research that systematically compares different DRL methods in the context of thermal control. The effectiveness of the DRL-CFD is demonstrated by evaluating its cooling effect on a system comprising a heater surface under constant heat flux and a cooling jet with controlled velocity. In Section 2, we provide a detailed discussion of the methodology, covering topics such as the reinforcement learning framework, CFD solver, deep reinforcement learning, Deep Q-Networks (DQNs), and the algorithms employed in this study. Section 3 provides the model description. Finally, Section 4 focuses on the results of the implemented control system on the hot plate followed by the discussion on the findings. ## 2 Reinforcement Learning Reinforcement learning (RL) is a fascinating branch of machine learning that utilizes a closed-loop feedback control framework. It represents a novel approach to automatically discovering the optimal control strategy. RL comprises various components and follows a well-defined execution process, depicted in Figure 1. RL begins with the acquisition of an observation, referred to as the state (\(s_{t}\)), which is sampled from the environment at specific times. Based on this observation, the agent selects an action (\(a_{t}\)) that maximizes a specific value called the reward (\(r_{t}\)), which is calculated by a certain _reward_ function. The chosen action acts as a control signal that is executed within the environment, leading to the sampling of the next observation (\(s_{t+1}\)) at the subsequent time. This iterative interaction continues until a predefined terminal condition is met, such as environmental convergence or a designated duration, constituting an episode. Through training over numerous episodes, the agent gains the ability to excel in its performance. In other words, it becomes capable of generating an optimal trajectory of action-state pairs (\(s_{0}\), \(a_{0}\), \(s_{1}\), \(a_{1}\),...). There exist two primary branches of RL known as model-based and model-free [42]. In the model-based method, the agent constructs a model during the training phase to capture the relationship between states and actions in the environment [43]. This enables the agent to predict the outcome of executing any action in the environment. Numerous model-based algorithms have been developed particularly for solving kinetics and motion planning problems [44]. However, one noticeable limitation of model-based methods is their heavy reliance on a comprehensive understanding of the rules governing the environment. 
This requirement poses challenges, especially when dealing with nonlinear problems where constructing an accurate environment model becomes exceedingly difficult. In such cases, model-free methods can offer advantages over model-based approaches. Model-free methods do not necessitate an environment model of any kind, and an agent utilizing model-free methods explicitly learns through trial and error, gradually transitioning from tentative exploration to deliberate planning at higher levels of proficiency [45].

This research primarily concentrates on addressing a fluid dynamics problem, namely the control of turbulent, incompressible fluid flow in conjunction with heat transfer. The problem at hand pertains to a 3-dimensional scenario, where the conservation of mass, momentum, and energy are governed by the interdependent and nonlinear Navier-Stokes and heat equations: \[\nabla\cdot\mathbf{u}=0, \tag{1}\] \[\frac{\partial\mathbf{u}}{\partial t}+\mathbf{u}\cdot(\nabla\mathbf{u})=-\nabla p+\nu\nabla\cdot\nabla\mathbf{u}, \tag{2}\] \[\frac{\partial T}{\partial t}+\mathbf{u}\cdot\nabla T=\alpha\nabla\cdot\nabla T, \tag{3}\] where the velocity vector is represented by \(\mathbf{u}\), pressure by \(p\), temperature by \(T\), the kinematic coefficient of viscosity by \(\nu\), and the thermal diffusivity by \(\alpha\). The numerical simulation is conducted with the assistance of ANSYS FLUENT [46]. It is worth noting that we have opted for a model-free approach, considering the inherent complexity of solving nonlinear partial differential equations.

### Deep Q Network

In typical reinforcement learning, the interaction between the agent and the environment can be formalized as a Markov Decision Process (MDP) given by a tuple \((S,A,T,r,\gamma)\), where \(S\) and \(A\) are the sets of states and actions. \(T\left(s,a,s^{\prime}\right)=P\left[S_{t+1}=s^{\prime}\mid S_{t}=s,A_{t}=a\right]\) is the transition function, and the reward function, \(r\), is \(r(s,a)=\mathbb{E}\left[R_{t+1}\mid S_{t}=s,A_{t}=a\right]\). Finally, \(\gamma\in[0,1]\) is the discount factor. The MDP captures the classical sequential decision-making process, where actions affect both the immediate reward and subsequent states. Thus, the discounted return from a state is defined as \(G_{t}=\sum_{k=0}^{\infty}\gamma_{t}^{(k)}R_{t+k+1}\), the discounted sum of ongoing rewards, where \(\gamma_{t}^{(k)}=\prod_{i=1}^{k}\gamma_{t+i}\). The main goal of the agent is to maximize the discounted return \(G_{t}\).

Q-learning is a well-established and effective method [47] that does not require prior knowledge of the system's dynamics, i.e. it is a model-free method. It allows the agent to learn an approximation of the expected discounted return, also known as the value, by formulating an action-value function as follows: \[Q^{\pi}(s,a)=\mathbb{E}_{\pi}\left[G_{t}\mid S_{t}=s,A_{t}=a\right]. \tag{4}\] where the expectation \(\mathbb{E}_{\pi}\) represents the average value when the agent selects actions according to the policy \(\pi\). As mentioned before, the main goal of reinforcement learning (RL) is to find a policy that maximizes the reward. There exists at least one optimal policy, \(\pi^{*}\), which outperforms any other policy and achieves the optimal action-value function. Thus, we can express the optimal action-value function \(Q^{*}\) as follows: \[Q^{*}(s,a)=\max_{\pi}Q^{\pi}(s,a). \tag{5}\] To derive a new policy from the action-value function, a commonly used approach is \(\epsilon\)-greedy.
Accordingly, the agent selects the action with the highest value (referred to as the _greedy_ action) with a probability of (1-\(\epsilon\)), while choosing a random action uniformly with a probability of \(\epsilon\). Q-learning simply aims to estimate the optimal action-value function, \(Q^{*}\), through an iterative learning process. The key concept behind Q-learning is to update the action-value function, Q, by iteratively refining its estimates: \[\begin{split} Q_{n}(s,a)=Q_{n-1}(s,a)+\alpha[r+\gamma\max_{a^{ \prime}}Q_{n-1}\left(s^{\prime},a^{\prime}\right)\\ -Q_{n-1}(s,a)]\,.\end{split} \tag{6}\] The foundation of Q-learning lies in an important identity known as the Bellman equation. Accordingly, if we have knowledge of the optimal value at the next timestep, \(Q^{*}(s^{\prime},a^{\prime})\), for all possible actions, \(a^{\prime}\), at the next time step, then the best strategy is to select the action, \(a^{\prime}\), that maximizes the expected value of the sum of immediate reward, \(r\), and the discounted future value, \(\gamma Q^{*}(s^{\prime},a^{\prime})\): \[Q^{*}(s,a)=\mathbb{E}_{s^{\prime}}\left[r+\gamma\max_{a^{\prime}}Q^{*}\left(s ^{\prime},a^{\prime}\right)\mid s,a\right]. \tag{7}\] By using the Bellman equation in an iterative manner, we converge towards the optimal action-value function, \(Q_{n}\to Q^{*}\), as \(n\rightarrow\infty\). Interestingly, the learned action-value function, \(Q_{n}\), directly approximates \(Q^{*}\) regardless of the policy being followed. This simplifies the analysis of the algorithm and facilitates early convergence proofs. Consequently, as long as all state-action pairs are continuously updated, the estimate will ultimately converge to the correct values. However, dealing with problems that involve large states and/or action spaces, such as active flow control, becomes exceedingly challenging when attempting to learn Q-value estimation for all possible state-action pair. To address this issue, novel approaches have emerged, employing deep neural networks to represent different aspects of an agent including policies \(\pi(s,a)\) and values \(Q(s,a)\). Within these methods, deep neural networks are utilized as nonlinear function approximators, trained through gradient descent to minimize a suitable loss function and achieve accurate estimation. An exemplary example of this approach is the Deep Q-Network (DQN) proposed by Mnih et al. [48, 49] in which the deep networks and reinforcement learning were successfully coupled. At each step, the agent, based on the current state \(S_{t}\), chooses an action and appends a transition \((S_{t},A_{t},R_{t+1},\gamma_{t+1},S_{t+1})\) to a replay memory buffer that stores transitions. The parameters of the neural network are then optimized using stochastic gradient descent to minimize the loss. \[L(\theta)=\left[R_{t+1}+\gamma_{t+1}\max_{a^{\prime}}Q_{\bar{\theta}}\left(S_{t +1},a^{\prime}\right)-Q_{\theta}\left(S_{t},A_{t}\right)\right]^{2}. \tag{8}\] The time step, \(t\), is randomly selected from the replay memory. The loss gradient is then back-propagated exclusively into the parameters (\(\theta\)) of the online network, which is responsible for action selection. On the other hand, the term \(\bar{\theta}\) represents the parameters of a target network, which serves as a periodically updated copy of the online network and is not directly optimized. To optimize the network, we employ Adam optimizer [50]. 
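To make this concrete, the following is a minimal, illustrative PyTorch sketch of an \(\epsilon\)-greedy action selection and a single optimization step on the loss of Eq. 8. It is not the authors' implementation; the network architecture, layer sizes, and names (e.g., `QNet`, `hidden`) are placeholder assumptions.

```python
# Minimal, illustrative DQN update (Eq. 8) with epsilon-greedy action selection.
# Not the authors' code; architecture and hyperparameters are assumptions.
import random
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, n_states, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_states, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, s):
        return self.net(s)  # Q(s, a) for every discrete action a

def select_action(q_net, state, epsilon, n_actions):
    """Epsilon-greedy: random action w.p. epsilon, greedy action otherwise."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    with torch.no_grad():
        return int(q_net(state.unsqueeze(0)).argmax(dim=1).item())

def dqn_update(q_net, target_net, optimizer, batch, gamma=0.99):
    """One gradient step on the TD loss of Eq. 8 for a sampled mini-batch."""
    s, a, r, s_next, done = batch  # tensors drawn from the replay buffer (a: int64, done: float)
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # max_a' Q_theta_bar(s', a') from the (frozen) target network
        q_next = target_net(s_next).max(dim=1).values
        target = r + gamma * (1.0 - done) * q_next
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()   # optimizer is assumed to be, e.g., torch.optim.Adam
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here `batch` is assumed to be a mini-batch drawn uniformly from the replay memory, and `target_net` a periodically synchronized copy of the online network, as described above.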
We sample mini-batches uniformly from the experience replay and utilize them for the optimization process.

### DQN Improvements

The DQN algorithm has marked a significant milestone; however, it is important to acknowledge several limitations that affect its performance [51]:

(a) Q-values Overestimation: The classical DQN algorithm generally suffers from overestimation bias, which arises due to the maximization step in Eqs. 7 and 8, in which the update rule involves selecting the action with the maximum Q-value for the next state [52]. However, this includes the Q-values estimated by the same network being updated, which can lead to overestimation of the action values. The overestimation bias can result in sub-optimal policies, where the agent tends to be overly optimistic about the value of certain actions, and therefore has adverse effects on learning [51]. This bias increases as the complexity of the environment increases.

(b) Slow convergence: The convergence rate of the classical DQN algorithm is slow when dealing with complex environments. During the early stages of training, the learning process can be unstable and may include oscillations [53]. This instability arises primarily because of the interaction between function approximation (the neural network) and the use of the same network for action selection and evaluation.

Many extensions have been proposed in the literature to address these limitations [54]. We integrate two well-known variants of the DQN algorithm to address the above issues.

- Double DQN: Double DQN aims to address the overestimation bias that exists in the classical DQN by employing two separate neural networks: the target network and the online network. The target network is dedicated to estimating the target Q-values, whereas the online network is responsible for the action selection [55]. The separation of action selection and value estimation helps to reduce the overestimation bias [56, 57]. To incorporate Double Q-learning, the loss is calculated as follows: \[L(\theta)=\left[R_{t+1}+\gamma_{t+1}Q_{\bar{\theta}}\left(S_{t+1},\underset{a^{\prime}}{\mathrm{argmax}}\,Q_{\theta}\left(S_{t+1},a^{\prime}\right)\right)-Q_{\theta}\left(S_{t},A_{t}\right)\right]^{2}. \tag{9}\]

- Dueling DQN: Dueling DQN utilizes an innovative network architecture dedicated to value-based model-free RL [58]. It addresses the limitations of the classical DQN by separating the state value function estimation and the action advantages [58]. This separation allows the network to understand which states are valuable, independent of the action chosen. The action values derived from the improved network are as follows: \[Q(s,a;\xi,\eta,\psi)=\mathrm{V}(s;\xi,\eta)+\left(A(s,a;\xi,\psi)-\frac{1}{N_{\mathrm{act}}}\sum_{a^{\prime}}A\left(s,a^{\prime};\xi,\psi\right)\right). \tag{10}\] As a result, Dueling DQN can generalize better across actions, leading to more efficient decision-making [59].

### DQN Algorithm

The real-time interaction between the CFD solver and the DRL agent is depicted in Fig. 1 and Algorithm 1. This framework can be divided into two main parts. In the first part, the state \(S_{t}\) is formed by collecting data from the environment at time \(t\). Subsequently, the agent chooses an action \(\epsilon\)-greedily, which is used by the CFD solver to solve the flow equations (Eqs. 1-3) and advance the system to the next state \(S_{t+1}\). This interaction continues within each episode until termination.
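Both variants described in Section 2.2 require only local changes to the sketch given earlier. The fragments below illustrate the Double-DQN target of Eq. 9 (action selected by the online network, evaluated by the target network) and a Dueling head in the spirit of Eq. 10; again, these are illustrative sketches with assumed layer sizes, not the implementation used in this study.

```python
# Illustrative Double-DQN target (Eq. 9) and Dueling head (Eq. 10).
# Layer sizes and names are assumptions, not the authors' implementation.
import torch
import torch.nn as nn

def double_dqn_target(q_net, target_net, r, s_next, done, gamma=0.99):
    """Double-DQN target: argmax from the online net, value from the target net."""
    with torch.no_grad():
        a_star = q_net(s_next).argmax(dim=1, keepdim=True)            # action selection (online)
        q_eval = target_net(s_next).gather(1, a_star).squeeze(1)      # action evaluation (target)
        return r + gamma * (1.0 - done) * q_eval

class DuelingQNet(nn.Module):
    """Dueling head: Q(s,a) = V(s) + A(s,a) - mean_a' A(s,a')."""
    def __init__(self, n_states, n_actions, hidden=64):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(n_states, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # V(s; xi, eta)
        self.advantage = nn.Linear(hidden, n_actions)  # A(s, a; xi, psi)

    def forward(self, s):
        h = self.feature(s)
        v = self.value(h)
        a = self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)
```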
Then, as for the second part, the sampled state \(S_{t}\), along with the action \(a_{t}\), reward \(r_{t}\), and next state \(S_{t+1}\), are stored in a replay buffer. The agent, then, extracts transitions of a specified size (mini-batches) and calculates the loss and updates the network parameters. These two parts run in parallel forming the deep Q-network. ## 3 Impingement Jet ### Physics of the Flow Before discussing the DRL-CFD results, one needs to understand the physics of the flow in terms of impingement cooling. According to Fig. 2, the flow field of an impinging jet consists of three regions; the free jet, the impingement region and the wall jet region. These regions are highly sensitive to the effective flow and geometrical parameters including Reynolds number, nozzle-to-plate distance, nozzle section shape and surface geometry. Change in any of the mentioned parameters results in changes in the boundaries between three regions, which subsequently also affects the heat transfer from the impinging surface. Among all the available parameters, the jet velocity can be considered as the most potential variable in active flow control of the impinging jet. A change in the jet velocity and frequency causes a change in the vortex shedding frequency in the flow field which eventually changes the formation and thickness of the boundary layer in the region of the wall jet. Moreover, in practical cases, it is difficult to find a general correlation that can provide an explicit relationship between the heat transfer coefficient or surface temperature in terms of jet velocity. Hence, the use of advanced machine learning techniques such as deep reinforcement learning is necessary for such systems. ### Setup Description We focus on the forced convection of a 3-dimensional hot plate prone to a cooling jet. Fig. 3 shows the geometrical representation of the model. This model represents a simplified version of numerous complex cooling systems such as gas turbine cooling, electronics cooling, steel industry and aerospace applications [61, 62, 63]. The plate (shown in red) has a square shape with a length of 8d and is connected with the impinging jet. It is subject to constant heat flux, \(q^{\prime\prime}\). Jet is placed to the distance of \(H/d=4\). Zero gradient Neumann boundary condition along with a constant pressure is considered for the outer sides of the domain. To decrease computational costs, we conduct calculations solely on one-quarter of the domain utilizing symmetry boundary conditions. The inlet of the jet (shown in blue) incorporates variable velocity magnitude ranging from \(0.1V_{\infty}\sim V_{\infty}\) which is controlled by the DRL agent, while a constant temperature of \(T_{\infty}\) enters the domain. At the initial time, zero velocity and pressure were considered, while the isothermal temperature, \(T_{\infty}\) is applied to the domain. For the sake of precision, a fully structured grid consisting of \(8\times 10^{5}\) hexahedral elements has been utilized, as shown in Fig. 4. The geometrical and flow parameters are inserted in Table 1, along with fluid properties. Consequently, as for the variable velocity of the jet, the Reynolds number range will be: \[\mathrm{Re}=\frac{\rho Vd}{\mu}=170\sim 1700 \tag{11}\] which guarantees a laminar regime in the domain. 
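As a quick arithmetic check of Eq. 11, using the jet diameter, fluid properties, and velocity range listed in Table 1 (an illustrative script, not part of the original study):

```python
# Check of the Reynolds-number range in Eq. 11 with the values from Table 1.
rho = 1.225        # kg/m^3
mu = 1.789e-5      # Pa.s
d = 0.025          # m, jet diameter
v_inf = 1.0        # m/s, maximum jet velocity

for v in (0.1 * v_inf, v_inf):
    re = rho * v * d / mu
    print(f"V = {v:.2f} m/s  ->  Re = {re:.0f}")
# Prints roughly Re = 171 and Re = 1712, i.e. the 170~1700 range quoted in Eq. 11.
```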
## 4 Results and Discussion

The effectiveness of DRL-based thermal control of forced convection on the hot surface is initially validated by conducting a comparative analysis between the classical DQN and the baseline (i.e., the no-control situation) for the 3-D model. Subsequently, the control algorithm is expanded to explore variants of the DQN and provide a comparative study on the suitable approach for thermal control problems.

### DRL Specifications

As mentioned in Section 2.1, the agent's improvement is dictated by a reward-oriented strategy, where an optimized reward function enhances convergence. The following reward function was defined based on the averaged surface temperature, \(T_{\mathrm{surf}}\), and the desired temperature, \(T_{\mathrm{d}}\): \[r_{t}=\begin{cases}+1,&|T_{\mathrm{surf}}-T_{d}|<2,\\ 0.1-0.1\left|T_{\mathrm{surf}}/T_{d}-1\right|,&\text{otherwise}.\end{cases} \tag{12}\] The training process consists of 100 episodes, each lasting 100 seconds (\(10^{4}\) timesteps). Following training, the trained agent is exported for performance testing for 100 seconds.

Figure 1: General overview of the DRL-CFD framework

Figure 2: Formation of different regions in the context of the impinging jet on a flat plate [60]

### Sensitivity Analysis

The appropriate state selection plays a crucial role in determining the agent's actions. Particularly, when dealing with high-dimensional problems, the state representation should encapsulate relevant information about the system's dynamics to reinforce the agent's prediction. The main challenge is to find a balance between the richness of the state representation and its computational feasibility. Including too many variables might lead to a high-dimensional state space, which endangers convergence during training, whilst an insufficient state representation can lead to a suboptimal solution. As for the selection of state variables, pressure, velocity, and temperature are the candidates. Among these, temperature is considered a definitive choice due to its direct effect on thermal control. To determine the most suitable choice between velocity and pressure, we examined their temporal/spatial gradients within the domain. It was observed that velocity exhibits higher gradients all over the domain due to the hydraulic boundary layer. Thus, velocity and temperature can serve as the primary state variables for thermal control systems.

The next complexity is to determine the suitable sensor location for measuring the state variables. A sensitivity analysis has been performed by considering three distances between the probes and the surface, denoted as L in Fig. 5: 1 mm, 5 mm, and 10 mm. Fig. 6 shows the change of total reward in terms of episode number for the mentioned layouts. As depicted, layouts 1 and 2 exhibit similar trends, while the former demonstrated superior performance at the end. Layout 1 corresponds to the placement of the sensors closest to the wall, i.e. \(L=1\) mm.
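Referring back to the reward definition in Eq. 12, a minimal sketch of how such a reward could be evaluated from the averaged surface temperature is given below; the function and variable names are illustrative assumptions, with \(T_{d}=303\) K taken from Table 1.

```python
def reward(t_surf_avg, t_desired=303.0, band=2.0):
    """Reward of Eq. 12: +1 inside a +/- 2 K band around T_d, shaped penalty outside."""
    if abs(t_surf_avg - t_desired) < band:
        return 1.0
    return 0.1 - 0.1 * abs(t_surf_avg / t_desired - 1.0)

# Example: a surface averaging 308 K with T_d = 303 K
print(reward(308.0))   # ~0.098, i.e. slightly positive but far below the in-band reward of 1
```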
\begin{table}
\begin{tabular}{c c c c c c c c}
\(d\) & \(V_{\infty}\) & \(T_{\infty}\) & \(T_{d}\) & \(\rho\) & \(\mu\) & \(k\) & \(C_{p}\) \\ \hline
\((m)\) & \((m/s)\) & \((K)\) & \((K)\) & \((kg/m^{3})\) & \((Pa.s)\) & \((W/mK)\) & \((J/kg.K)\) \\ \hline
0.025 & 1 & 288 & 303 & 1.225 & 1.789e-5 & 0.024 & 1006 \\
\end{tabular}
\end{table}

Table 1: Parameters used in the 3-D thermal control

Figure 3: Schematic representation of the computational domain along with the dimension data of the jet and hot plate

Figure 4: Representation of the structured mesh for the domain

### Classical DQN

As the first part of this research study, we aim to assess the applicability of Deep Reinforcement Learning for thermal control based on a custom environment defined by a CFD solver. Thus, we consider the agent, represented by the jet velocity, within a velocity range of \(0.1V_{\infty}\) to \(V_{\infty}\). Fig. 7 shows the comparison between the temporal changes of the dimensionless average surface temperature (\(T^{*}=T/T_{d}\)) for the lower and higher velocity ranges without any control, along with the results obtained using DQN-based velocity control. The uncontrolled strategy exhibits average temperatures of 1.07 (324 K) and 0.97 (294 K) for the lowest and highest velocities, respectively. In contrast, by applying the DRL-CFD method, we successfully maintained the dimensionless average surface temperature at an average of approximately 1 (303 K). These results indicate the capability of DQN to achieve effective thermal control systems.

The contour plot presented in Fig. 8 illustrates the average temperature distribution in the off- and on-control systems. In the case of the uncontrolled system with a low jet velocity, elevated temperature values are observed throughout the entire surface, except for the central region where the jet flow impinges directly. Conversely, the application of a high jet velocity leads to significantly lower temperatures across the surface. For the controlled system, the contour plot is obtained by statistically averaging the temperature over 0 \(\sim\) 100 seconds. Remarkably, the controlled system consistently maintains an average dimensionless temperature of 1 across the entire surface, with a deviation of less than 2%. These findings highlight the effective temperature regulation achieved by the thermal control system, ensuring stability and uniformity throughout the system.

Figure 5: Probes location for measurement of states during DRL are shown with blue dots

Figure 6: Comparison between total reward for different sensor layouts

Figure 7: Temporal variation of dimensionless surface average temperature for off- and on-control (classic DQN) systems

### Episode Number

We conducted three training runs with varying episode numbers: 50, 100, and 150. Fig. 9 presents the surface average temperature based on the on-control agents trained with different episode numbers. The agent trained with 50 episodes shows higher levels of oscillatory behaviour when utilized for the on-control purpose. However, those trained with 100 and 150 episodes demonstrated lower oscillation, indicating a more desirable behaviour for a controller. Fig. 10 displays the time-averaged temperature contour on the hot plate for the three episode numbers. The temperature contours observed for 100 and 150 episodes exhibit similarities, with cooler temperatures at the center of the surface (\(<T_{d}\)) and higher temperatures (\(>T_{d}\)) at the corners.
Figure 8: Temperature contour for the off- and on-control (classic DQN) systems

Figure 9: Time history of surface temperature and jet velocity for different episode numbers

### DQN Variants

In this part, we conduct a comparative analysis of different variants of DQN. As discussed in Section 2.2, our focus is on investigating the effectiveness of the classical DQN, Double DQN, and Dueling DQN. The main idea behind Double Q-learning is to lessen the overestimation bias that can occur with classical Q-learning algorithms. To address this issue, two subvariants of this method have been developed in terms of target-network updating: the hard update and the soft update. In the hard Double DQN, the target network is updated at specific intervals (i.e., after a fixed number of steps). The soft Double DQN, in contrast, introduces a more gradual way of updating the target network: instead of copying the entire set of parameters from the online network, the soft update performs a weighted average between the online network and the target network as follows [64]:
\[\theta_{\text{target}}\leftarrow\tau\theta_{\text{online}}+(1-\tau)\theta_{\text{target}}, \tag{13}\]
where \(\theta_{\text{target}}\) and \(\theta_{\text{online}}\) refer to the parameters of the target and online networks, respectively, and \(\tau\) is a small value close to zero (here we take \(\tau=0.001\)) that controls the extent of the update. A minimal code sketch of both update rules is given at the end of this subsection.

Fig. 11 illustrates the evolution of the total reward for both the soft and hard Double DQN. The hard Double DQN exhibits high-frequency, high-amplitude oscillations, indicating its failure in the thermal control task. The soft Double DQN, however, demonstrates a substantial increase in total reward across episodes, reaching a remarkable value of 95 at the end. It is worth mentioning that a total reward of 95 implies that the desired temperature is achieved in more than 95% of the time instances during the final episode (based on the reward definition in Eq. 12), demonstrating its excellent performance.

Fig. 12 provides a comparison of the reward evolution for the DQN variants. Overall, the different variants perform similarly in terms of thermal control. However, the classical DQN shows a lower reward throughout the episodes and demonstrates more oscillatory behaviour than the Double and Dueling DQNs. Upon closer examination of the last 30 episodes for the three variants, the Dueling DQN shows the best final reward, followed by the soft Double DQN and then the classical DQN.

Analysing the time-averaged surface temperature for all the agents, Fig. 13 reveals that the classical DQN exhibits higher temperature gradients on the surface. This behaviour corresponds to the fact that the action (jet velocity) undergoes larger changes over time. On the other hand, both the soft Double DQN and Dueling DQN achieve a more uniform surface temperature close to the desired value of \(T^{*}=1\) (303 K). Moreover, the contour of the hard Double DQN illustrates the ineffectiveness of this variant for thermal control.

Fig. 14 presents the changes in temperature and velocity during the on-control simulation for the DQN variants. The soft Double DQN and Dueling DQN exhibit minimal velocity changes, resulting in a more uniform temperature distribution close to the desired setpoint, as predicted. However, the hard Double DQN's large changes in actions (jet velocity) lead to poor temperature control, with a considerable discrepancy from the desired temperature. The soft Double DQN and Dueling DQN methods prove to be effective for jet velocity control, as they produce stable and close-to-desired temperatures on the surface. The classical DQN and hard Double DQN variants, on the other hand, exhibit limitations in achieving optimal thermal control performance. These findings highlight the importance of selecting an appropriate DQN variant for thermal control problems in order to reach the desired temperature efficiently and reliably.
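The two target-network update rules discussed above can be sketched in a few lines. This is only an illustration of Eq. (13) and the fixed-interval copy, assuming the online and target Q-networks are PyTorch `torch.nn.Module` objects; the paper does not specify the framework or implementation details.

```python
import torch

def soft_update(target_net: torch.nn.Module,
                online_net: torch.nn.Module,
                tau: float = 0.001) -> None:
    """Soft target-network update of Eq. (13):
    theta_target <- tau * theta_online + (1 - tau) * theta_target."""
    with torch.no_grad():
        for t_param, o_param in zip(target_net.parameters(),
                                    online_net.parameters()):
            t_param.copy_(tau * o_param + (1.0 - tau) * t_param)

def hard_update(target_net: torch.nn.Module,
                online_net: torch.nn.Module) -> None:
    """Hard update: copy all online parameters into the target network
    (performed only every fixed number of steps)."""
    target_net.load_state_dict(online_net.state_dict())
```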
Figure 10: Temperature contour for different episode numbers

Figure 11: Evolution of the total reward value for the soft and hard Double DQN in terms of the episode number

Figure 12: Evolution of the total reward value for different variants of DQN in terms of the episode number

Figure 13: Temperature contour for different variants of DQN

## 5 Conclusion

This research study focused on the application of the Deep Q-Network (DQN) method to a thermal control problem consisting of a hot plate subject to a cooling jet, with the aid of a custom CFD environment. Two subvariants of Double DQN, namely the hard update and the soft update, were explored to address the overestimation bias that can occur with classical Q-learning algorithms. The results clearly demonstrate that the soft Double DQN outperforms the hard Double DQN in achieving effective thermal control of the hot plate. Comparison with the classical DQN and its advanced variants revealed that both the soft Double DQN and Dueling DQN methods performed well in achieving stable and close-to-desired temperature control. The classical and hard Double DQN variants, however, showed limitations in achieving optimal thermal control, with the classical DQN displaying oscillatory behaviour. By analysing the time-averaged surface temperature, it was observed that the classical DQN exhibited higher temperature gradients along the surface, corresponding to larger jet velocity changes through the control cycle. In contrast, both the soft Double DQN and Dueling DQN achieved a more uniform surface temperature close to the desired value. Although all the variants except for the hard Double DQN demonstrated acceptable performance for the thermal control task, the soft Double DQN and Dueling DQN methods proved to be more effective for jet velocity control, producing stable and close-to-desired temperatures. Future research could explore other advanced deep reinforcement learning algorithms to further enhance thermal control systems.
2309.04597
Existence of solution to a new class of coupled variational-hemivariational inequalities
The objective of this paper is to introduce and study a complicated nonlinear system, called coupled variational-hemivariational inequalities, which is described by a highly nonlinear coupled system of inequalities on Banach spaces. We establish the nonemptiness and compactness of the solution set to the system. We apply a new method of proof based on a multivalued version of the Tychonoff fixed point principle in a Banach space combined with the generalized monotonicity arguments, and elements of the nonsmooth analysis. Our results improve and generalize some earlier theorems obtained for a very particular form of the system.
YR. Bai, S. Migorski, VT. Nguyen, JW. Peng
2023-09-08T21:12:29Z
http://arxiv.org/abs/2309.04597v1
# Existence of solution to a new class of coupled variational-hemivariational inequalities

Yunru Bai\({}^{1,}\)1, S. Migórski\({}^{2}\), V. T. Nguyen\({}^{3}\), J. Peng\({}^{4}\)

\({}^{1}\)_School of Science, Guangxi University of Science and Technology, Liuzhou 545006, Guangxi, China._ \({}^{2}\)_College of Applied Mathematics, Chengdu University of Information Technology, Chengdu 610225, Sichuan, P.R. China, and Jagiellonian University in Krakow, Faculty of Mathematics and Computer Science, ul. Lojasiewicza 6, 30348 Krakow, Poland._ \({}^{3}\)_Department of Mathematics, FPT University, Education zone, Hoa Lac High Tech Park, Km29 Thang Long highway, Thach That ward, Hanoi, Vietnam._ \({}^{4}\)_School of Mathematics Science, Chongqing Normal University, Chongqing 401331, P.R. China._

Footnote 1: Corresponding author. E-mail addresses: [email protected] (Y. Bai), [email protected] (S. Migórski), [email protected] (V. T. Nguyen), [email protected] (J. Peng)

Received February 10, 2022; Accepted March 7, 2022.

**Abstract.** The objective of this paper is to introduce and study a complicated nonlinear system, called coupled variational-hemivariational inequalities, which is described by a highly nonlinear coupled system of inequalities on Banach spaces. We establish the nonemptiness and compactness of the solution set to the system. We apply a new method of proof based on a multivalued version of the Tychonoff fixed point principle in a Banach space combined with the generalized monotonicity arguments, and elements of the nonsmooth analysis. Our results improve and generalize some earlier theorems obtained for a very particular form of the system.

**Keywords.** Coupled variational-hemivariational inequalities; existence; the Tychonoff fixed point theorem; the Clarke subgradient; compactness.

## 1. Introduction

In this paper we study the existence of solution to a new class of systems of two nonlinear coupled variational-hemivariational inequalities with constraints. Each inequality involves a nonlinear operator, the generalized (Clarke) directional derivative of a locally Lipschitz function, a convex potential, and a constraint set. The main feature of the system is a strong coupling which appears in the nonlinear operators and the generalized directional derivatives. Our results concern existence and compactness of the solution set to the system, and generalize the results obtained very recently in [11] by using a different method.

To introduce the problem we need the following functional framework which will be used throughout the paper. Let \((V,\|\cdot\|_{V})\) and \((E,\|\cdot\|_{E})\) be real reflexive Banach spaces, and \(C\subset V\) and \(D\subset E\) be nonempty, closed and convex sets. We are given two nonlinear operators \(A\colon E\times V\to V^{*}\) and \(B\colon V\times E\to E^{*}\), two convex functions \(\psi\colon V\to\overline{\mathbb{R}}:=\mathbb{R}\cup\{+\infty\}\) and \(\theta\colon E\to\overline{\mathbb{R}}\), two nonlinear functions \(J\colon Z_{1}\times X\to\mathbb{R}\) and \(H\colon Z_{2}\times Y\to\mathbb{R}\) (which are locally Lipschitz continuous with respect to their second variables), four linear operators \(\gamma_{1}\colon V\to X\), \(\gamma_{2}\colon E\to Y\), \(\delta_{1}\colon E\to Z_{1}\) and \(\delta_{2}\colon V\to Z_{2}\), and two elements \(h\in V^{*}\) and \(l\in E^{*}\). The system of two coupled nonlinear variational-hemivariational inequalities reads as follows.
**Problem 1.1**.: Find \(u\in C\) and \(w\in D\) satisfying the following inequalities \[\langle A(w,u),v-u\rangle_{V}+J^{0}(\delta_{1}w,\gamma_{1}u;\gamma_{1}(v-u))+ \psi(v)-\psi(u)\geq\langle h,v-u\rangle_{V} \tag{1.1}\] for all \(v\in C\), and \[\langle B(u,w),z-w\rangle_{E}+H^{0}(\delta_{2}u,\gamma_{2}w;\gamma_{2}(z-w))+ \theta(z)-\theta(w)\geq\langle l,z-w\rangle_{E} \tag{1.2}\] for all \(z\in D\). It should be mentioning that Problem 1.1 is new and contains many challenging and important problems as its special cases. We point out below several interesting particular cases of Problem 1.1. 1. If \(J\) and \(H\) are independent of their first variables, respectively, then Problem 1.1 reduces to the following coupled system: find \((u,w)\in C\times D\) such that \[\langle A(w,u),v-u\rangle_{V}+J^{0}(\gamma_{1}u;\gamma_{1}(v-u))+\psi(v)-\psi( u)\geq\langle h,v-u\rangle_{V}\] (1.3) for all \(v\in C\), and \[\langle B(u,w),z-w\rangle_{E}+H^{0}(\gamma_{2}w;\gamma_{2}(z-w))+\theta(z)- \theta(w)\geq\langle l,z-w\rangle_{E}\] (1.4) for all \(z\in D\). This kind of coupled inequalities (1.3)-(1.4) has not been studied in the literature. 2. When \(\psi=\theta\equiv 0\), then Problem 1.1 takes the following form of two coupled hemivariational inequalities: find \((u,w)\in C\times D\) satisfying \[\langle A(w,u),v-u\rangle_{V}+J^{0}(\delta_{1}w,\gamma_{1}u;\gamma_{1}(v-u)) \geq\langle h,v-u\rangle_{V}\] (1.5) for all \(v\in C\), and \[\langle B(u,w),z-w\rangle_{E}+H^{0}(\delta_{2}u,\gamma_{2}w;\gamma_{2}(z-w)) \geq\langle l,z-w\rangle_{E}\] (1.6) for all \(z\in D\). To the best of our knowledge, there is no results available in the literature for the system (1.5)-(1.6). 3. If \(J\equiv 0\) and \(H\equiv 0\), then Problem 1.1 becomes the following system of coupled variational inequalities: find \((u,w)\in C\times D\) satisfying \[\langle A(w,u),v-u\rangle_{V}+\psi(v)-\psi(u)\geq\langle h,v-u\rangle_{V}\] (1.7) for all \(v\in C\), and \[\langle B(u,w),z-w\rangle_{E}+\theta(z)-\theta(w)\geq\langle l,z-w\rangle_{E}\] (1.8) for all \(z\in D\). This system has been considered and investigated in [11] in which the authors applied the Kakutani-Ky Fan fixed point theorem for multivalued operators to prove the existence of solutions to system (1.7)-(1.8). In this paper, in contrast to [11], we give a new proof which is based on a multivalued version of the Tychonoff fixed point principle in a Banach space combined with the theory of nonsmoth analysis, generalized monotonicity arguments and the Minty approach. * When \(\theta\equiv 0\), \(H\equiv 0\) and \(D=E\), then Problem 1.1 can be reformulated as the following variational-hemivariational inequality subjected to a nonlinear equation constraint: find \((u,w)\in C\times D\) such that \[\left\{\begin{array}{ll}\langle A(w,u),v-u\rangle_{V}+J^{0}(\delta_{1}w,\gamma _{1}u;\gamma_{1}(v-u))+\psi(v)-\psi(u)\geq\langle h,v-u\rangle_{V},\\ &\mbox{for all }v\in C,\end{array}\right.\] \[B(u,w)=l.\] * Assume that \(\psi\equiv 0,\ \theta\equiv 0\), \(J\equiv 0\), \(H\equiv 0\), \(C=V\) and \(D=E\). Then Problem 1.1 is equivalent to the following nonlinear system of two coupled equations: find \((u,w)\in C\times D\) such that \[\left\{\begin{array}{ll}A(w,u)=h,\\ B(u,w)=l.\end{array}\right.\] * Suppose that \(\theta\equiv 0\), \(H\equiv 0\), \(D=E\) and \(B\) is independent of its first variable. 
Then Problem 1.1 can be reformulated as the following parameter control system driven by a variational-hemivariational inequality: find \(u\in C\) and \(w\in W\) such that \[\langle A(w,u),v-u\rangle_{V}+J^{0}(\delta_{1}w,\gamma_{1}u;\gamma_{1}(v-u))+ \psi(v)-\psi(u)\geq\langle h,v-u\rangle_{V}\] for all \(v\in C\), where the admissible set \(W\) is defined by \(W:=\{w\in E\mid B(w)=l\}\). * If \(A\) and \(J\) are independent of their first variables, \(B\equiv 0\), \(\theta\equiv 0\), \(H\equiv 0\) and \(l=0\), then Problem 1.1 reduces to the following elliptic variational-hemivariational inequality: find \(x\in C\) such that \[\langle Au,v-u\rangle_{V}+J^{0}(\gamma_{1}(u);\gamma_{1}(v-u))+\psi(v)-\psi(u) \geq\langle h,v-u\rangle_{V}\] (1.9) for all \(v\in C\). The variational-hemivariational inequalities of the form (1.9) have been studied from various perspectives. For example, the results on noncoercive hemivariational inequalities can be found in [2] where the equilibrium problems have been employed, and in [12] where an application to contact problems in mechanics were treated. Several classes of variational-hemivariational and hemivariational inequalities that model problems in contact mechanics have been also studied in [9, 18]. The nonconvex star-shaped constraints sets in evolution hemivariational inequalities have been studied in [5], and singular perturbations of inequality problems were analyzed in [8]. Optimal control problems and inverse problems for the aforementioned inequalities have been investigated in [17, 25]. The elliptic variational-hemivariational inequalities have been treated in [13, 14], differential hemivariational inequalities in [19], and related double phase obstacle problems were considered in [23, 24]. For other recent results on hemivariational inequalities, we refer, for example, to [16, 15, 20, 21, 22, 26, 27] and the references therein. The rest of the paper is organized as follows. In Section 2 we recall a preliminary material needed in the sequel. Section 3 is devoted to state the hypotheses on the data of Problem 1.1, and to deliver the main results of this paper which contain the nonemptiness and compactness of the solution set to Problem 1.1. ## 2. Mathematical Background In this section, we recall a necessary preliminary material which will be used throughout the paper. More details can be found in [1, 3, 4, 6, 18]. Let \((E,\|\cdot\|_{E})\) be a Banach space, \(E^{*}\) be its dual space, and \(\langle\cdot,\cdot\rangle_{E}\) denote the duality brackets between \(E^{*}\) and \(E\). We adopt the symbols "\(\stackrel{{ w}}{{\longrightarrow}}\) " and "\(\rightarrow\)" to symbolize the weak convergence and the strong convergence in various specs, respectively. We recall definitions and properties of upper semicontinuous multivalued operators. **Definition 2.1**.: Let \(Y\) and \(Z\) be topological spaces, \(D\subset Y\) be a nonempty set, and \(G\colon Y\to 2^{Z}\) be a multivalued map. **(i):**: The map \(G\) is called upper semicontinuous (u.s.c., for short) at \(y\in Y\), if for each open set \(O\subset Z\) such that \(G(y)\subset O\), there exists a neighborhood \(N(y)\) of \(y\) satisfying \(G(N(y)):=\cup_{z\in N(y)}G(z)\subset O\). If it holds for each \(y\in D\), then \(G\) is called to be upper semicontinuous in \(D\). 
**(ii):**: The map \(G\) is closed at \(y\in Y\), if for every sequence \(\{(y_{n},z_{n})\}\subset\operatorname{Gr}(G)\) satisfying \((y_{n},z_{n})\rightarrow(y,z)\) in \(Y\times Z\), it holds \((y,z)\in\operatorname{Gr}(G)\), where \(\operatorname{Gr}(G)\) is the graph of the map \(G\) defined by \[\operatorname{Gr}(G):=\{(y,z)\in Y\times Z\mid z\in G(y)\}.\] If it holds for each \(y\in Y\), then \(G\) is called to be closed (or \(G\) has a closed graph). Let \(X_{1}\) and \(X_{2}\) be two Banach spaces. A multivalued map \(F\colon X_{1}\to 2^{X_{2}}\) is called sequentially weakly-weakly closed, if \(F\) is sequentially closed from \(X_{1}\) endowed with the weak topology into the subsets of \(X_{2}\) with the weak topology. The following result provides two useful criteria for the upper semicontinuity of a multivalued map. **Proposition 2.1**.: _Let \(F\colon X\to 2^{Y}\) with \(X\) and \(Y\) topological spaces. The following conditions are equivalent:_ * \(F\) _is upper semicontinuous._ * _For each closed set_ \(C\subset Y\)_,_ \(F^{-}(C):=\{x\in X\mid F(x)\cap C\neq\emptyset\}\) _is closed in_ \(X\)_._ * _For each open set_ \(O\subset Y\)_,_ \(F^{+}(O):=\{x\in X\mid F(x)\subset O\}\) _is open in_ \(X\)_._ The following definitions provide useful notions from the theory of nonsmooth analysis. **Definition 2.2**.: Let \(V\) be a reflexive Banach space, \(\psi\colon V\rightarrow\overline{\mathbb{R}}\) be a proper, convex and l.s.c. function, and \(A\colon V\to 2^{V^{*}}\) be a multivalued operator. The operator \(A\) is called to be * \(\psi\)-pseudomonotone, if for any \(u\), \(v\in V\) fixed, there exists an element \(u^{*}\in Au\) such that \[\langle u^{*},v-u\rangle_{X}+\psi(v)-\psi(u)\geq 0,\] then we have \[\langle v^{*},v-u\rangle_{X}+\psi(v)-\psi(u)\geq 0\] for all \(v^{*}\in A(v)\). * stable \(\psi\)-pseudomonotone with respect to the set \(W\subset V^{*}\), if \(A\) and \(V\ni u\mapsto Au-w\subset V^{*}\) are \(\psi\)-pseudomonotone for each \(w\in W\). **Definition 2.3**.: Let \(X\) be a Banach space with the dual space \(X^{*}\) and norm \(\|\cdot\|_{X}\). A function \(J\colon X\rightarrow\mathbb{R}\) is called to be a locally Lipschitz continuous at \(u\in X\), if there are a neighborhood \(N(u)\) of \(u\) and a constant \(L_{u}>0\) with \[|J(w)-J(v)|\leq L_{u}\|w-v\|_{X}\ \ \text{for all}\ \ w,v\in N(u).\] Given a locally Lipschitz function \(J\colon X\to\mathbb{R}\), the generalized (Clarke) directional derivative of \(J\) at the point \(u\in X\) in the direction \(v\in X\), denoted by \(J^{0}(u;v)\), is defined by \[J^{0}(u;v)=\limsup_{\lambda\to 0^{+},\,w\to u}\frac{J(w+\lambda v)-J(w)}{ \lambda}.\] Further, the generalized subgradient of \(J\colon X\to\mathbb{R}\) at \(u\in X\) is given by \[\partial J(u)=\{\,\xi\in X^{*}\mid J^{0}(u;v)\geq\langle\xi,v\rangle_{X^{*} \times X}\ \text{ for all }\ v\in X\,\}.\] The generalized subgradient and generalized directional derivative of a locally Lipschitz function enjoy nice properties and rich calculus. Here, we summarize some basic results (see for example [18, Proposition 3.23]). **Proposition 2.2**.: _Let \(J\colon X\to\mathbb{R}\) be a locally Lipschitz function. Then_ 1. _For every_ \(x\in X\)_, the function_ \(X\ni v\mapsto J^{0}(x;v)\in\mathbb{R}\) _is positively homogeneous and subadditive, i.e.,_ \(J^{0}(x;\lambda v)=\lambda J^{0}(x;v)\) _for all_ \(\lambda\geq 0\)_,_ \(v\in X\)_, and_ \(J^{0}(x;v_{1}+v_{2})\leq J^{0}(x;v_{1})+J^{0}(x;v_{2})\) _for all_ \(v_{1}\)_,_ \(v_{2}\in X\)_, respectively._ 2. 
_For every_ \(x\in X\)_, the set_ \(\partial J(x)\) _is a nonempty, convex, and weakly* compact subset of_ \(X^{*}\)_, which is bounded by the Lipschitz constant_ \(L_{x}>0\) _of_ \(J\) _near_ \(x\)_._ 3. _The graph of the generalized subgradient operator_ \(\partial J\) _of_ \(J\) _is closed in_ \(X\times(w^{*}\!-\!X^{*})\) _topology, i.e., if_ \(\{x_{n}\}\subset X\) _and_ \(\{\xi_{n}\}\subset X^{*}\) _are sequences such that_ \(\xi_{n}\in\partial J(x_{n})\) _and_ \(x_{n}\to x\) _in_ \(X\)_,_ \(\xi_{n}\stackrel{{ w}}{{\longrightarrow}}\xi\) _in_ \(X^{*}\)_, then_ \(\xi\in\partial J(x)\)_, where, recall,_ \((w^{*}\!-\!X^{*})\) _denotes the space_ \(X^{*}\) _equipped with weak* topology._ We end the section by recalling a multivalued version of the Tychonoff fixed point principle in a Banach space, its proof can be found in [7, Theorem 8.6]. **Theorem 1**.: _Let \(C\) be a bounded, closed and convex subset of a reflexive Banach space \(E\), and \(S\colon C\to 2^{C}\) be a multivalued map such that_ 1. \(S\) _has bounded, closed and convex values,_ 2. \(S\) _is weakly-weakly u.s.c.._ _Then \(S\) has a fixed point in \(C\)._ ## 3. Main Results This section is devoted to the main results of the paper which include the nonemptiness and compactness of the solution set to Problem 1.1. Before we state and prove the results, we make the following hypotheses on the data of Problem 1.1. \(H(0)\): \(C\) and \(D\) are nonempty, closed and convex subsets of \(V\) and \(E\), respectively. \(H(1)\): \(h\in V^{*}\) and \(l\in E^{*}\). \(H(2)\): \(\gamma_{1}\colon V\to X\), \(\gamma_{2}\colon E\to Y\), \(\delta_{1}\colon E\to Z_{1}\) and \(\delta_{2}\colon V\to Z_{2}\) are bounded, linear and compact. \(H(\psi)\): \(\psi\colon V\to\overline{\mathbb{R}}\) is a convex and lower semicontinuous function such that \(\operatorname{dom}(\psi):=\{u\in V\mid\psi(u)<+\infty\}\cap C\neq\emptyset\). \(H(\theta)\): \(\theta\colon E\to\overline{\mathbb{R}}\) is a convex and lower semicontinuous function such that \(\operatorname{dom}(\theta):=\{w\in E\mid\theta(w)<+\infty\}\cap D\neq\emptyset\). \(H(J)\): \(J\colon Z_{1}\times X\to\mathbb{R}\) is such that 1. for every \(w\in Z_{1}\), \(X\ni u\mapsto J(w,u)\in\mathbb{R}\) is locally Lipschitz continuous. 2. there exists a constant \(c_{J}\geq 0\) such that \[\|\xi\|_{X^{*}}\leq c_{J}\left(1+\|u\|_{X}+\|w\|_{Z_{1}}\right)\] for all \(\xi\in\partial J(w,u)\), \(u\in X\) and \(w\in Z_{1}\). 3. the following inequality is valid \[\limsup_{n\to\infty}J^{0}(w_{n},u;v)\leq J^{0}(w,u;v),\] whether \(u\), \(v\) and sequence \(\{w_{n}\}\subset Z_{1}\) is such that \[w_{n}\to w\text{ in }Z_{1}\text{ as }n\to\infty\] for some \(w\in Z_{1}\). \(H(H)\): \(H\colon Z_{2}\times Y\to\mathbb{R}\) is such that 1. for every \(u\in Z_{2}\), \(Y\ni w\mapsto H(u,w)\in\mathbb{R}\) is locally Lipschitz continuous. 2. there exists a constant \(c_{H}\geq 0\) such that \[\|\eta\|_{Y^{*}}\leq c_{H}\left(1+\|u\|_{Z_{2}}+\|w\|_{Y}\right)\] for all \(\eta\in\partial H(u,w)\), \(u\in Z_{2}\) and \(w\in Y\). 3. the following inequality is valid \[\limsup_{n\to\infty}H^{0}(u_{n},w;z)\leq H^{0}(u,w;z),\] whether \(w\), \(z\in Y\) and sequence \(\{w_{n}\}\subset Z_{2}\) is such that \[w_{n}\to w\text{ in }Z_{2}\text{ as }n\to\infty\] for some \(w\in Z_{2}\). \(H(A)\): \(A\colon E\times V\to V^{*}\) satisfies the following conditions: 1. for any \(w\in E\) and \(v\), \(u\in V\), the following inequality holds \[\limsup_{\lambda\to 0}\langle A(w,tv+(1-t)u),v-u\rangle_{V}\leq\langle A(w,u),v-u \rangle_{V}.\] 2. 
for any \(w\in E\) fixed, the multivalued mapping \(V\ni u\mapsto A(w,u)+\gamma_{1}^{*}\partial J(\delta_{1}w,\gamma_{1}u)\subset V ^{*}\) is stable \(\psi\)-pseudomonotone with respect to \(\{h\}\). 3. if \(\{w_{n}\}\subset E\) and \(\{u_{n}\}\subset V\) are such that \[w_{n}\ \stackrel{{ w}}{{\longrightarrow}}\ w\text{ in }E\text{ and }u_{n}\ \stackrel{{ w}}{{\longrightarrow}}\ u\text{ in }V\text{ as }n\to\infty\] for some \((w,u)\in E\times V\), then we have \[\limsup_{n\to\infty}\langle A(w_{n},v),v-u_{n}\rangle_{V}\leq\langle A(w,v),v-u \rangle_{V}.\] 4. the following growth condition is satisfied \[\|A(w,u)\|_{V^{*}}\leq b_{A}(1+\|u\|_{V}+\|w\|_{E})\] for all \((w,u)\in E\times V\) with some \(b_{A}>0\). * there exists a function \(r_{A}\colon\mathbb{R}_{+}\times\mathbb{R}_{+}\to\mathbb{R}\) such that \[\langle A(w,u),u\rangle_{V}-J^{0}(\delta_{1}w,\gamma_{1}u;-\gamma_{1}u)\geq r_{A} (\|u\|_{V},\|w\|_{E})\|u\|_{V}\text{ for all }u\in V\text{ and }w\in E,\] and * for every nonempty and bounded set \(O\subset\mathbb{R}_{+}\), we have \(r_{A}(t,s)\to+\infty\) as \(t\to+\infty\) for all \(s\in O\), * for any constants \(c_{1}\), \(c_{2}\geq 0\), it holds \(r_{A}(t,c_{1}t+c_{2})\to+\infty\) as \(t\to+\infty\), * for sequences \(\{s_{n}\}\subset\mathbb{R}_{+}\) and \(\{t_{n}\}\subset\) such that \[s_{n}\to+\infty,\ t_{n}\to+\infty\text{ and }\tfrac{t_{n}}{s_{n}}\to 0\text{ as }n\to\infty,\] we have \[r_{A}(s_{n},t_{n})\to+\infty\text{ as }n\to\infty.\] \(H(B)\): \(B\colon V\times E\to E^{*}\) satisfies the following conditions: * for any \(u\in V\) and \(z\), \(w\in E\), it holds \[\limsup_{\lambda\to 0}\langle B(u,tz+(1-t)w),z-w\rangle_{E}\leq\langle B(u,w),z-w \rangle_{E}.\] * for each \(u\in V\) fixed, the multivalued mapping \(E\ni w\mapsto B(u,w)+\gamma_{2}^{*}\partial H(\delta_{2}u,\gamma_{2}w)\subset E ^{*}\) is stable \(\theta\)-pseudomonotone with respect to \(\{l\}\). * if \(\{w_{n}\}\subset E\) and \(\{u_{n}\}\subset V\) are such that \[w_{n}\ \stackrel{{ w}}{{\longrightarrow}}\ w\text{ in }E\text{ and }u_{n}\ \stackrel{{ w}}{{\longrightarrow}}\ u\text{ in }V\text{ as }n\to\infty\] for some \((w,u)\in E\times V\), then we have \[\limsup_{n\to\infty}\langle B(u_{n},z),z-w_{n}\rangle_{E}\leq\langle B(u,z),z- w\rangle_{E}.\] * the following growth condition is satisfied \[\|B(u,w)\|_{E^{*}}\leq b_{B}(1+\|u\|_{V}+\|w\|_{E})\] for all \((w,u)\in E\times V\) with some \(b_{B}>0\). * there exists a function \(r_{B}\colon\mathbb{R}_{+}\times\mathbb{R}_{+}\to\mathbb{R}\) such that \[\langle B(u,w),w\rangle_{E}-H^{0}(\delta_{2}u,\gamma_{2}w;-\gamma_{2}w)\geq r _{B}(\|w\|_{E},\|u\|_{V})\|w\|_{E}\text{ for all }u\in V\text{ and }w\in E,\] and * for every nonempty and bounded set \(O\subset\mathbb{R}_{+}\), we have \(r_{B}(t,s)\to+\infty\) as \(t\to+\infty\) for all \(s\in O\), * for any constants \(c_{1}\), \(c_{2}\geq 0\), it holds \(r_{B}(t,c_{1}t+c_{2})\to+\infty\) as \(t\to+\infty\), * for sequences \(\{s_{n}\}\subset\mathbb{R}_{+}\) and \(\{t_{n}\}\subset\) such that \[s_{n}\to+\infty,\ t_{n}\to+\infty\text{ and }\tfrac{t_{n}}{s_{n}}\to 0\text{ as }n\to\infty,\] we have \[r_{B}(s_{n},t_{n})\to+\infty\text{ as }n\to\infty.\] **Theorem 2**.: _Under the assumptions \(H(A)\), \(H(B)\), \(H(0)\), \(H(1)\), \(H(2)\), \(H(J)\), \(H(H)\), \(H(\psi)\) and \(H(\theta)\), the set of solutions to Problem 1.1, denoted by \(\mathbb{S}(h,l)\), is nonempty and weakly compact in \(V\times E\)._ Proof.: The proof of this theorem is divided into five steps. 
**Step 1.**_If the set \(\mathbb{S}(h,l)\) of solutions to Problem 1.1 is nonempty, then \(\mathbb{S}(h,l)\) is bounded._ Assume that \(\mathbb{S}(h,l)\) is nonempty. Let \((u,w)\in\mathbb{S}(h,l)\), \(u_{0}\in\mathrm{dom}\psi\cap C\) and \(w_{0}\in\mathrm{dom}\theta\cap D\) be arbitrary fixed. Then, we have \[\langle A(w,u),u_{0}-u\rangle_{V}+J^{0}(\delta_{1}w,\gamma_{1}u;\gamma_{1}(u_{ 0}-u))+\psi(u_{0})-\psi(u)\geq\langle h,u_{0}-u\rangle_{V},\] and \[\langle B(u,w),w_{0}-w\rangle_{E}+H^{0}(\delta_{2}u,\gamma_{2}w;\gamma_{2}(w_{ 0}-w))+\theta(w_{0})-\theta(w)\geq\langle l,w_{0}-w\rangle_{E}.\] From the subadditivity of \(x\mapsto J^{0}(w,u;x)\) and \(x\mapsto H^{0}(u,w;x)\), we have \[\langle A(w,u),u\rangle_{V}-J^{0}(\delta_{1}w,\gamma_{1}u;-\gamma _{1}u)\] \[\leq \langle A(w,u),u_{0}\rangle_{V}+J^{0}(\delta_{1}w,\gamma_{1}u; \gamma_{1}u_{0})+\psi(u_{0})-\psi(u)-\langle h,u_{0}-u\rangle_{V},\] and \[\langle B(u,w),w\rangle_{E}-H^{0}(\delta_{2}u,\gamma_{2}w;-\gamma _{2}w)\] \[\leq \langle B(u,w),w_{0}\rangle_{E}+H^{0}(\delta_{2}u,\gamma_{2}w; \gamma_{2}w_{0})+\theta(w_{0})-\theta(w)-\langle l,w_{0}-w\rangle_{E}.\] Let \(\xi\in X^{*}\) and \(\eta\in Y^{*}\) be such that \[\xi\in\partial J(\delta_{1}w,\gamma_{1}u)\ \ \text{and}\ \ \langle\xi,\gamma_{1}u_{0}\rangle_{X}=J^{0}(\delta_{1}w,\gamma_{1}u;\gamma_{1}u _{0}),\] \[\eta\in\partial H(\delta_{2}u,\gamma_{2}w)\ \ \text{and}\ \ \langle\eta,\gamma_{2}w_{0}\rangle_{Y}=H^{0}(\delta_{2}u,\gamma_{2}w;\gamma_{1} w_{0}).\] Recall that \(\psi\) and \(\theta\) are convex and l.s.c., so, we invoke [3, Proposition 5.2.25] to find constants \(\alpha_{\psi}\), \(\alpha_{\theta}\), \(\beta_{\psi}\), \(\beta_{\theta}\geq 0\) such that \[\psi(u)\geq-\alpha_{\psi}\|u\|_{V}-\beta_{\psi}\ \ \text{and}\ \ \theta(w)\geq-\alpha_{\theta}\|w\|_{E}-\beta_{\theta}\] for all \((u,w)\in V\times E\). 
We apply the inequalities above and hypotheses \(H(A)\)(iv)-(v) and \(H(B)\)(iv)-(v) to infer that \[r_{A}(\|u\|_{V},\|w\|_{E})\|u\|_{V}\] \[\leq \langle A(w,u),u\rangle_{V}-J^{0}(\delta_{1}w,\gamma_{1}u;-\gamma _{1}u)\] \[\leq \langle A(w,u),u_{0}\rangle_{V}+\langle\xi,\gamma_{1}u_{0} \rangle_{X}+\psi(u_{0})-\psi(u)-\langle h,u_{0}-u\rangle_{V}\] \[\leq b_{A}(1+\|u\|_{V}+\|w\|_{E})\|u_{0}\|_{V}+c_{J}(1+\|\gamma_{1}u \|_{X}+\|\delta_{1}w\|_{Z_{1}})\|\gamma_{1}u_{0}\|_{X}\] \[+\psi(u_{0})-\psi(u)+\|h\|_{V^{*}}\left(\|u_{0}\|_{V}+\|u\|_{V}\right)\] \[\leq b_{A}(1+\|u\|_{V}+\|w\|_{E})\|u_{0}\|_{V}+c_{J}(1+\|\gamma_{1}| \|u\|_{V}+\|\delta_{1}|\|w\|_{E})\|\gamma_{1}\|\|u_{0}\|_{V}\] \[+\psi(u_{0})+\alpha_{\psi}\|u\|_{V}+\beta_{\psi}+\|h\|_{V^{*}} \left(\|u_{0}\|_{V}+\|u\|_{V}\right),\] and \[r_{B}(\|w\|_{E},\|u\|_{V})\|w\|_{E}\] \[\leq \langle B(u,w),w\rangle_{E}-H^{0}(\delta_{2}u,\gamma_{2}w;- \gamma_{2}w)\] \[\leq \langle B(u,w),w_{0}\rangle_{E}+H^{0}(\delta_{2}u,\gamma_{2}w; \gamma_{u}w_{0})+\theta(w_{0})-\theta(w)-\langle l,w_{0}-w\rangle_{E}\] \[\leq b_{B}(1+\|u\|_{V}+\|w\|_{E})\|w_{0}\|_{E}+c_{H}(1+\|\delta_{2}| \|u\|_{V}+\|\gamma_{2}|\|w\|_{E})\|\gamma_{2}\|\|w_{0}\|_{E}\] \[+\theta(w_{0})+\alpha_{\theta}\|w\|_{E}+\beta_{\theta}+\|l\|_{E^{*} }\left(\|w_{0}\|_{E}+\|w\|_{E}\right).\] Hence, we have \[r_{A}(\|u\|_{V},\|w\|_{E})\] \[\leq \frac{b_{A}(1+\|u\|_{V}+\|w\|_{E})\|u_{0}\|_{V}+c_{J}(1+\|\gamma_{1} \|\|u\|_{V}+\|\delta_{1}\|\|w\|_{E})\|\gamma_{1}\|\|u_{0}\|_{V}}{\|u\|_{V}}\] \[+\frac{\psi(u_{0})+\alpha_{\psi}\|u\|_{V}+\beta_{\psi}+\|h\|_{V^{ *}}\left(\|u_{0}\|_{V}+\|u\|_{V}\right)}{\|u\|_{V}}, \tag{3.1}\] and \[r_{B}(\|w\|_{E},\|u\|_{V})\] \[\leq \frac{b_{B}(1+\|u\|_{V}+\|w\|_{E})\|w_{0}\|_{E}+c_{H}(1+\|\delta_{ 2}\|\|u\|_{V}+\|\gamma_{2}\|\|w\|_{E})\|\gamma_{2}\|\|w_{0}\|_{E}}{\|w\|_{E}}\] \[+\frac{\theta(w_{0})+\alpha_{\theta}\|w\|_{E}+\beta_{\theta}+\| \|_{E^{*}}(\|w_{0}\|_{E}+\|w\|_{E})}{\|w\|_{E}}. \tag{3.2}\] Assume that \(\mathbb{S}(h,l)\) is unbounded. Without any loss of generality, we may suppose that there exists a sequence \(\{(u_{n},w_{n})\}\subset\mathbb{S}(h,l)\) satisfying \(\|u_{n}\|_{V}+\|w_{n}\|_{E}\to\infty\) as \(n\to\infty\), namely, one of the following conditions holds: \[\|u_{n}\|_{V}\uparrow+\infty\text{ as $n\to\infty$ and $\{w_{n}\}$ is bounded in $E$}, \tag{3.3}\] or \[\|w_{n}\|_{E}\uparrow+\infty\text{ as $n\to\infty$ and $\{u_{n}\}$ is bounded in $V$}, \tag{3.4}\] or \[\|w_{n}\|_{E}\uparrow+\infty\text{ and $\|u_{n}\|_{V}\uparrow+\infty$ as $n\to\infty$}. 
\tag{3.5}\] If (3.3) is true, then from (3.1) we obtain \[r_{A}(\|u_{n}\|_{V},\|w_{n}\|_{E})\] \[\leq \frac{b_{A}(1+\|u_{n}\|_{V}+\|w_{n}\|_{E})\|u_{0}\|_{V}+c_{J}(1+\| \gamma_{1}\|\|u_{n}\|_{V}+\|\delta_{1}\|\|w_{n}\|_{E})\|\gamma_{1}\|\|u_{0}\| _{V}}{\|u_{n}\|_{V}}\] \[+\frac{\psi(u_{0})+\alpha_{\psi}\|u_{n}\|_{V}+\beta_{\psi}+\|h\|_ {V^{*}}(\|u_{0}\|_{V}+\|u_{n}\|_{V})}{\|u_{n}\|_{V}}.\] Taking the limit as \(n\to\infty\) in the inequality above and using assumption \(H(A)(\mathrm{v})\), it yields \[+\infty=\lim_{n\to\infty}r_{A}(\|u_{n}\|_{V},\|w_{n}\|_{E})\] \[\leq \lim_{n\to\infty}\left[\frac{b_{A}(1+\|u_{n}\|_{V}+\|w_{n}\|_{E}) \|u_{0}\|_{V}+c_{J}(1+\|\gamma_{1}\|\|u_{n}\|_{V}+\|\delta_{1}\|\|w_{n}\|_{E}) \|\gamma_{1}\|\|u_{0}\|_{V}}{\|u_{n}\|_{V}}\right.\] \[+\frac{\psi(u_{0})+\alpha_{\psi}\|u_{n}\|_{V}+\beta_{\psi}+\|h\|_{ V^{*}}(\|u_{0}\|_{V}+\|u_{n}\|_{V})}{\|u_{n}\|_{V}}\right]\] \[= b_{A}\|u_{0}\|_{V}+c_{J}\|\gamma_{1}\|^{2}\|u_{0}\|_{V}+\alpha_{ \psi}+\|h\|_{V^{*}}.\] This leads to a contradiction. So, we conclude that \(\mathbb{S}(h,l)\) is bounded. Likewise, we can employ the same argument to obtain a contradiction when (3.4) occurs. If (3.5) holds, we distinguish further the following cases: * \(\frac{\|u_{n}\|_{V}}{\|w_{n}\|_{E}}\to+\infty\) or \(\frac{\|w_{n}\|_{E}}{\|u_{n}\|_{E}}\to+\infty\) as \(n\to\infty\). 2. there exists \(n_{0}\in\mathbb{N}\) such that \(0<c_{0}\leq\frac{\|u_{n}\|_{V}}{\|w_{n}\|_{E}}\leq c_{1}\) for all \(n\geq n_{0}\) for some \(c_{0},c_{1}>0\). Concerning the case i), we only examine the situation if \(\frac{\|u_{n}\|_{V}}{\|w_{n}\|_{E}}\to+\infty\), because the same conclusion can be obtained by using a similar proof when \(\frac{\|w_{n}\|_{E}}{\|u_{n}\|_{E}}\to+\infty\) as \(n\to\infty\). Keeping in mind (3.1), one has \[r_{A}(\|u_{n}\|_{V},\|w_{n}\|_{E})\] \[\leq \frac{b_{A}(1+\|u_{n}\|_{V}+\|w_{n}\|_{E})\|u_{0}\|_{V}+c_{J}(1+ \|\gamma_{1}\|\|u_{n}\|_{V}+\|\delta_{1}\|\|w_{n}\|_{E})\|\gamma_{1}\|\|u_{0}\| _{V}}{\|u_{n}\|_{V}}\] \[+\frac{\psi(u_{0})+\alpha_{\psi}\|u_{n}\|_{V}+\beta_{\psi}+\|h\|_ {V^{*}}(\|u_{0}\|_{V}+\|u_{n}\|_{V})}{\|u_{n}\|_{V}}.\] By virtue of (3.5) and the condition i), we infer \[+\infty=\lim_{n\to\infty}r_{A}(\|u_{n}\|_{V},\|w_{n}\|_{E})\] \[\leq \lim_{n\to\infty}\left[\frac{b_{A}(1+\|u_{n}\|_{V}+\|w_{n}\|_{E}) \|u_{0}\|_{V}+c_{J}(1+\|\gamma_{1}\|\|u_{n}\|_{V}+\|\delta_{1}\|\|w_{n}\|_{E}) \|\gamma_{1}\|\|u_{0}\|_{V}}{\|u_{n}\|_{V}}\right.\] \[+\frac{\psi(u_{0})+\alpha_{\psi}\|u_{n}\|_{V}+\beta_{\psi}+\|h\|_ {V^{*}}(\|u_{0}\|_{V}+\|u_{n}\|_{V})}{\|u_{n}\|_{V}}\right]\] \[= b_{A}\|u_{0}\|_{V}+c_{J}\|\gamma_{1}\|^{2}\|u_{0}\|_{V}+\alpha_{ \psi}+\|h\|_{V^{*}}.\] This leads to a contradiction too. This implies that \(\mathbb{S}(h,l)\) is bounded. Moreover, if the condition ii) holds, then we can use hypothesis \(H(B)\)(v) to get \[+\infty=\lim_{n\to\infty}r_{B}(\|w_{n}\|_{E},\|u_{n}\|_{V})\] \[\leq \frac{b_{B}(1+\|u_{n}\|_{V}+\|w_{n}\|_{E})\|w_{0}\|_{E}+c_{H}(1+ \|\delta_{2}\|\|u_{n}\|_{V}+\|\gamma_{2}\|\|w_{n}\|_{E})\|\gamma_{2}\|\|w_{0}\| _{E}}{\|w_{n}\|_{E}}\] \[+\frac{\theta(w_{0})+\alpha_{\theta}\|w_{n}\|_{E}+\beta_{\theta}+ \|l\|_{E^{*}}(\|w_{0}\|_{E}+\|w_{n}\|_{E})}{\|w_{n}\|_{E}}\] \[\leq b_{B}(c_{1}+1)\|w_{0}\|_{E}+c_{H}(\|\delta_{2}\|c_{1}+\|\gamma_{2 }\|)\|\gamma_{2}\|\|w_{0}\|_{E}+\theta(w_{0})+\alpha_{\theta}+\|l\|_{E^{*}}.\] This implies that the set \(\mathbb{S}(h,l)\) is bounded. **Step 2**.: _For each \(w\in E\) (resp. \(u\in V\)) fixed, the solution set of inequality problem (1.1) (resp. 
(1.2)) is nonempty, bounded and closed._ Let \(w\in E\) be arbitrary fixed. By the definition of generalized subgradient in the sense of Clarke, we have \[\langle A(w,u)+\gamma_{1}^{*}\xi,u\rangle_{V}=\langle A(w,u),u \rangle_{V}-\langle\xi,-\gamma_{1}u\rangle_{X}\] \[\geq \langle A(w,u),u\rangle_{V}-J^{0}(\delta_{1}y,\gamma_{1}u;-\gamma_ {1}u)\] for all \(\xi\in\partial J(\delta_{1}w,\gamma_{1}u)\). Taking into account hypothesis \(H(A)\)(v) and the inequality above, it gives \[\frac{\inf_{\xi\in\partial J(\delta_{1}w,\gamma_{1}u)}\langle A(w,u)+\gamma^{*}\xi,u\rangle_{V}}{\|u\|_{V}}\geq\frac{\langle A(w,u),u\rangle_{V }-J^{0}(\delta_{1}y,\gamma_{1}u;-\gamma_{1}u)}{\|u\|_{V}}\] \[\geq r_{A}(\|u\|_{V},\|w\|_{E})\] for all \(u\in V\). This means that multivalued operator \[V\ni u\mapsto A(w,u)+\gamma_{1}^{*}\partial J(\delta_{1}w,\gamma_{1}u)\subset V^{*}\] is coercive. In an analogous way, we can verify that for every \(u\in V\) fixed, multivalued operator \[E\ni w\mapsto B(u,w)+\gamma_{2}^{*}\partial H(\delta_{2}u,\gamma_{2}w)\subset E ^{*}\] is coercive as well. Therefore, we can invoke the same arguments as in the proof of [12, Theorem 3] to conclude that the solution set of inequality problem (1.1) (resp. (1.2)) is nonempty, bounded and closed. Next, we introduce the multivalued map \(\Gamma\colon C\times D\to 2^{C\times D}\) defined by \[\Gamma(u,w):=(\mathcal{P}(w),\mathcal{Q}(u))\ \ \text{for all}\ \ (u,w)\in C \times D, \tag{3.6}\] where \(\mathcal{P}\colon E\to 2^{C}\) and \(\mathcal{Q}\colon V\to 2^{D}\) stand for the solution mappings problems (1.1) and (1.2), respectively, namely, \(\mathcal{P}(w)\) and \(\mathcal{Q}(u)\) are the solution sets of problems (1.1) and (1.2) corresponding to \(w\in E\) and \(u\in V\), respectively. From the definition of \(\Gamma\), it is not difficult to show that \((u,w)\in C\times D\) is a fixed point of \(\Gamma\) if and only if it is a solution of Problem 1.1. Based on this property, we are going to verify that \(\Gamma\) has at least one fixed point in \(C\times D\). **Step 3.**_There exists a bounded, closed and convex subset \(\mathcal{X}\) of \(C\times D\) such that \(\Gamma\) maps \(\mathcal{X}\) into itself._ Indeed, it is sufficient to show that there exists a constant \(m_{0}>0\) such that \[\Gamma(\mathcal{O}(m_{0}))\subset\mathcal{O}(m_{0}), \tag{3.7}\] where \(\mathcal{O}(m_{0})\) is defined by \[\mathcal{O}(m_{0}):=\{(u,w)\in C\times D\ |\ \|u\|_{V}\leq m_{0}\ \text{and}\ \|w\|_{E}\leq m_{0}\}.\] Arguing by contradiction, there is no \(m_{0}>0\) such that (3.7) holds. So, for each \(n\in\mathbb{N}\), there exist sequences \((u_{n},w_{n})\), \((v_{n},z_{n})\in C\times D\) satisfying \[(u_{n},w_{n})\in\mathcal{O}(n),\ (v_{n},z_{n})\in\Gamma(u_{n},w_{n})\ \text{and}\ \|v_{n}\|_{V}>n\ \text{or}\ \|z_{n}\|_{E}>n.\] Now, we suppose that \(\|v_{n}\|_{V}>n\). 
It follows from (3.1) that \[r_{A}(\|v_{n}\|_{V},\|w_{n}\|_{E})\] \[\leq \frac{b_{A}(1+\|v_{n}\|_{V}+\|w_{n}\|_{E})\|u_{0}\|_{V}+c_{J}(1+ \|\gamma_{1}\|\|v_{n}\|_{V}+\|\delta_{1}\|\|w_{n}\|_{E})\|\gamma_{1}\|\|u_{0}\| _{V}}{\|v_{n}\|_{V}}\] \[+\frac{\psi(u_{0})+\alpha_{\psi}\|v_{n}\|_{V}+\beta_{\psi}+\|h\|_ {V^{*}}(\|u_{0}\|_{V}+\|v_{n}\|_{V})}{\|v_{n}\|_{V}}.\] Recalling that \(\|w_{n}\|_{E}\leq n<\|v_{n}\|_{V}\), we apply hypothesis \(H(A)\)(v) to find that \[+\infty=\lim_{n\to\infty}r_{A}(\|v_{n}\|_{V},\|w_{n}\|_{E})\] \[\leq \lim_{n\to\infty}\bigg{[}\frac{b_{A}(1+\|v_{n}\|_{V}+\|w_{n}\|_{E })\|u_{0}\|_{V}+c_{J}(1+\|\gamma_{1}\|\|v_{n}\|_{V}+\|\delta_{1}\|\|w_{n}\|_{E })\|\gamma_{1}\|\|u_{0}\|_{V}}{\|v_{n}\|_{V}}\] \[+\frac{\psi(u_{0})+\alpha_{\psi}\|v_{n}\|_{V}+\beta_{\psi}+\|h\|_ {V^{*}}(\|u_{0}\|_{V}+\|v_{n}\|_{V})}{\|v_{n}\|_{V}}\bigg{]}\] \[\leq 2b_{A}\|u_{0}\|_{V}+c_{J}(\|\gamma_{1}\|+\|\delta_{1}\|)\|\gamma_ {1}\|\|\|u_{0}\|_{V}+\alpha_{\psi}+\|h\|_{V^{*}}.\] Hence we get a contradiction. As we did before, it can also lead to a contraction when the case \(\|z_{n}\|_{E}>n\) occurs. This means that there exists a constant \(m_{0}>0\) such that (3.7) is valid. Therefore, we conclude that there exists a bounded, closed and convex subset \(\mathcal{X}\) of \(C\times D\) such that \(\Gamma\) maps \(\mathcal{X}\) into itself. **Step 4.**\(\Gamma\) _is weakly-weakly upper semicontinuous._ Let \(\mathcal{M}\subset C\times D\) be an arbitrary weakly closed set such that \(\Gamma^{-}(\mathcal{M})\neq\emptyset\). From Proposition 2.1, it is sufficient to verify that \(\Gamma^{-}(\mathcal{M})\) is weakly closed in \(V\times E\). Let \(\{(v_{n},z_{n})\}\subset\Gamma^{-}(\mathcal{M})\) be such that \[(u_{n},w_{n})\ \stackrel{{ w}}{{\longrightarrow}}\ (u,w)\text{ in }V \times E\text{ as }n\to\infty \tag{3.8}\] for some \((u,w)\in V\times E\). Hence, for every \(n\in\mathbb{N}\), we are able to find \((u_{n},w_{n})\in V\times E\) satisfying \[(v_{n},z_{n})\in\Gamma(u_{n},w_{n})\cap\mathcal{M}.\] From (3.1) and (3.2), we have \[r_{A}(\|v_{n}\|_{V},\|w_{n}\|_{E})\] \[\leq \frac{b_{A}(1+\|v_{n}\|_{V}+\|w_{n}\|_{E})\|u_{0}\|_{V}+c_{J}(1+ \|\gamma_{1}\|\|v_{n}\|_{V}+\|\delta_{1}\|\|w_{n}\|_{E})\|\gamma_{1}\|\|u_{0} \|_{V}}{\|v_{n}\|_{V}}\] \[+\frac{\psi(u_{0})+\alpha_{\psi}\|v_{n}\|_{V}+\beta_{\psi}+\|h\|_ {V^{*}}(\|u_{0}\|_{V}+\|v_{n}\|_{V})}{\|v_{n}\|_{V}},\] and \[r_{B}(\|z_{n}\|_{E},\|u_{n}\|_{V})\] \[\leq \frac{b_{B}(1+\|u_{n}\|_{V}+\|z_{n}\|_{E})\|w_{0}\|_{E}+c_{H}(1+ \|\delta_{2}\|\|u_{n}\|_{V}+\|\gamma_{2}\|\|z_{n}\|_{E})\|\gamma_{2}\|\|w_{0} \|_{E}}{\|z_{n}\|_{E}}\] \[+\frac{\theta(w_{0})+\alpha_{\theta}\|z_{n}\|_{E}+\beta_{\theta}+ \|l\|_{E^{*}}(\|w_{0}\|_{E}+\|z_{n}\|_{E})}{\|z_{n}\|_{E}}.\] Combining the latter with \(H(A)(\mathrm{v})\), \(H(B)(\mathrm{v})\), we infer that sequence \(\{(v_{n},\,z_{n})\}\) is bounded in \(V\times E\). Passing to a relabeled subsequence if necessary, we may suppose that \[(v_{n},z_{n})\ \stackrel{{ w}}{{\longrightarrow}}\ (v,z)\text{ in }V \times E\text{ as }n\to\infty \tag{3.9}\] for some \((v,z)\in V\times E\). 
Next, for each \(n\in\mathbb{N}\), let \(\xi_{n}\in X^{*}\) and \(\eta_{n}\in Y^{*}\) be such that \[\langle\xi_{n},\gamma_{1}(x-v_{n})\rangle_{X} =J^{0}(\delta_{1}w_{n},\gamma_{1}v_{n};\gamma_{1}(x-v_{n})),\] \[\langle\eta_{n},\gamma_{2}(y-z_{n})\rangle_{Y} =H^{0}(\delta_{2}u_{n},\gamma_{2}z_{n};\gamma_{2}(y-z_{n})).\] Hence, we have \[\langle A(w_{n},v_{n})+\gamma_{1}^{*}\xi_{n},x-v_{n}\rangle_{V}+\psi(x)-\psi( v_{n})\geq\langle h,x-v_{n}\rangle_{V}\] for all \(x\in C\), and \[\langle B(u_{n},z_{n})+\gamma_{2}^{*}\eta_{n},y-z_{n}\rangle_{E}+\theta(y)- \theta(z_{n})\geq\langle l,y-z_{n}\rangle_{E}\] for all \(y\in D\). Then, we use the hypotheses \(H(A)\)(ii) and \(H(B)\)(ii) to find that \[\langle h,x-v_{n}\rangle_{V}\] \[\leq \langle A(w_{n},x)+\gamma_{1}^{*}\alpha_{n},x-v_{n}\rangle_{V}+ \psi(x)-\psi(v_{n})\] \[\leq \langle A(w_{n},x),x-v_{n}\rangle_{V}+J^{0}(\delta_{1}w_{n}, \gamma_{1}x;\gamma_{1}(x-v_{n}))+\psi(x)-\psi(v_{n})\] for all \(\alpha_{n}\in\partial J(\delta_{1}w_{n},\gamma_{1}x)\) and \(x\in C\), and \[\langle l,y-z\rangle_{E}\] \[\leq \langle B(u_{n},y)+\gamma_{2}^{*}\beta_{n},y-z_{n}\rangle_{E}+ \theta(y)-\theta(z_{n})\] \[\leq \langle B(u_{n},y),y-z_{n}\rangle_{E}+H^{0}(\delta_{2}u_{n}, \gamma_{2}y;\gamma_{2}(y-z_{n}))+\theta(y)-\theta(z_{n})\] for all \(\beta_{n}\in\partial H(\delta_{2}u_{n},\gamma_{2}y)\) and \(y\in D\). Passing to the upper limit as \(n\to\infty\) in the inequalities above and using hypotheses \(H(J)\)(iii), \(H(H)\)(iii), \(H(A)\)(iii) and \(H(B)\)(iii), we obtain \[\langle h,x-v\rangle_{V}\] \[= \limsup_{n\to\infty}\langle h,x-v_{n}\rangle_{V}\] \[\leq \limsup_{n\to\infty}\big{[}\langle A(w_{n},x),x-v_{n}\rangle_{V} +J^{0}(\delta_{1}w_{n},\gamma_{1}x;\gamma_{1}(x-v_{n}))+\psi(x)-\psi(v_{n}) \big{]}\] \[\leq \limsup_{n\to\infty}\langle A(w_{n},x),x-v_{n}\rangle_{V}+ \limsup_{n\to\infty}J^{0}(\delta_{1}w_{n},\gamma_{1}x;\gamma_{1}(x-v_{n}))\] \[+\psi(x)-\liminf_{n\to\infty}\psi(v_{n})\] \[\leq \langle A(w,x),x-v\rangle_{V}+J^{0}(\delta_{1}w,\gamma_{1}x; \gamma_{1}(x-v))+\psi(x)-\psi(v),\] and \[\langle l,y-z_{n}\rangle_{E}\] \[= \limsup_{n\to\infty}\langle l,y-z_{n}\rangle_{E}\] \[\leq \limsup_{n\to\infty}\big{[}\langle B(u_{n},y),y-z_{n}\rangle_{E} +H^{0}(\delta_{2}u_{n},\gamma_{2}y;\gamma_{2}(y-z_{n}))+\theta(y)-\theta(z_{n })\big{]}\] \[\leq \limsup_{n\to\infty}\langle B(u_{n},y),y-z_{n}\rangle_{E}+ \limsup_{n\to\infty}H^{0}(\delta_{2}u_{n},\gamma_{2}y;\gamma_{2}(y-z_{n}))\] \[+\theta(y)-\liminf_{n\to\infty}\theta(z_{n})\] \[\leq \langle B(u,y),y-z\rangle_{E}+H^{0}(\delta_{2}u,\gamma_{2}y; \gamma_{2}(y-z))+\theta(y)-\theta(z).\] Hence \[\langle A(w,x),x-v\rangle_{V}+J^{0}(\delta_{1}w,\gamma_{1}x;\gamma_{1}(x-v))+ \psi(x)-\psi(v)\geq\langle h,x-v\rangle_{V} \tag{3.10}\] for all \(x\in C\), and \[\langle B(u,y),y-z\rangle_{E}+H^{0}(\delta_{2}u,\gamma_{2}y;\gamma_{2}(y-z))+ \theta(y)-\theta(z)\geq\langle l,y-z_{n}\rangle_{E} \tag{3.11}\] for all \(y\in D\). Let \(t\in(0,1)\) and \(r\in C\) be arbitrary. 
We insert \(x=x_{t}:=tr+(1-t)v\) into (3.10) and apply the positive homogeneity of \(v\mapsto J^{0}(w,u;v)\) and convexity of \(\psi\) to get \[t\left[\langle A(w,x_{t}),r-v\rangle_{V}+J^{0}(\delta_{1}w,\gamma _{1}x_{t};\gamma_{1}(r-v))+\psi(r)-\psi(v)\right]\] \[\geq t\langle A(w,x_{t}),r-v\rangle_{V}+J^{0}(\delta_{1}w,\gamma_{1} x_{t};t\gamma_{1}(r-v))+\psi(x_{t})-\psi(v)\] \[\geq t\langle h,r-v\rangle_{V}.\] So, we obtain \[\langle A(w,x_{t}),r-v\rangle_{V}+J^{0}(\delta_{1}w,\gamma_{1}x_{t};\gamma_{1 }(r-v))+\psi(r)-\psi(v)\geq\langle h,r-v\rangle_{V}.\] Passing to the upper limit as \(t\downarrow 0\) in the inequality above and using hypothesis \(H(A)\)(i) and upper semicontinuity of \((u,v)\mapsto J^{0}(w,u;v)\), we have \[\langle A(w,v),r-v\rangle_{V}+J^{0}(\delta_{1}w,\gamma_{1}v; \gamma_{1}(r-v))+\psi(r)-\psi(v)\] \[\geq \limsup_{t\downarrow 0}\left[\langle A(w,x_{t}),r-v\rangle_{V}+J^{0 }(\delta_{1}w,\gamma_{1}x_{t};\gamma_{1}(r-v))+\psi(r)-\psi(v)\right]\] \[\geq \langle h,r-v\rangle_{V}.\] Since \(r\in C\) is arbitrary, so, we have that \(v\in C\) is a solution of problem (1.1) corresponding to \(w\in D\). Similarly, we also can obtain that \(z\in D\) is a solution of problem (1.2) corresponding to \(u\in C\). This implies that \((v,z)\in\Gamma(u,w)\). Due to the weak closedness of \(\mathscr{M}\), we obtain that \((v,z)\in\mathscr{M}\), that is, \((u,w)\in\Gamma^{-}(\mathscr{M})\). Therefore, we use Proposition 2.1 to conclude that \(\Gamma\) is weakly-weakly u.s.c.. We conclude that all conditions of the Tychonoff theorem, Theorem 1, have been verified. Using this theorem, we deduce that \(\Gamma\) has at least one fixed point \((u^{*},w^{*})\in C\times D\) in \(\mathscr{X}\). This implies that \((u^{*},w^{*})\in C\times D\) is also a solution of Problem 1.1. **Step 5**.: _The set \(\mathbb{S}(h,l)\) is weakly compact._ From Step 1, we can see that the set \(\mathbb{S}(h,l)\) is bounded. Because of the reflexivity of \(V\times E\), it is sufficient to show that \(\mathbb{S}(h,l)\) is weakly closed. Let \(\{(u_{n},w_{n})\}\subset\mathbb{S}(h,l)\) be a sequence such that \[(u_{n},w_{n})\ \stackrel{{ w}}{{\longrightarrow}}\ (u,w)\ \text{in}\ V\times E\ \text{as}\ n\to\infty \tag{3.12}\] for some \((u,w)\in V\times E\). In virtue of the definition of \(\Gamma\), it yields \((u_{n},w_{n})\in\Gamma(u_{n},w_{n})\). Keeping in mind that \(\Gamma\) is weakly-weakly u.s.c. and has nonempty, bounded, closed and convex values, it follows from [10, Theorem 1.1.4] that \(\Gamma\) is weakly-weakly closed. The latter together with the convergence (3.12) implies that \((u,v)\in\Gamma(u,v)\). From the definition of \(\Gamma\), we infer that \((u,v)\in\mathbb{S}(h,l)\), i.e., \(\mathbb{S}(h,l)\) is weakly closed. Consequently, we conclude that \(\mathbb{S}(h,l)\) is weakly compact in \(V\times E\). This completes the proof. We formulate several corollaries of Theorem 2. To this end, we need the following hypotheses. \(\underline{H(J^{\prime})}\): \(J\colon X\to\mathbb{R}\) is locally Lipschitz continuous such that there exists a constant \(c_{J}\geq 0\) such that \[\|\xi\|_{X^{*}}\leq c_{J}\left(1+\|u\|_{X}\right)\] for all \(\xi\in\partial J(u)\) and \(u\in X\). \(H(H^{\prime})\): \(H\colon Y\to\mathbb{R}\) is locally Lipschitz continuous such that there exists a constant \(c_{H}\geq 0\) such that \[\|\eta\|_{Y^{*}}\leq c_{H}\left(1+\|w\|_{Y}\right)\] for all \(\eta\in\partial H(w)\) and \(w\in Y\). 
\(H(2^{\prime})\): \(\gamma_{1}\colon V\to X\) and \(\gamma_{2}\colon E\to Y\) are bounded, linear and compact. \(H(A)\)(ii)': for each \(w\in E\) the multivalued mapping \(V\ni u\mapsto A(w,u)+\gamma_{1}^{*}\partial J(\gamma_{1}u)\subset V^{*}\) is stable \(\psi\)-pseudomonotone with respect to \(\{h\}\). \(H(A)\)(v)': there exists a function \(r_{A}\colon\mathbb{R}_{+}\times\mathbb{R}_{+}\to\mathbb{R}\) such that \[\langle A(w,u),u\rangle_{V}-J^{0}(\gamma_{1}u;-\gamma_{1}u)\geq r_{A}(\|u\|_{V },\|w\|_{E})\|u\|_{V}\text{ for all }u\in V\text{ and }w\in E,\] and * for every nonempty and bounded set \(O\subset\mathbb{R}_{+}\), we have \(r_{A}(t,s)\to+\infty\) as \(t\to+\infty\) for all \(s\in O\), * for any constants \(c_{1},c_{2}\geq 0\), it holds \(r_{A}(t,c_{1}t+c_{2})\to+\infty\) as \(t\to+\infty\), * for sequences \(\{s_{n}\}\subset\mathbb{R}_{+}\) and \(\{t_{n}\}\subset\) such that \[s_{n}\to+\infty,\;t_{n}\to+\infty\text{ and }\frac{t_{n}}{s_{n}}\to 0\text{ as }n\to\infty,\] we have \[r_{A}(s_{n},t_{n})\to+\infty\text{ as }n\to\infty.\] \(H(B)\)(ii)': for each \(u\in V\) the multivalued mapping \(E\ni w\mapsto B(u,w)+\gamma_{2}^{*}\partial H(\gamma_{2}w)\subset E^{*}\) is stable \(\theta\)-pseudomonotone with respect to \(\{l\}\). \(H(B)\)(v)': there exists a function \(r_{B}\colon\mathbb{R}_{+}\times\mathbb{R}_{+}\to\mathbb{R}\) such that \[\langle B(u,w),w\rangle_{E}-H^{0}(\gamma_{2}w;-\gamma_{2}w)\geq r_{B}(\|w\|_{ E},\|u\|_{V})\|w\|_{E}\text{ for all }u\in V\text{ and }w\in E,\] and * for every nonempty and bounded set \(O\subset\mathbb{R}_{+}\), we have \(r_{B}(t,s)\to+\infty\) as \(t\to+\infty\) for all \(s\in O\), * for any constants \(c_{1},c_{2}\geq 0\), it holds \(r_{B}(t,c_{1}t+c_{2})\to+\infty\) as \(t\to+\infty\), * for sequences \(\{s_{n}\}\subset\mathbb{R}_{+}\) and \(\{t_{n}\}\subset\) such that \[s_{n}\to+\infty,\;t_{n}\to+\infty\text{ and }\frac{t_{n}}{s_{n}}\to 0\text{ as }n\to\infty,\] we have \[r_{B}(s_{n},t_{n})\to+\infty\text{ as }n\to\infty.\] **Corollary 3.1**.: _Suppose that \(H(A)\)(i), (ii)', (iii), (iv)', \(H(B)\)(i), (ii)', (iii), (iv)', \(H(0)\), \(H(1)\), \(H(2^{\prime})\), \(H(J^{\prime})\), \(H(H^{\prime})\), \(H(\psi)\) and \(H(\theta)\) hold. Then, the set of solutions to problem \((\ref{eq:1})\)-\((\ref{eq:2})\) is nonempty and weakly compact in \(V\times E\)._ **Corollary 3.2**.: _Suppose that \(H(A)\), \(H(B)\), \(H(0)\), \(H(1)\), \(H(2)\), \(H(J)\) and \(H(H)\) are fulfilled. Then, the set of solutions to problem \((\ref{eq:1})\)-\((\ref{eq:1})\) is nonempty and weakly compact in \(V\times E\)._ \(H(A)\)(ii)": for each \(w\in E\), \(V\ni u\mapsto A(w,u)\in V^{*}\) is stable \(\psi\)-pseudomonotone with respect to \(\overline{\{h\}}\). 
\(H(A)\)(v)": there exists a function \(r_{A}\colon\mathbb{R}_{+}\times\mathbb{R}_{+}\to\mathbb{R}\) such that \[\langle A(w,u),u\rangle_{V}\geq r_{A}(\|u\|_{V},\|w\|_{E})\|u\|_{V}\text{ for all }u\in V\text{ and }w\in E,\] and * for every nonempty and bounded set \(O\subset\mathbb{R}_{+}\), we have \(r_{A}(t,s)\to+\infty\) as \(t\to+\infty\) for all \(s\in O\), * for any constants \(c_{1},c_{2}\geq 0\), it holds \(r_{A}(t,c_{1}t+c_{2})\to+\infty\) as \(t\to+\infty\), * for sequences \(\{s_{n}\}\subset\mathbb{R}_{+}\) and \(\{t_{n}\}\subset\) such that \[s_{n}\to+\infty,\ t_{n}\to+\infty\text{ and }\tfrac{t_{n}}{s_{n}}\to 0\text{ as }n\to\infty,\] we have \[r_{A}(s_{n},t_{n})\to+\infty\text{ as }n\to\infty.\] \(H(B)\)(ii)": for each \(u\in V\), \(E\ni w\mapsto B(u,w)\in E^{*}\) is stable \(\theta\)-pseudomonotone with respect to \(\overline{\{l\}}\). \(H(B)\)(v)": there exists a function \(r_{B}\colon\mathbb{R}_{+}\times\mathbb{R}_{+}\to\mathbb{R}\) such that \[\langle B(u,w),w\rangle_{E}\geq r_{B}(\|w\|_{E},\|u\|_{V})\|w\|_{E}\text{ for all }u\in V\text{ and }w\in E,\] and * for every nonempty and bounded set \(O\subset\mathbb{R}_{+}\), we have \(r_{B}(t,s)\to+\infty\) as \(t\to+\infty\) for all \(s\in O\), * for any constants \(c_{1},c_{2}\geq 0\), it holds \(r_{B}(t,c_{1}t+c_{2})\to+\infty\) as \(t\to+\infty\), * for sequences \(\{s_{n}\}\subset\mathbb{R}_{+}\) and \(\{t_{n}\}\subset\) such that \[s_{n}\to+\infty,\ t_{n}\to+\infty\text{ and }\tfrac{t_{n}}{s_{n}}\to 0\text{ as }n\to\infty,\] we have \[r_{B}(s_{n},t_{n})\to+\infty\text{ as }n\to\infty.\] **Corollary 3.3**.: _Suppose that \(H(A)\)(i), (ii)", (iii), (iv)", \(H(B)\)(i), (ii)", (iii), (iv)", \(H(0)\), \(H(1)\), \(H(\psi)\) and \(H(\theta)\) are satisfied. Then, the set of solutions to problem \((\ref{eq:1})\)-\((\ref{eq:1})\) is nonempty and weakly compact in \(V\times E\)._ **Remark 3.1**.: Corollary 3.3 coincides with a result of Liu-Yang-Zeng-Zhao [11, Theorem 7]. In comparision with that result, in the present paper, we give a new proof which is based on a multivalued version of the Tychonoff fixed point principle in a Banach space along with the theory of nonsmoth analysis, the generalized monotonicity arguments and the Minty approach. We conclude that our results are much more general, improve the former one in several directions, and are proved by using a new approach. ## 4. Conclusions In the present paper, a coupled system which consists of two nonlinear variational-hemivariational inequalities with constraints in Banach spaces has been investigated. A general existence result to the system was established by using a multivalued version of the Tychonoff fixed point principle in a Banach space together with the theory of nonsmoth analysis, generalized monotonicity arguments and the Minty approach. Our result extends the recent ones obtained in [11, Theorem 7]. There are plenty of problems arising in engineering applications which can be formulated as a system of coupled variational-hemivariational inequalities. With this motivation, in the future, we plan to utilize the theoretical results established in this paper to study various real engineering problems. Also, we will further develop the mathematical theory for systems of the variational-hemivariational inequalities, to cover, for instance, stability analysis, optimal control, sensitivity, and homogenization. **Acknowledgments** This project has received funding from the NNSF of China Grant Nos. 
12001478 and 12101143, the European Union's Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie grant agreement No. 823731 CONMECH, and the Startup Project of Doctor Scientific Research of Yulin Normal University No. G2020ZK07. It is also supported by Natural Science Foundation of Guangxi Grants Nos. 2021GXNSFFA196004, 2020GXNSFBAB-297137, GKAD21220144, 2018GXNSFAA281353, the Ministry of Science and Higher Education of Republic of Poland under Grant No. 440328/PnH2/2019, and the National Science Centre of Poland under Project No. 2021/41/B/ST1/01636.
2310.20542
Development and validation of a measurement-driven inter crystal scatter recovery algorithm with in-system calibration
In PET a high percentage of gamma photons being detected undergo Compton scattering in the scintillator. Scintillator blocks are often built from optically isolated crystals. Depending on the angle of incidence and the scintillator geometry this might lead to inter crystal scatter (ICS) events, where energy is deposited in two or more crystals in the detector, which common positioning and reconstruction algorithms cannot resolve. Therefore, ICS events worsen the spatial resolution and the signal-to-noise ratio in the reconstructed image. We want to address this challenge by recovering individual crystals from ICS events with their corresponding energy deposits. This information could ultimately be fed into an image reconstruction framework. In this work, we established an algorithm based on a detector that couples a readout channel to each crystal (one-to-one coupling), which combines a measurement-driven calibration and a fitting routine to achieve the recovery of crystal interactions from measured light patterns. Using Geant4 simulations, we validated and optimized this approach by comparing the recovered events to the simulation ground truth. We showed that, with the best performing algorithm versions, all correct crystals could be identified for 95-97% of the simulated events and the crystal energies as well as the event energy sum could be recovered adequately. For the event energy sum a deviation of less than 5% could be achieved for 96% of all events. Overall, the developed ICS recovery algorithm was successfully validated for one-to-one coupled detector. Future application for other detector configurations should be possible and will be investigated. Additionally, using the new crystal interaction information to determine the most likely first interaction crystal is being examined to improve efficiency and signal-to-noise ratio in the PET reconstruction.
Katrin Herweg, Volkmar Schulz, David Schug
2023-10-31T15:23:43Z
http://arxiv.org/abs/2310.20542v1
Development and validation of a measurement-driven inter-crystal scatter recovery algorithm with in-system calibration ###### Abstract **Background:** In PET a high percentage of gamma photons being detected undergo Compton scattering in the scintillator. Scintillator blocks are often built from optically isolated crystals. Depending on the angle of incidence and the scintillator geometry this might lead to inter crystal scatter (ICS) events, where energy is deposited in two or more crystals in the detector, which common positioning and reconstruction algorithms cannot resolve. Therefore, ICS events worsen the spatial resolution and the signal-to-noise ratio in the reconstructed image. **Purpose:** We want to address this challenge by recovering individual crystals from ICS events with their corresponding energy deposits. This information could ultimately be fed into an image reconstruction framework. **Methods:** In this work, we established an algorithm based on a detector that couples a readout channel to each crystal (one-to-one coupling), which combines a measurement-driven calibration and a fitting routine to achieve the recovery of crystal interactions from measured light patterns. Using Geant4 simulations, we validated and optimized this approach by comparing the recovered events to the simulation ground truth. **Results:** We showed that, with the best performing algorithm versions, all correct crystals could be identified for 95 % to 97 % of the simulated events and the crystal energies as well as the event energy sum could be recovered adequately. For the event energy sum a deviation of less than 5 % could be achieved for 96 % of all events. **Conclusion:** Overall, the developed ICS recovery algorithm was successfully validated for one-to-one coupled detector. Future application for other detector configurations should be possible and will be investigated. Additionally, using the new crystal interaction information to determine the most likely first interaction crystal is being examined to improve efficiency and signal-to-noise ratio in the PET reconstruction. ## 1 Introduction In positron emission tomography (PET) a high percentage of gamma photons being detected undergo Compton scattering in the scintillator [1]. Depending on the angle of incidence and the scintillator geometry this might lead to inter crystal scatter (ICS) events, where energy is deposited in two or more crystals in the detector. ICS is a long-known challenge for PET [2, 3, 4, 5]. It results in wrongly assigned lines of responses (LORs) due to falsely positioned events, thereby worsening the spatial resolution [6, 7, 8]. In regular data processing only one crystal ID is passed to the reconstruction. Since it is not possible to determine the first-interaction ID of an ICS event with complete certainty, it seems advantageous to consider also first-interaction probabilities and other crystal IDs in the reconstruction [9]. One option to deal with ICS, when it can be resolved, is to reject these events before the reconstruction to avoid a degradation of the spatial resolution [10, 11]. However, this reduces the detector sensitivity noticeably. At this point in time, a positive effect on the image quality in clinical scans has not been shown yet. To tackle this challenge several methods have been proposed including neural network approaches [12, 13] and signal multiplexing [14]. 
We use a measurement-driven algorithm inspired by Gross-Weege et al (2016) [10] and similar to that described in Lee et al (2018) [15], using a linear combination of known light patterns to estimate the measured light pattern. Within the algorithm we aim to very cleanly determine the single crystal interaction (SCI) response of the detector, which then includes all optical and readout effects. This makes the achieved SCI light spread matrix (SCI-LSM) unique for each calibrated detector. What differentiates our approach from that of Lee et al (2018) [15] is the direct application to measurement data, the usage of different positioning patterns in the calibration process and its integration in a processing framework capable of online and post processing [16, 17]. In the end we aim to make crystal interactions accessible for an existing reconstruction framework, in order to utilize all the information available. For now, we focus on a clinical one-to-one coupled detector geometry and validate the algorithm with Geant4 simulations. This simulation models the interaction of an incoming gamma photon with the scintillator without describing the scintillation process itself or the detection of the optical photons by the photosensor. Energy deposits in crystals can be transformed into measured light patterns using the SCI-LSM. The results of the validation are presented here. ## II Materials The proposed algorithm is based on a measurement-driven calibration. In this work, we use a one-to-one coupled PET detector to get the necessary calibration data. The detector consisted of a \(12\times 12\) LSO array (Tianle Photonics Co. LTD.), which was subdivided into \(2\times 2\) crystal sub elements by a double layer of ESR reflector (see Fig. 1). Each of the crystal pixels in a sub element had a size of \(3.92\,\mathrm{mm}\times 3.92\,\mathrm{mm}\times 16\,\mathrm{mm}\). No glue was used inside of the array to optimize the light transport. As photo-sensors, 36 digital silicon photomultipliers (dSiPMs, DPC3200-22 by PDPC) in a \(6\times 6\) arrangement were employed, resulting in \(12\times 12\) readout channels with a pitch of \(4\,\mathrm{mm}\) matching the crystal pitch. One crystal sub group is located on a dSiPM that outputs photon values for all four channels when the readout condition is met [18]. The readout was accomplished by the Hyperion III platform [19]. It should be noted that the application of the method to other sensor technologies, such as analogue SiPMs read out with ASICs [20, 21], is viable, as the algorithm takes no prior knowledge of the detector into account except for the measured light pattern. Measurements were conducted using a \({}^{22}\)Na source, which irradiated the whole crystal array at a detector temperature of \(17\,^{\circ}\)C. We employed DPC trigger scheme 2 with validation network 8-OR (see Tabacchini et al (2014) [18] for a more detailed description) at an overvoltage of \(3\,\mathrm{V}\). ### Geant4 simulation For the validation of the proposed algorithm with a known ground truth, Geant4 simulations of an LSO crystal array with the same geometry as the measurements were created (see Fig. 1b). This resulted in a crystal array of \(6\times 6\) crystal sub elements, which were separated by a \(160\,\mathrm{\SIUnitSymbolMicro m}\) gap. Each sub element consisted of four \(3.9\,\mathrm{mm}\times 3.9\,\mathrm{mm}\times 16\,\mathrm{mm}\) LSO crystals. 
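To make the last step concrete, the following minimal sketch (Python/NumPy) shows how simulated per-crystal energy deposits could be folded with a calibrated SCI-LSM to obtain a synthetic light pattern, i.e. the forward direction of the model \(\mathbf{M}\cdot\vec{e}=\vec{p}\) used later in the recovery. All names (`sci_lsm`, `deposits`) and the placeholder matrix are illustrative assumptions, not the authors' code; the photon-to-energy conversion factors of the calibration are omitted for brevity.

```python
import numpy as np

# Illustrative sketch (not the authors' code): fold simulated per-crystal
# energy deposits with a calibrated single-crystal-interaction light-spread
# matrix (SCI-LSM) to obtain the expected light pattern on all channels.
n_channels = n_crystals = 144  # 12 x 12 one-to-one coupled detector

# Placeholder SCI-LSM: mostly diagonal light collection with a little
# cross-talk; each column (one crystal) is normalised to unit light fraction.
rng = np.random.default_rng(seed=1)
sci_lsm = 0.8 * np.eye(n_channels, n_crystals) + 0.002 * rng.random((n_channels, n_crystals))
sci_lsm /= sci_lsm.sum(axis=0, keepdims=True)

# One simulated inter-crystal scatter event: 340 keV in crystal 40 and
# 171 keV in the neighbouring crystal 41 (deposits summed per crystal).
deposits = np.zeros(n_crystals)
deposits[40], deposits[41] = 340.0, 171.0

# Forward model: expected light pattern p = M . e
light_pattern = sci_lsm @ deposits
```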
Since no optical simulation was conducted, ESR foil between sub elements and wrapping on the outside of the crystal array were not defined. Energy deposits generated by a 511-keV gamma photon are summed up per crystal element. Here we considered a gamma event to be one 511 keV gamma interacting with the crystal array. The gamma source was implemented as an isotropically emitting point source centered in front of the crystal array with a distance of \(20\,\mathrm{mm}\) to illuminate the full array. ## III Methods The proposed algorithm consists of two essential parts. First is a measurement-based calibration, which results in a quantitative knowledge of the light spread for each crystal in the detector in case of a SCI. Secondly, in the recovery step, this knowledge is used to fit Figure 1: The crystal array used for calibration is shown in (a) and the corresponding simulation geometry in (b). a linear combination of light spread functions in order to reproduce the light pattern of a given measurement. ### Calibration The calibration starts by analysing flood measurements, which are conducted as described before. The resulting signals are clustered by their timestamps within a singles cluster window of \(30\,\mathrm{ns}\). With the help of a center of gravity (COG) algorithm the clusters are assigned a position on the detector. Using the location of the pixel with the highest photon count, the main pixel, regions of interest (ROI) are defined. The ROIs are based on the DPCs housing the neighboring pixels to the main pixel. For central main pixel locations there is a horizontal, a vertical and a diagonal neighbor DPC. For main pixels located on the edge or the corner, not all neighbours are physically present. Considering three neighbouring DPCs we can define eight ROIs with different combinations of the neighbouring DPCs. The COG of all pixels of the DPCs belonging to a ROI is then calculated if the respective DPCs are present in the cluster, meaning they have triggered, were validated and their timestamp was within the cluster window. Up to eight different two dimensional COG coordinates can be calculated for each cluster. An example of these COG coordinates can be found in Schug Figure 2: The left column shows an exemplary location of a main pixel marked in blue and the respective ROI for different neighbour combinations specifically for a DPC configuration of \(4\times 4\) DPCs. The right column shows for that COG neighbour ROI the floodplain histogram of all main pixels on the sensor tile. a) and e) depict the main dSiPM criterion, b) and f) the horizontal neighbour criterion, c) and g) the vertical neighbour criterion and d) and h) the diagonal neighbour criterion. et al (2015) [22], where two of the eight possible coordinates were computed. Some exemplary ROIs are shown in Figure 2. The COG positions are then filled into eight separate 2D flood map histograms (see Fig. 2 and Fig. 2(a), where SCIs can be seen in the high-count areas and the connecting lines represent ICS events) according to the ROI they are assigned to. Based on the fitted positions, geometrical cuts in the individual COG variables can be defined to generate the SCI-LSM. With the described method we are adaptive to the DPC triggering situation and optimize the rejection efficiency by using the largest possible cluster size defined by the ROIs. 
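As a minimal illustration of the centre-of-gravity positioning used above, the sketch below computes a photon-weighted COG restricted to an optional region of interest; the function name, arguments and the ROI handling are simplified assumptions rather than the authors' implementation.

```python
import numpy as np

def centre_of_gravity(photon_counts, pixel_x, pixel_y, roi_mask=None):
    """Photon-weighted centre of gravity (COG) of one clustered single.

    photon_counts   : per-pixel photon values of the cluster
    pixel_x, pixel_y: pixel-centre coordinates on the detector (same shape)
    roi_mask        : optional boolean mask selecting an ROI, e.g. the main
                      dSiPM together with one of its neighbouring dSiPMs
    """
    w = np.asarray(photon_counts, dtype=float)
    if roi_mask is not None:
        w = np.where(roi_mask, w, 0.0)   # ignore pixels outside the ROI
    total = w.sum()
    if total <= 0:
        return None                      # ROI dSiPMs absent from the cluster
    return float((w * pixel_x).sum() / total), float((w * pixel_y).sum() / total)
```

Evaluating such a function once per ROI definition would yield up to eight COG coordinates per cluster, which can then be filled into the corresponding flood-map histograms.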
With a further rejection of clusters with DPCs triggered outside the ROIs we remove ICS events with a large distance between the crystals that generate scintillation light and reject pile up events as well (see Fig. 2(b) as an example of one filtered flood map). Figure 2 and Figure 3 both show that there is an inherent variation in the light distribution (see e.g. crystal at \(5\),\(-5\)), where the COG peaks are closer together and asymmetrical in comparison to the rest of the array. This makes the importance of an individual, measurement-based calibration for each detector apparent. These effects cannot be easily modelled with an optical simulation, which typically assumes a regular and perfect optical behaviour over the whole crystal-SiPM detector block. Using the filtered SCI events and exploiting the one-to-one coupling scheme, we assign the crystal ID based on the main pixel ID. For each crystal, we histogram the photon sum of the four pixels of the main DPC, the value used for energy estimation, into one spectrum and the light fraction captured by each readout channel into individual histograms. Normalization is performed by dividing the readout channel's photon count by the value used for energy estimation. A timestamp for the cluster is set to the timestamp of the main DPC. The SCI-LSM consists of the means of the light fraction histograms, while an uncertainty matrix represents the width of these histograms. In case of the used detector with one-to-one coupled crystals this is a \(144\times 144\) matrix for the light fractions and a matrix of the same size for the uncertainties as well as a vector of 144 photon-to-energy conversion factors. ### Recovery As a first step the LSM is reduced to the triggered channels and crystals matching these channel IDs. Only crystals whose calibrated light patterns include the triggered photosensors are considered to have contributed to the measured light pattern. A numerical least-square method is used to solve the matrix equation Eq. 1, where \(\mathbf{M}\) describes the LSM, \(\vec{e}\) is the energy distribution vector of the crystals and \(\vec{p}\) is the measured light distribution, and find the most-probable solution for \(\vec{e}\) to describe the observed photon pattern \(\vec{p}\). \[\mathbf{M}\cdot\vec{e}=\vec{p} \tag{1}\] As the solver is unconstrained, negative energy values can be part of the solution. Therefore, after each solving of Eq. 1, the contributing crystals in the energy distribution vector are either reduced by removing the lowest negative energy entry or by removing all energy entries below a certain threshold larger than zero, which has been varied. Reducing the number of crystals contributing to the energy distribution vector, requires the size of the LSM to also be reduced. This method is applied iteratively allowing the solution to converge to a stable point with physically sensible energy contributions of a set of crystals over several fitting iterations. We considered two methods to improve algorithm performance. On the one hand, we used weights on the calibration matrix and measurement. The weights tested were based on the number of photons detected per SiPM (photon weights) and the standard deviation of the calibrated light fraction distribution per channel (sigma weights). On the other hand, we applied an energy filter (posteriori filter), which dismisses all crystals in the algorithm solution with energies below a certain threshold. 
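The recovery loop itself can be summarised by the following sketch, which solves Eq. (1) by unconstrained least squares and iteratively removes negative (or sub-threshold) crystal contributions before applying the posteriori energy filter. It is a simplified illustration under the assumption that the light pattern has already been converted to energy units with the calibrated conversion factors; the function and variable names are not taken from the authors' implementation.

```python
import numpy as np

def recover_crystals(lsm, pattern, iter_threshold=0.0, posteriori_filter=20.0):
    """Recover per-crystal energies e from M . e = p (Eq. 1) for one event.

    lsm              : (n_channels, n_crystals) calibrated SCI-LSM
    pattern          : measured light pattern, here assumed in keV-equivalent units
    iter_threshold   : > 0 removes all solution entries below this value per
                       iteration; 0 removes only the most negative entry
    posteriori_filter: final cut on the recovered crystal energies
    """
    triggered = pattern > 0
    # Only crystals whose calibrated light pattern reaches a triggered channel
    candidates = np.flatnonzero(lsm[triggered, :].sum(axis=0) > 0)
    energies = np.zeros(0)

    while candidates.size:
        sub_lsm = lsm[triggered][:, candidates]
        energies, *_ = np.linalg.lstsq(sub_lsm, pattern[triggered], rcond=None)
        keep = np.ones(candidates.size, dtype=bool)
        if iter_threshold > 0.0:
            keep = energies >= iter_threshold
        elif energies.min() < 0.0:
            keep[np.argmin(energies)] = False   # drop the most negative crystal
        if keep.all():
            break                               # converged to a physical solution
        candidates, energies = candidates[keep], energies[keep]

    solution = dict(zip(candidates.tolist(), energies.tolist()))
    return {c: e for c, e in solution.items() if e >= posteriori_filter}
```

The two iteration strategies discussed below (removing only negative entries versus thresholding during the iteration) correspond to `iter_threshold = 0` and `iter_threshold > 0` in this sketch.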
If this posteriori filter is employed, it affects both the end solution of the algorithm and the metric calculation. ### Validation For the validation of the algorithm's functionality, the simulation output is used as a ground truth. The energy deposits of the simulation are combined with the light spread as described by the calibration matrix in order to model the channel response of the photo-sensor (see Fig. 6b). Since a numerical method is used to achieve the solution, some deviations are expected. The performance of the algorithm is qualitatively assessed with two metrics: * the first metric (\(\Delta_{\text{crystal}}\)), assesses the relative deviation from the simulation event Figure 3: In (a), a superposition of flood maps for all different DPC readout conditions is shown. The peaks correspond to the crystal positions and SCIs, while the diagonal crosses represent ICS and light sharing. (b) shows the strictly filtered SCIs at the crystal positions for the main dSiPM criterion. It becomes apparent in both figures that the light distribution varies from the mean behaviour due to differences in the optical coupling of the four crystals located on a single DPC for some crystals (see e.g. crystal at \(5,-5\)), which cannot be achieved with a perfect optical simulation and therefore motivates the measurement-based approach. energy for each crystal within the algorithm solution or the ground truth: \[\Delta_{\text{crystal}}=\frac{\sum\limits_{i=1}^{N_{\text{crystal}}}|e^{i}_{ \text{algorithm}}-e^{i}_{\text{simulation}}|}{\sum\limits_{i=1}^{N_{\text{ crystal}}}e^{i}_{\text{simulation}}}\] (2) * the second metric (\(\Delta_{\text{sum}}\)), determines the relative energy sum deviation between the algorithm solution and the simulation: \[\Delta_{\text{sum}}=\frac{\sum\limits_{i=1}^{N_{\text{crystal}}}e^{i}_{ \text{algorithm}}-\sum\limits_{i=1}^{N_{\text{crystal}}}e^{i}_{\text{ simulation}}}{\sum\limits_{i=1}^{N_{\text{crystal}}}e^{i}_{\text{simulation}}}\] (3) For the best performance of the algorithm both metrics should be as close to zero as possible. Additionally, we determine the fraction of events (correct crystal fraction), for which the algorithm has assigned all the crystals correctly without considering their energy deposition. In order to achieve a more realistic simulation result, uncertainties were included before applying the algorithm. The two modelled effects are the energy resolution of the crystal and its light yield non-proportionality. To model the intrinsic energy resolution of LSO crystals a Gaussian centered around each energy value with a full width at half maximum (FWHM) of \(8\,\mathrm{\char 37}\) was sampled randomly adding an uncertainty to the simulation. The light yield non-proportionality of LSO is well described in Moszynski et al (2016) [23]. According to their non-proportionality fraction at \(662\,\mathrm{keV}\), we developed a continuous model describing the non-proportionality for all energies (see Fig. 4). The model consists of a hyperbolic fit to the data of Moszynski et al (2016) [23], continued for energies lower than \(20\,\mathrm{keV}\) with a linear function. This model is applied to the simulation after it was converted into the channel response of the photo-sensor. ## IV Results ### Basic simulation The first results of the validation show that the iterative reduction of the most negative energy contribution in the solving method results in many low energy contributions remaining in crystals not present in the simulation ground truth (compare Fig. 
6a and 6c). These crystals also cause an overestimation of the event energy sum (see Fig. 5a). For the weighting approach, using the photon weights on both matrix and measurement broadened both metric distributions significantly and deteriorated the mean metric values (see Tab. 1). The number of iterations the algorithm goes through is slightly reduced compared to the case without weighting (see Fig. 7). When applying the sigma weights to matrix and measurement, the \(\Delta_{\rm crystal}\) distribution is shifted closer to zero and slightly sharpened, which is also shown in a smaller mean \(\Delta_{\rm crystal}\) (see Tab. 1). The behaviour for \(\Delta_{\rm sum}\) is similar, reducing the mean metric value. The number of solving iterations in the algorithm seems to increase on average (see Fig. 7). Also, the correct crystal fraction rises from \(0.5\,\%\) (no weights) to \(7.5\,\%\) (sigma weights) (see Tab. 1). Using the posteriori filter without weighting and scanning through different values, we found that for a filter of \(20\,\mathrm{keV}\), the \(\Delta_{\rm crystal}\) metric goes into saturation (see Fig. 8) and does not improve significantly for higher filter values, while the \(\Delta_{\rm sum}\) metric crosses zero between filter values \(10\,\mathrm{keV}\) and \(15\,\mathrm{keV}\) and continues to deteriorate again from \(20\,\mathrm{keV}\) on. Therefore, we chose an energy filter of \(20\,\mathrm{keV}\) and applied it to all further evaluation with Figure 4: Light yield non-proportionality according to Moszynski et al [23], down to energies of \(16\,\mathrm{keV}\) this follows a hyperbolic function (orange fit). To determine the non-proportionality for energies lower than \(16\,\mathrm{keV}\) a linear behaviour is assumed (green fit). In application, the switch from hyperbolic to linear model is made at \(20\,\mathrm{keV}\) (red line). iterative reduction of solutions. The energy filter removes most of the misidentified crystal solutions and reduces the energy overestimation significantly (see Fig. 5a). With this filter, the correct crystal fraction rises from \(0.5\,\mathrm{\char 37}\) to \(96.4\,\mathrm{\char 37}\). It is also applied to both metrics (see Fig. 5c and Fig. 5e). Compared to the unfiltered case (see Fig. 5b and Fig. 5d), the metrics show a sharper distribution close to \(0\) with only few outliers surpassing the absolute value of \(0.05\). In general, \(95.9\,\mathrm{\char 37}\) of events fulfill this criterion. We found that combining sigma weighting and energy filter did not significantly improve the metric values compared to no weighting with energy filter and instead reduced the number of events with correctly identified crystals while increasing the average number of iterations within the algorithm. Therefore, we did not consider weights for further analysis, only using the energy filter instead. The results with the energy filter led us to also test iteration methods with thresholds instead of iteratively reducing negative solutions. Removing all solutions below a set threshold, shows a quick improvement of metric values and fraction of events with correctly Figure 5: In (a) the event energy sum spectra of the simulation, the algorithm solution and the algorithm solution excluding crystal contributions below \(20\,\mathrm{keV}\) are shown. The algorithm solution spectrum shows a structure similar to the simulation, but consistently overestimates energy, which can be seen by the shift of the peaks to higher energies. 
If crystal contributions with low energies are removed from the energy sum, this effect is significantly reduced. (b) and (d) demonstrate the distribution of the two metrics without energy filter. In (c) and (e), on the other hand, the metric distributions with \(20\,\mathrm{keV}\) energy filter can be seen. The filter shifts the distributions closer to zero and sharpens them significantly. assigned crystals (see Tab. 2) for increased thresholds. ### Simulation with uncertainties After including the uncertainties described in section II., the energy spectrum of the simulation is broadened and shifted slightly towards lower energies (see Fig. 8(a)). The algorithm solution with a posteriori filter of \(20\,\mathrm{keV}\) reproduces the shape and position of the modified simulation's energy spectrum. Both metrics cluster mainly around zero with a broader distribution than before modifying the simulation (see Fig. 8(b) and Fig. 8(c)). The \(\Delta_{\mathrm{sum}}\) metric is also slightly shifted towards negative values. Applying the sigma weights to this modified dataset showed no significant improvement of the mean and a slight decrease in the correct crystal fraction. Figure 6: Crystal and energy distribution of one exemplary event for the simulation in (a), the simulation with added lightspread in (b) and the algorithm solution in (c). This example is one of the events with the worst metric values (\(\Delta_{\mathrm{crystal}}=0.21\) and \(\Delta_{\mathrm{sum}}=0.14\)), which is caused by the high energy deposit of over \(20\,\mathrm{keV}\) in crystal \((5,5)\). No energy filter is applied for the plots and the metrics given. Figure 7: Number of solving iterations within the algorithm for different weights. (a) shows no weights. In (b) photon weights were applied and in (c) the case for rms weights can be seen. \begin{table} \begin{tabular}{l c c c c} \hline \hline & \(\overline{\Delta}_{\rm crystal}\) & \(\overline{\Delta}_{\rm sum}\) & corr. crystal frac. / \% \\ \hline **no uncertainties** & & & \\ \hline no weights & 0.0604\(\pm\) 0.0004 & 0.0449\(\pm\) 0.0003 & 0.5 \\ \hline photon weights & 0.2875\(\pm\) 0.0027 & \(-\)0.1171\(\pm\) 0.0021 & 0.0 \\ \hline sigma weights & 0.0459\(\pm\) 0.0006 & 0.0329\(\pm\) 0.0006 & 7.5 \\ \hline no weights & 0.0120\(\pm\) 0.0003 & \(-\)0.0044\(\pm\) 0.0002 & 96.4 \\ \hline sigma weights & 0.0161\(\pm\) 0.0005 & 0.0028\(\pm\) 0.0005 & 89.4 \\ \hline **with uncertainties** & & & \\ \hline no weights & & & \\ \hline + posteriori filter 20 keV & 0.0424\(\pm\) 0.0005 & \(-\)0.0277\(\pm\) 0.0007 & 96.6 \\ \hline sigma weights & & & \\ \hline + posteriori filter 20 keV & 0.0421\(\pm\) 0.0005 & \(-\)0.0212\(\pm\) 0.0008 & 89.7 \\ \hline \hline \end{tabular} \end{table} Table 1: Mean metric values for different weighting and energy filtering combinations with iterative reduction of negative solutions. Additionally, the correct crystal fraction is shown. Figure 8: Metric values for different energy filter values are shown. The decrease of both metrics reduces around 20 keV. \begin{table} \begin{tabular}{l c c c c} \hline \hline & \(\overline{\Delta}_{\rm crystal}\) & \(\overline{\Delta}_{\rm sum}\) & corr. crystal frac. 
/ \% \\ \hline **no uncertainties** & & & \\ \hline threshold 0 keV & 0.0441\(\pm\) 0.0004 & 0.0309\(\pm\) 0.0003 & 3.6 \\ \hline threshold 5 keV & 0.0101\(\pm\) 0.0002 & 0.0008\(\pm\) 0.0002 & 90.2 \\ \hline threshold 10 keV & 0.0092\(\pm\) 0.0002 & 0.0001\(\pm\) 0.0002 & 95.0 \\ \hline threshold 20 keV & 0.0101\(\pm\) 0.0003 & \(-\)0.0004\(\pm\) 0.0002 & 95.1 \\ \hline **with uncertainties** & & & \\ \hline threshold 10 keV & 0.0529\(\pm\) 0.0010 & \(-\)0.0356\(\pm\) 0.0011 & 95.4 \\ \hline threshold 20 keV & 0.0541\(\pm\) 0.0011 & \(-\)0.0359\(\pm\) 0.0011 & 93.6 \\ \hline \hline \end{tabular} \end{table} Table 2: Mean metric values for different thresholds during the solving iteration and no weighting. Additionally, the correct crystal fraction is shown. Figure 9: In (a) the event energy sum spectra of the simulation (sim), the simulation with uncertainties (sim modified), the algorithm solution (alg) and the algorithm solution with an energy filter of \(20\,\mathrm{keV}\) (alg filtered) are shown. The algorithm solution spectrum shows a structure similar to the simulation with uncertainties, but slightly overestimates the energy. If the energy filter is applied, this effect is significantly reduced (alg filtered) and modified simulation and filtered algorithm solution fit together satisfactorily. (b) demonstrates the distribution of the \(\Delta_{\mathrm{crystal}}\) metric with applied \(20\,\mathrm{keV}\) energy filter, while (c) shows the same for the \(\Delta_{\mathrm{sum}}\) metric. Discussion The overall validation of the described algorithm is deemed successful, with a high correct crystal fraction after applying the posteriori filter, and a low deviation of energy contributions in these crystals as described by low metric values. The correct crystal fraction of more than 96 % can be roughly compared to the identification rate stated by Lee et al (2018), which lies at 93 % for one-to-one coupled detectors [15], showing that our algorithm can indeed hold up to previously published identification rates. With our defined metrics, we have given an estimate on the accuracy of energies reproduced by the algorithm. According to \(\Delta_{\mathrm{sum}}\) the event energies were on average reproduced with a deviation of 0.28 % and more than 99 % of all simulated events showed an energy deviation of less than 0.5 %. With this result, like Lee et al [15], we can claim a good linearity between recovered and true energy. Comparing to ICS recovery with a convolutional neural network [13], where a crystal selection accuracy of 99 % was stated for photoelectric effect and 91 % for ICS events, our correct crystal fraction of 96 % encompasses both of these values and tests in more detail whether a correct crystal recovery was made, which is why we believe that it should be considered on the same level or even improved. As the energy linearity in [13] is similar to that in [15], we can assume the same conclusion as before, that our energy recovery is on a similar level as the convolutional neural network. The tests to apply different weighting during the recovery process show that for simulations without uncertainties the mean metric values are deteriorated for photon weights and slightly improved for sigma weights. A disadvantage of the sigma weights is that they increase the number of iterations, which the algorithm goes through. The improvement with the sigma weights is, however, outweighed by the application of an energy filter. 
Even a filter of 5 keV already achieves better results than the sigma weights (see Fig. 8). Using the optimal energy filter value of 20 keV and no weights, achieves the best mean metric values and the best fraction of correctly assigned events (see Tab. 1). We have also found that the combination of sigma weights and energy filter has no significant effect on the metric values, while deteriorating the fraction of correctly assigned events. Applying uncertainties to the simulation, the algorithm can more easily reproduce the shape of the energy spectrum and with the energy filter manages to replicate also the position of the peak very well (see Fig. 9a). The mean metric values of the simulation data with uncertainties and energy filter are similar to those with no uncertainties and no energy filter. If, for simulations without uncertainties, we compare the mean metric values of the algorithm with a threshold in the iterative solving process and the algorithm with iteratively removed negative solutions and posteriori energy filter of 20 keV, it becomes evident that even a small threshold of 5 keV makes both metric values comparable. Only the correct crystal fraction is slightly reduced, but this effect can be removed by increasing the threshold to 10 keV or higher. Applying these higher thresholds on the simulation with uncertainties shows again similar results to those for application of the posteriori filter, but overall they are slightly deteriorated. At the same time, the use of thresholds reduces the mean and maximum number of solving iterations significantly (see Fig. 7 and Fig. 10). Filtering low energy crystal contribution, either by thresholding or by applying an external filter, can be considered a viable procedure in this case, as the electronics in the experiment impose a low energy cut-off on channel basis through rising edge thresholds or digital validation schemes, such as described in Tabacchini et al (2014) [18] and energy thresholds are commonly used in reconstruction to filter Compton scatter and with that also ICS [24]. Of course, the experimental argument is only valid if we are considering a one-to-one coupled detector block. For the more complicated case of light-sharing detectors, the method would have to be reevaluated. Considering the application of this algorithm with a phantom or patient as the gamma source, we believe that the algorithm would be capable of also recovering phantom or patient scatter events and ICS events related to them. The algorithm described in this work does not make any a-priori estimation regarding the energy of the incoming gamma. It primarily considers the measurement. The data on which the algorithm is applied is reduced to the channels, which have photon values larger zero and all crystals, which could contribute to a light output on these channels (see section III.). In this regard, a gamma, scattered in the phantom or patient, should not be different from a gamma scattered in the scintillator, where the scattered gamma is not stopped in the crystal. The latter case is covered in our simulation validation. Therefore, the algorithm should not distinguish between gamma photons scattered in the phantom or patient and those which do not interact with either. Thus, it should be able to recover the crystals of both events sufficiently. ## VI. Conclusion and Outlook In summary, we have developed and validated an algorithm, which recovers individual crystals from a gamma photon interaction with their deposited energies. 
The algorithm uses a measurement-based calibration in combination with a numerical least-square solver for the recovery. From the validation results we can conclude, that the algorithm can reproduce the events of the simulation satisfactorily. For a simulation without uncertainties, we have shown that up to \(96.4\,\mathrm{\char 37}\) of solution crystals with energies above \(20\,\mathrm{keV}\) correspond to the simulation crystals and \(95.9\,\mathrm{\char 37}\) of events have a total energy deviation of less than \(5\,\mathrm{\char 37}\) between simulation and algorithm. The application of weights to the LSM and measurement did not yield a significant performance improvement in combination with the mentioned energy filter. Similarly, using different methods for the iteration process did not show a dif Figure 10: Number of solving iterations within the algorithm for different weights. (a) shows a threshold of \(0\,\mathrm{keV}\). In (b) a threshold of \(5\,\mathrm{keV}\) is applied, (c) displays a threshold of \(10\,\mathrm{keV}\) and in (d) the case of a \(20\,\mathrm{keV}\) threshold is shown. ference in performance, but thresholding displayed a reduction in iteration numbers, making it favourable, especially for high data throughput. The performance of the algorithm holds for simulations with uncertainties. We see great potential to use the crystal interaction information to improve efficiency and signal-to-noise ratio of the PET reconstruction. Based on the crystal interaction information, Compton kinematics together with an incidence angle would allow to determine the most likely first interaction crystal. Passing on the most probable first interaction crystal would be a benefit to the reconstruction, as demonstrated by Surti et al (2018) [8], who showed that choosing the crystal with the second highest energy deposit improved positioning performance compared to using the crystal with the highest energy deposit. With our algorithm the second highest crystal method would not be limited to one-to-one coupled detectors anymore. Therefore, another future aim is to apply the developed algorithm to highly pixelated detectors and prove its efficiency. Additionally, it is planned to implement the algorithm for the PET-MRI system recently installed in University Medical Center Utrecht [25]. ## Acknowledgments This work was carried out with funding from the JARA Prep Fund Simulation and Data Science "Deep Image Data Analysis for Precision Medical Imaging" (PF-JARA-SDS008). ## Conflict of Interest The authors declare the following financial interests/personal relationships which may be considered as potential competing interests with the work reported in this paper: V.S. and D.S. are co-founders and employees of the spin-off company Hyperion Hybrid Imaging Systems GmbH, Aachen, Germany.
2309.16193
Disentangling mappings defined on ICIS
We study germs of hypersurfaces $(Y,0)\subset (\mathbb C^{n+1},0)$ that can be described as the image of $\mathscr A$-finite mappings $f:(X,S)\rightarrow (\mathbb C^{n+1},0)$ defined on an ICIS $(X,S)$ of dimension $n$. We extend the definition of the Jacobian module given by Fern\'andez de Bobadilla, Nu\~no-Ballesteros and Pe\~nafort-Sanchis when $X=\mathbb C^n$, which controls the image Milnor number $\mu_I(X,f)$. We apply these results to prove the case $n=2$ of the generalised Mond conjecture, which states that $\mu_I(X,f)\geq codim_{\mathscr A_e} (X,f)$, with equality if $(Y,0)$ is weighted homogeneous.
Alberto Fernández-Hernández, Juan J. Nuño-Ballesteros
2023-09-28T06:24:30Z
http://arxiv.org/abs/2309.16193v1
# Disentangling mappings defined on ICIS ###### Abstract We study germs of hypersurfaces \((Y,0)\subset(\mathbb{C}^{n+1},0)\) that can be described as the image of \(\mathscr{A}\)-finite mappings \(f:(X,S)\to(\mathbb{C}^{n+1},0)\) defined on an icis \((X,S)\) of dimension \(n\). We extend the definition of the Jacobian module given by Fernández de Bobadilla, Nuño-Ballesteros and Peñafort-Sanchis when \(X=\mathbb{C}^{n}\), which controls the image Milnor number \(\mu_{I}(X,f)\). We apply these results to prove the case \(n=2\) of the generalised Mond conjecture, which states that \(\mu_{I}(X,f)\geq\operatorname{codim}_{\mathscr{A}_{e}}(X,f)\), with equality if \((Y,0)\) is weighted homogeneous. Key words and phrases: Image Milnor number, the Mond conjecture, ICIS 2000 Mathematics Subject Classification: Primary 58K15; Secondary 32S30, 58K40 Work of J. J. Nuño-Ballesteros partially supported by Grant PID2021-124577NB-I00 funded by MCIN/AEI/ 10.13039/501100011033 and by "ERDF A way of making Europe". ## 1. Introduction Let \(f:(X,S)\to(\mathbb{C}^{n+1},0)\) be an \(\mathscr{A}\)-finite map germ defined on an icis \((X,S)\) of dimension \(n\), with image the hypersurface germ \((Y,0)\). Two basic invariants are attached to such a germ: the \(\mathscr{A}_{e}\)-codimension \(\operatorname{codim}_{\mathscr{A}_{e}}(X,f)\) and the image Milnor number \(\mu_{I}(X,f)\), the number of \(n\)-spheres in a bouquet which is homotopy equivalent to the disentanglement of the image (see Section 2 for the precise definitions). The generalised Mond conjecture states that \(\mu_{I}(X,f)\geq\operatorname{codim}_{\mathscr{A}_{e}}(X,f)\), with equality if \((Y,0)\) is weighted homogeneous. Our main contribution in this article is that we prove the generalised Mond conjecture in the case that \(n=2\) in its whole generality. Therefore, we show that the \(\mu\geq\tau\) statement can be extended to surfaces \((Y,0)\) in \((\mathbb{C}^{3},0)\) with \(\mathscr{A}\)-finite normalisation mapping on an icis. Formally, the main theorem of this paper is the following result: **Theorem 1.2**.: _Let \(f:(X,S)\to(\mathbb{C}^{3},0)\) be an \(\mathscr{A}\)-finite mapping where \((X,S)\) is an icis of dimension \(2\) and with image \((Y,0)\). Then,_ \[\mu_{I}(X,f)\geq\operatorname{codim}_{\mathscr{A}_{e}}(X,f),\] _with equality if \((Y,0)\) is weighted homogeneous._ In order to prove it, we adapt the construction of the module \(M(g)\) and its relative version for unfoldings \(M_{\operatorname{rel}}(G)\) that were defined in [4] by J. Fernández de Bobadilla, J. J. Nuño-Ballesteros and G. Peñafort-Sanchis. We extend to this general framework their main result, which states that the length of \(M(g)\) equals \(\mu_{I}(X,f)\) if and only if \(M_{\operatorname{rel}}(G)\) is a Cohen-Macaulay module, and that, if this is the case, the generalised Mond conjecture holds for the mapping \((X,f)\). ## 2. Mappings on icis The study of singularities of smooth mappings between manifolds is a classical subject in Singularity Theory. The infinitesimal methods were developed by Thom and Mather in the late sixties and for this reason the subject is known as Thom-Mather theory. We refer to the book [11] for a modern presentation of the theory, which also includes the extension to holomorphic germs between complex manifolds. In [10] Mond and Montaldi extended the Thom-Mather theory of singularities of mappings \(f\colon(X,S)\to(\mathbb{C}^{p},0)\) defined on an icis \((X,S)\). The crucial point here is that they consider deformations not only of the mapping \(f\), but also of the icis \((X,S)\). Here we summarise some of the basic definitions and properties, in order to make this paper more self-contained. For a more detailed account we refer to the original paper by Mond and Montaldi [10]. 
Throughout this section we work with holomorphic map germs \(f\colon(X,S)\to(\mathbb{C}^{p},0)\), where \((X,S)\) is an icis of dimension \(n\). It is usual to denote such a map germ by a pair \((X,f)\), although sometimes we may omit the base set of the germ if it does not provide relevant information or it is clear from the context. The first two definitions declare the type of equivalences and deformations we are dealing with in this theory. **Definition 2.1**.: Two holomorphic map germs \(f,g:(X,S)\to(\mathbb{C}^{p},0)\) are called _\(\mathscr{A}\)-equivalent_ if we have a commutative diagram where the columns are biholomorphisms. **Definition 2.2**.: An _unfolding of the pair_ \((X,f)\) is a map germ \(F\colon(\mathcal{X},S^{\prime})\to(\mathbb{C}^{p}\times\mathbb{C}^{r},0)\) together with a flat projection \(\pi\colon(\mathcal{X},S^{\prime})\to(\mathbb{C}^{r},0)\) and an isomorphism \(j\colon(X,S)\to\big{(}\pi^{-1}(0),S^{\prime}\big{)}\) such that the following diagram commutes where \(\pi_{2}:\mathbb{C}^{p}\times\mathbb{C}^{r}\to\mathbb{C}^{r}\) is the Cartesian projection. In Definition 2.2, \(\mathbb{C}^{r}\) is called the _parameter space of the unfolding_. It is common to denote the unfolding by \((\mathcal{X},\pi,F,j)\). For each parameter \(u\in\mathbb{C}^{r}\) in a neighbourhood of the origin, we have a mapping \(f_{u}\colon X_{u}\to\mathbb{C}^{p}\), where \(X_{u}:=\pi^{-1}(u)\), denoted also by \((X_{u},f_{u})\). **Definition 2.3**.: Two unfoldings \((\mathcal{X},\pi,F,j)\) and \((\mathcal{X}^{\prime},\pi^{\prime},F^{\prime},j^{\prime})\) over \(\mathbb{C}^{r}\) are _isomorphic_ if the following diagram commutes: where \(\Phi\) and \(\Psi\) are biholomorphisms and also \(\Psi\) is an unfolding of the identity over \(\mathbb{C}^{r}\). If \((\mathcal{X},\pi,F,j)\) is an unfolding of \((X,f)\) over \((\mathbb{C}^{r},0)\), a germ \(\rho:(\mathbb{C}^{s},0)\to(\mathbb{C}^{r},0)\) induces an unfolding \((\mathcal{X}_{\rho},\pi_{\rho},F_{\rho},j_{\rho})\) of \((X,f)\) by a _base change_ or, in other words, by the fibre product of \(F\) and \(\operatorname{id}_{\mathbb{C}^{p}}\times\rho\): where we omit the points of the germs for simplicity. The unfolding \((\mathcal{X},\pi,F,j)\) is _versal_ if every other unfolding, say \((\mathcal{X}^{\prime},\pi^{\prime},F^{\prime},j^{\prime})\), is isomorphic to an unfolding induced from the former by a base change, \((\mathcal{X}_{\rho},\pi_{\rho},F_{\rho},j_{\rho})\). A versal unfolding is called _miniversal_ if it has a parameter space of minimal dimension. **Definition 2.4**.: A germ \((X,f)\) is _stable_ if any unfolding is _trivial_, that is, isomorphic to the constant unfolding \((X\times\mathbb{C}^{r},\pi_{2},f\times\operatorname{id}_{\mathbb{C}^{r}},i)\). We say that \((X,f)\) has _isolated instability_ if there exists a representative \(f\colon X\to\mathbb{C}^{p}\) such that the restriction \(f\colon X\setminus f^{-1}(0)\to\mathbb{C}^{p}\setminus\{0\}\) has only stable singularities. A _stabilisation_ of \((X,f)\) is a \(1\)-parameter unfolding \((\mathcal{X},\pi,F,j)\) with the property that for any small enough \(s\in\mathbb{C}\setminus\{0\}\), \((X_{s},f_{s})\) has only stable singularities. Such a mapping \((X_{s},f_{s})\), with \(s\neq 0\), is called a _stable perturbation_ of \((X,f)\). 
A crucial fact is that any germ \((X,f)\) with isolated instability admits a stabilisation, provided that \((n,p)\) are nice dimensions in the sense of Mather or \(f\) has only kernel rank one singularities (that is, \(f\) admits an extension whose differential has kernel rank \(\leq 1\) everywhere). A proof in the case \(X=\mathbb{C}^{n}\) can be found in [11] and the extension to the case of mappings on \(\operatorname{ICIS}\) appears in [6]. Next, we recall the notion of \(\mathscr{A}_{e}\)-codimension of a germ \((X,f)\). In order to do this, we introduce the following notation: * \(\mathscr{O}_{p}\) is the local ring of holomorphic functions \((\mathbb{C}^{p},0)\to\mathbb{C}\), * \(\mathscr{O}_{X,S}\) is the (semi-)local ring of holomorphic functions \((X,S)\to\mathbb{C}\), * \(f^{*}:\mathscr{O}_{p}\to\mathscr{O}_{X,S}\) is the induced ring morphism \(f^{*}(h)=h\circ f\), * \(\theta_{\mathbb{C}^{p},0}\) is the \(\mathscr{O}_{p}\)-module of germs of vector fields on \((\mathbb{C}^{p},0)\), * \(\theta_{X,S}\) is the \(\mathscr{O}_{X,S}\)-module of germs of vector fields on \((X,S)\), * \(\theta(f)\) is the module of vector fields along \(f\), * \(\omega f:\theta_{\mathbb{C}^{p},0}\to\theta(f)\) is the mapping \(\omega f(\eta)=\eta\circ f\), * \(tf:\theta_{X,S}\to\theta(f)\) is the mapping \(tf(\xi)=d\tilde{f}\circ\xi\), for some analytic extension \(\tilde{f}\) of \(f\). **Definition 2.5**.: The \(\mathscr{A}_{e}\)_-codimension_ of \((X,f)\) is defined as \[\operatorname{codim}_{\mathscr{A}_{e}}(X,f)=\dim_{\mathbb{C}}\frac{\theta(f)}{tf(\theta_{X,S})+\omega f(\theta_{p})}+\sum_{x\in S}\tau(X,x),\] where \(\tau(X,x)\) is the Tjurina number of \((X,x)\). When \(\operatorname{codim}_{\mathscr{A}_{e}}(X,f)<\infty\), the germ \((X,f)\) is called \(\mathscr{A}\)-finite. The versality theorem holds also for mappings on ICIS, as the reader can find in [10]: \((X,f)\) is \(\mathscr{A}\)-finite if and only if it admits a versal unfolding and, in that case, \(\operatorname{codim}_{\mathscr{A}_{e}}(X,f)\) is equal to the number of parameters in a miniversal unfolding. As a consequence, \((X,f)\) is stable if and only if \((X,S)\) is smooth and \(f\) is stable in the usual sense (see [11]). Another important issue with \(\mathscr{A}\)-finiteness is the extension of the Mather-Gaffney geometric criterion for mappings on icis: \((X,f)\) is \(\mathscr{A}\)-finite if and only if it has isolated instability (see [6]). Finally, we will recall the definition of the image Milnor number in the case \(p=n+1\). The original definition when \(X=\mathbb{C}^{n}\) is due to Mond (see [9, 11]), but it can be adapted quite easily to mappings on icis (see [6]). In the case \(p\leq n\), the analogous invariant is called the discriminant Milnor number, considered for the first time by Damon and Mond in [2] in the case \(X=\mathbb{C}^{n}\) and extended to mappings on icis by Mond and Montaldi in [10]. The definition is clearly inspired by the classical Milnor number and is motivated by the following theorem. We denote by \(B_{\epsilon}\) the closed ball in \(\mathbb{C}^{n+1}\) of radius \(\epsilon>0\) centered at the origin. We assume \(f\colon(X,S)\to(\mathbb{C}^{n+1},0)\) is \(\mathscr{A}\)-finite and that either \((n,n+1)\) are nice dimensions of Mather or \(f\) has only corank one singularities. We take a stabilisation of \((X,f)\) with stable perturbation \((X_{s},f_{s})\). 
**Theorem 2.6**.: _[_9, 6_]_ _For all \(\epsilon,\eta\), with \(0<\eta\ll\epsilon\ll 1\) and for all \(s\in\mathbb{C}\), with \(0<|s|<\eta\), \(f_{s}(X_{s})\cap B_{\epsilon}\) has the homotopy type of a bouquet of spheres of dimension \(n\). Moreover, the number of such spheres, denoted by \(\mu_{I}(X,f)\), is independent of the choice of the stabilisation, the parameter \(s\) and the numbers \(\epsilon,\eta\)._ **Definition 2.7**.: With the notation of Theorem 2.6, \(f_{s}(X_{s})\cap B_{\epsilon}\) is called the _disentanglement_ and \(\mu_{I}(X,f)\) is called the _image Milnor number_ of \((X,f)\). The proof of Theorem 2.6 is based on arguments by Le [8] and by Siersma [15]. In fact, the original formulation in [15] gives a recipe of how to compute \(\mu_{I}(X,f)\) in a more algebraic way: **Theorem 2.8**.: _[_15_]_ _With the notation of Theorem 2.6, let \(G\colon(\mathbb{C}^{n+1}\times\mathbb{C},0)\to(\mathbb{C},0)\) be a reduced equation of the image of the stabilisation. Then,_ \[\mu_{I}(X,f)=\sum_{y\in B_{\epsilon}\setminus f_{s}(X_{s})}\mu(g_{s};y),\] _where \(g_{s}(y)=G(y,s)\) and \(\mu(g_{s};y)\) is the Milnor number of \(g_{s}\) at \(y\)._ _Remark 2.9_.: In order to compute the image Milnor number \(\mu_{I}(X,f)\), it is sometimes more convenient to consider a stable unfolding (i.e., an unfolding which is stable as a germ) instead of a stabilisation. In such a case, the bifurcation set \(\mathcal{B}\) is the set of parameters \(u\in\mathbb{C}^{r}\) in a neighbourhood of the origin such that \(f_{u}\colon X_{u}\to\mathbb{C}^{n+1}\) fails to have only stable singularities. When \((n,n+1)\) are nice dimensions or when \(f\) has only corank one singularities, \(\mathcal{B}\) is a proper closed analytic subset germ in a neighbourhood of \(0\) in \(\mathbb{C}^{r}\) (see [11] or [6]). A stabilisation can be constructed easily by taking a line \(L\subset\mathbb{C}^{r}\) with the property that \(L\cap\mathcal{B}=\{0\}\). If \(u\notin\mathcal{B}\), then \(f_{u}(X_{u})\cap B_{\epsilon}\) has the homotopy type of a bouquet of \(n\)-spheres and the number of such spheres is \(\mu_{I}(X,f)\). Analogously, if \(G\colon(\mathbb{C}^{n+1}\times\mathbb{C}^{r},0)\to(\mathbb{C},0)\) is a reduced equation of the image of the unfolding, then \[\mu_{I}(X,f)=\sum_{y\in B_{\epsilon}\setminus f_{u}(X_{u})}\mu(g_{u};y),\] where \(g_{u}(y)=G(y,u)\) and \(\mu(g_{u};y)\) is the Milnor number of \(g_{u}\) at \(y\). ## 3. The generalised version of the Jacobian module \(M(g)\) Let \(f:(X,S)\to(\mathbb{C}^{n+1},0)\) be an \(\mathscr{A}\)-finite mapping defined on an icis \((X,S)\subset(\mathbb{C}^{n+k},S)\) of dimension \(n\). As \(f\) is finite, its image \((Y,0)\) is a hypersurface of \((\mathbb{C}^{n+1},0)\), and hence it can be described by a reduced equation \(g\in\mathscr{O}_{n+1}\). Let \(h:(\mathbb{C}^{n+k},S)\to(\mathbb{C}^{k},0)\) be a mapping so that \((X,S)=h^{-1}(0)\). Consider an analytic extension \(\tilde{f}:(\mathbb{C}^{n+k},S)\to(\mathbb{C}^{n+1},0)\) of \(f\), and write \(\hat{f}=(\tilde{f},h):(\mathbb{C}^{n+k},S)\to(\mathbb{C}^{n+1+k},0)\). It is therefore clear that the restriction of \(\hat{f}\) to \((X,S)\) is precisely the mapping \((f,0)\). Hence, \(\hat{f}\) is a finite mapping, since \(\hat{f}^{-1}(0)=f^{-1}(0)=S\). Moreover, the diagram commutes, where \(i\) is the inclusion and \(j\) is the natural immersion. In particular, \(\hat{f}\) is an unfolding of \(f\) deforming both the mapping and the domain. 
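For orientation, the following standard example in the smooth case (\(X=\mathbb{C}\), \(k=0\)) illustrates Definitions 2.5 and 2.7; it is recalled here only as a well-known sanity check and is not taken from the paper itself.

```latex
% Classical example in the smooth case (X = \mathbb{C}, k = 0): the cusp.
\noindent\textbf{Example.}
Let $f(x)=(x^{2},x^{3})$, whose image is the cuspidal curve
$Y=\{y_{2}^{2}=y_{1}^{3}\}$. A miniversal unfolding is
$F(x,u)=(x^{2},x^{3}+ux,u)$, so $\operatorname{codim}_{\mathscr{A}_{e}}(f)=1$.
For $u\neq 0$ the perturbation $f_{u}(x)=(x^{2},x^{3}+ux)$ is stable and has a
single transverse double point, coming from $f_{u}(x)=f_{u}(-x)$, i.e.\
$x^{2}=-u$; hence its image is homotopy equivalent to one circle and
$\mu_{I}(f)=1$. Since $y_{2}^{2}-y_{1}^{3}$ is weighted homogeneous,
$\mu_{I}(f)=\operatorname{codim}_{\mathscr{A}_{e}}(f)$, in agreement with the
Mond conjecture, which is known for $n=1$.
```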
After taking representatives, the induced deformations of \(f\) are the mappings \(\hat{f}_{t}:X_{t}\subset\mathbb{C}^{n+k}\to\mathbb{C}^{n+1}\) defined as \(\hat{f}_{t}(x)=\tilde{f}(x,t)\), where \(X_{t}=h^{-1}(t)\) for \(t\in\mathbb{C}^{k}\) small enough. The key idea is that \(\hat{f}\) is the simplest unfolding of \(f\) with smooth source. Hence, the definition of the module for the mapping \(f\) will be performed through a specialisation of the one from \(\hat{f}\). Since \(\hat{f}\) is a finite mapping, its image \((\hat{Y},0)\) is a hypersurface of \((\mathbb{C}^{n+1+k},0)\). Furthermore, the restriction of \(\hat{f}\) to its image \((\mathbb{C}^{n+k},S)\to(\hat{Y},0)\) is the normalisation mapping of \((\hat{Y},0)\), and hence it induces a monomorphism of rings \(\mathscr{O}_{\hat{Y},0}\hookrightarrow\mathscr{O}_{n+k}\) that lets us consider \(\mathscr{O}_{\hat{Y},0}\) to be a subring of \(\mathscr{O}_{n+k}\). In this case, the diagram commutes, where \(\pi\) is the epimorphism associated to the natural inclusion of \((\hat{Y},0)\) in \((\mathbb{C}^{n+1+k},0)\). We consider both \(\mathscr{O}_{\hat{Y},0}\) and \(\mathscr{O}_{n+k}\) to be \(\mathscr{O}_{n+1+k}\)-modules via the corresponding morphisms \(\pi\) and \(\hat{f}^{*}\), respectively. Let us consider \(\hat{g}\in\mathscr{O}_{n+1+k}\) to be a reduced equation of \((\hat{Y},0)\) in such a way that \(\hat{g}\circ j=g\). The following result from R. Piene [14] relates the conductor ideal of \(\hat{f}\), given by \[C(\hat{f})=\{h\in\mathscr{O}_{\hat{Y},0}:h\cdot\mathscr{O}_{n+k}\subset\mathscr{O}_{\hat{Y},0}\},\] with the partial derivatives of \(\hat{g}\) and with the minors of the Jacobian matrix \(d\hat{f}\). **Theorem 3.1** ([14]).: _There exists a unique \(\lambda\in\mathscr{O}_{n+k}\) such that, for every \(l\in\{1,\ldots,n+1+k\}\),_ \[\partial_{l}\hat{g}\circ\hat{f}=(-1)^{l}\cdot\lambda\cdot\det(d\hat{f}_{1},\ldots,d\hat{f}_{l-1},d\hat{f}_{l+1},\ldots,d\hat{f}_{n+1+k}),\] _where \(\partial_{l}\hat{g}\) denotes the partial derivative of \(\hat{g}\) with respect to the \(l\)-th variable. Furthermore, the ideal \(C(\hat{f})\) is principal, and generated by the element \(\lambda\)._ Let us denote by \(J(\hat{g})\) the Jacobian ideal of \(\hat{g}\), which is generated by the partial derivatives \(\partial_{l}\hat{g}\). This result therefore shows that \(J(\hat{g})\cdot\mathscr{O}_{n+k}\subset C(\hat{f})\), where \(J(\hat{g})\cdot\mathscr{O}_{n+k}=\hat{f}^{*}(J(\hat{g}))\). On the other hand, since \(\hat{f}\) is a finite mapping with degree \(1\) onto its image, an application of Theorem 3.4 of D. Mond and R. Pellikaan [12] shows that \(\mathscr{F}_{1}(\hat{f})\cdot\mathscr{O}_{n+k}=C(\hat{f})\), where \(\mathscr{F}_{1}(\hat{f})\) denotes the first Fitting ideal of \(\mathscr{O}_{n+k}\) as an \(\mathscr{O}_{n+1+k}\)-module via \(\hat{f}^{*}\). Let us denote by \((y,z)\) the coordinates in \((\mathbb{C}^{n+1+k},0)\), with \(y\in\mathbb{C}^{n+1}\) and \(z\in\mathbb{C}^{k}\), and consider the Jacobian ideal \(J_{y}(\hat{g})=\langle\partial\hat{g}/\partial y_{1},\ldots,\partial\hat{g}/\partial y_{n+1}\rangle\) generated by the derivatives with respect to the variables \(y=(y_{1},\ldots,y_{n+1})\). 
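As a minimal smooth-case check (\(k=0\)) of Theorem 3.1, not taken from the paper, the element \(\lambda\) can be computed by hand for the cross-cap and seen to generate the conductor.

```latex
% Smooth-case illustration (k = 0) of Theorem 3.1: the cross-cap.
\noindent\textbf{Example.}
Let $\hat f(x,y)=(x,y^{2},xy)$, with reduced image equation
$\hat g=Z^{2}-X^{2}Y$. Then
$\partial_{X}\hat g\circ\hat f=-2xy^{2}$,
$\partial_{Y}\hat g\circ\hat f=-x^{2}$ and
$\partial_{Z}\hat g\circ\hat f=2xy$,
while the maximal minors of $d\hat f$ obtained by deleting the first, second
and third row are $-2y^{2}$, $x$ and $2y$, respectively. In each case
$\partial_{l}\hat g\circ\hat f=(-1)^{l}\,\lambda\cdot(\text{minor})$ with
$\lambda=-x$, and a direct computation shows that $C(\hat f)=(x)\,\mathscr{O}_{2}$,
so $\lambda$ indeed generates the conductor.
```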
**Definition 3.2**.: The restriction of \(\hat{f}^{*}\) to \(\mathscr{F}_{1}(\hat{f})\) induces an epimorphism of \(\mathscr{O}_{n+1+k}\)-modules \[\frac{\mathscr{F}_{1}(\hat{f})}{J_{y}(\hat{g})}\to\frac{C(\hat{f})}{J(\hat{g})\cdot\mathscr{O}_{n+k}}.\] We define \(N(\hat{g})\) to be the \(\mathscr{O}_{n+1+k}\)-module given by the kernel of this morphism, and define \[M(g)=N(\hat{g})\otimes\frac{\mathscr{O}_{n+1+k}}{\mathfrak{m}_{k}\cdot\mathscr{O}_{n+1+k}},\] where \(\mathfrak{m}_{k}=(z_{1},\ldots,z_{k})\) is generated by the parameters of the unfolding \(\hat{f}\) of \(f\). Note that \(M(g)\) has a natural \(\mathscr{O}_{n+1}\)-module structure inherited from the tensor product. Hence, \(M(g)\) is defined by taking into account that \(\hat{f}\) is an unfolding of \(f\), and following the spirit of [4] in the case of mappings with smooth source. Furthermore, it is relevant to notice that this module \(M(g)\) coincides with the one given in the smooth case just by taking \(\hat{f}=f\) and \(k=0\). _Remark 3.3_.: Since \(\hat{f}\) is defined on a smooth source, it lies naturally in the context of [4]. However, the module \(N(\hat{g})\) that we have defined is not exactly the same as the module that is defined in [4]. Indeed, the source of the morphism that defines \(N(\hat{g})\) has the Jacobian ideal \(J_{y}(\hat{g})\), where only partial derivatives with respect to the variables \(y_{1},\ldots,y_{n+1}\) are taken into consideration, while the module in the target does have all the partial derivatives. Although this may seem to be somewhat whimsical, this will be shown to be exactly what is needed for the module to specialise properly. This is due to the fact that, in general, \(J(\hat{g})\cdot\mathscr{O}_{n+k}\neq J_{y}(\hat{g})\cdot\mathscr{O}_{n+k}\), in contrast with what happens in the smooth case. This important fact is what influences this definition of \(M(g)\), and it explains why another possible definition may not specialise properly. _Remark 3.4_.: The module \(N(\hat{g})\) is determined by the short exact sequence \[0\to N(\hat{g})\to\frac{\mathscr{F}_{1}(\hat{f})}{J_{y}(\hat{g})}\to\frac{C(\hat{f})}{J(\hat{g})\cdot\mathscr{O}_{n+k}}\to 0.\] **Proposition 3.5**.: _The following formula holds:_ \[N(\hat{g})=\frac{(\hat{f}^{*})^{-1}(J(\hat{g})\cdot\mathscr{O}_{n+k})}{J_{y}(\hat{g})}.\] Proof.: By definition, it is clear that \[N(\hat{g})=\frac{(\hat{f}^{*})^{-1}(J(\hat{g})\cdot\mathscr{O}_{n+k})\cap\mathscr{F}_{1}(\hat{f})}{J_{y}(\hat{g})}.\] Since \(J(\hat{g})\cdot\mathscr{O}_{n+k}\subset C(\hat{f})\), we have \((\hat{f}^{*})^{-1}(J(\hat{g})\cdot\mathscr{O}_{n+k})\subset\mathscr{F}_{1}(\hat{f})\), and the formula follows. ## 4. A relative version for the generalised Jacobian module In this section, a relative version of the module for unfoldings of \(\hat{f}\) is defined. For the sake of simplicity, we consider unfoldings \(F:(\mathbb{C}^{n+k+r},S\times 0)\to(\mathbb{C}^{n+1+k+r},0)\) of the mapping \(\hat{f}\) instead of general unfoldings of \(f\) with possibly non-smooth source. Let \(G\in\mathscr{O}_{n+1+k+r}\) be an equation for the image of \(F\) such that \(G(y,z,0)=\hat{g}(y,z)\), where \((y,z,u)\) are the coordinates of \((\mathbb{C}^{n+1+k+r},0)\), with \(y\in\mathbb{C}^{n+1}\), \(z\in\mathbb{C}^{k}\) and \(u\in\mathbb{C}^{r}\). We then have that the diagram commutes, where \(\hat{i},\hat{j}\) are the natural immersions. 
Let us denote by \[J_{y}(G)=\Big{\langle}\frac{\partial G}{\partial y_{1}},\ldots,\frac{\partial G} {\partial y_{n+1}}\Big{\rangle},\ J_{z}(G)=\Big{\langle}\frac{\partial G}{ \partial z_{1}},\ldots,\frac{\partial G}{\partial z_{k}}\Big{\rangle},\] and \(J_{y,z}(G)=J_{y}(G)+J_{z}(G)\). We therefore define \(M_{\mathrm{rel}}(G)\) as the kernel of the module epimorphism \[\frac{\mathscr{F}_{1}(F)}{J_{y}(G)}\to\frac{C(F)}{J_{y,z}(G)\cdot\mathscr{O}_ {n+k+r}}.\] Hence, \(M_{\mathrm{rel}}(G)\) fits into the short exact sequence \[0\to M_{\mathrm{rel}}(G)\to\frac{\mathscr{F}_{1}(F)}{J_{y}(G)}\to\frac{C(F)}{ J_{y,z}(G)\cdot\mathscr{O}_{n+k+r}}\to 0.\] Furthermore, it is straightforward to verify that **Proposition 4.1**.: _The following formula holds:_ \[M_{\mathrm{rel}}(G)=\frac{(F^{*})^{-1}(J_{y,z}(G)\cdot\mathscr{O}_{n+k+r})}{J _{y}(G)}.\] In contrast with the case of \(M(g)\), we have that \(J_{y,z}(G)\cdot\mathscr{O}_{n+k+r}=J(G)\cdot\mathscr{O}_{n+k+r}\) in virtue of lemma 4.5 of [4]. Hence, they can be interchanged indistinctively in this formula. Notice, in addition, that the relative module in [4] is denoted by \(M_{y}(G)\) with the intention to highlight that the module is relative with respect to the parameters \(y=(y_{1},\ldots,y_{n+1})\). Since this interpretation in the case of mappings with non-smooth source is less clear, due to the fact that one has both parameters \(y,z\), the authors have decided to consider the notation \(M_{\mathrm{rel}}(G)\). In what follows, we show that the given definition of the \(\mathscr{O}_{n+1+k+r}\)-module \(M_{\mathrm{rel}}(G)\) properly specialises to \(M(g)\) seen as an \(\mathscr{O}_{n+1}\)-module. Before giving a proof of this result, let us comment on the specialisation method that should be considered. _Remark 4.2_.: It is relevant to notice that the specialisation process that is performed in [4] for mappings with smooth source restricts the relative module \(M_{y}(G)=M_{\mathrm{rel}}(G)\) to be an \(\mathscr{O}_{r}\)-module, and then \(M_{\mathrm{rel}}(G)\) is tensored with \(\mathscr{O}_{r}/\mathfrak{m}_{r}\). Hence, one naturally obtains a \(\mathbb{C}\)-vector space that is isomorphic to \(M(g)\) (see Theorem 4.6 of [4]). However, this specialisation process ignores the fact that \(M(g)\) is an \(\mathscr{O}_{n+1}\)-module. In order to take this into account, one should perform the specialisation process in a slightly different way. Since \(M_{\mathrm{rel}}(G)\) is an \(\mathscr{O}_{n+1+r}\)-module, the tensor product \[M_{\mathrm{rel}}(G)\otimes\frac{\mathscr{O}_{n+1+r}}{\mathfrak{m}_{r}\cdot \mathscr{O}_{n+1+r}}\] provides naturally an \(\mathscr{O}_{n+1}\)-module, due to the fact that \(\mathscr{O}_{n+1+r}/(\mathfrak{m}_{r}\cdot\mathscr{O}_{n+1+r})\cong\mathscr{O }_{n+1}\), where \(\mathfrak{m}_{r}=(u_{1},\ldots,u_{r})\) is the maximal ideal of \(\mathscr{O}_{r}\) generated by the parameters of the unfolding \(F\). In fact, this specialisation process both captures the spirit of forcing the parameters of the unfolding to be equal to \(0\) and keeps the natural \(\mathscr{O}_{n+1}\)-module structure of \(M(g)\), as it can be easily verified with minor modifications in the proofs of section 4 of [4]. Hence, it can be easily checked that the module \(M_{\mathrm{rel}}(G)\) of [4] in the smooth case satisfies that \[M_{\mathrm{rel}}(G)\otimes\frac{\mathscr{O}_{n+1+r}}{\mathfrak{m}_{r}\cdot \mathscr{O}_{n+1+r}}\cong M(g),\] where \(\cong\) denotes isomorphism of \(\mathscr{O}_{n+1}\)-modules. 
Although this is not relevant in the smooth case, we have already seen in Definition 3.2 that specialisations should be performed through this method in order to preserve the module structure. Taking this into account, we are now able to check that the module \(M_{\mathrm{rel}}(G)\) in this setting properly specialises to \(M(g)\):

**Theorem 4.3**.: _If \(n\geq 2\), then_ \[M_{\mathrm{rel}}(G)\otimes\frac{\mathscr{O}_{n+1+k+r}}{\mathfrak{m}_{k+r}\cdot\mathscr{O}_{n+1+k+r}}\cong M(g)\] _as \(\mathscr{O}_{n+1}\)-modules._

Proof.: The same proof as in the smooth case (Theorem 4.6 of [4] and the previous remark) shows that \[M_{\mathrm{rel}}(G)\otimes\frac{\mathscr{O}_{n+1+k+r}}{\mathfrak{m}_{r}\cdot\mathscr{O}_{n+1+k+r}}\cong N(\hat{g}),\] since this first specialisation restricts the parameters that were added in the unfolding \(F\) of \(\hat{f}\), and both mappings have smooth source. Indeed, the only difference is that the definition of \(N(\hat{g})\) has a term \(J_{y}(\hat{g})\), but it holds that \(\hat{j}(J_{y}(G))=J_{y}(\hat{g})\) and, as in the smooth case, \(\hat{j}(J_{y,z}(G))=J(\hat{g})\). Hence, \[M_{\mathrm{rel}}(G)\otimes\frac{\mathscr{O}_{n+1+k+r}}{\mathfrak{m}_{k+r}\cdot\mathscr{O}_{n+1+k+r}}=\left(M_{\mathrm{rel}}(G)\otimes\frac{\mathscr{O}_{n+1+k+r}}{\mathfrak{m}_{r}\cdot\mathscr{O}_{n+1+k+r}}\right)\otimes\frac{\mathscr{O}_{n+1+k}}{\mathfrak{m}_{k}\cdot\mathscr{O}_{n+1+k}}=N(\hat{g})\otimes\frac{\mathscr{O}_{n+1+k}}{\mathfrak{m}_{k}\cdot\mathscr{O}_{n+1+k}}=M(g),\] where the last equality holds by definition of \(M(g)\).

_Remark 4.4_.: As we commented before in Remark 4.2, the specialisation performed here is different from the one appearing in [4] for mappings with smooth source. Although this different approach lets us transfer the module structure, it is important for applications to analyse the analogous process. It turns out that, when we restrict \(M_{\mathrm{rel}}(G)\) to be an \(\mathscr{O}_{k+r}\)-module via the natural inclusion \(\mathscr{O}_{k+r}\to\mathscr{O}_{n+1+k+r}\) induced by the projection \(\mathbb{C}^{n+1}\times\mathbb{C}^{k+r}\to\mathbb{C}^{k+r}\), then it is straightforward to verify that \[M_{\mathrm{rel}}(G)\otimes\frac{\mathscr{O}_{k+r}}{\mathfrak{m}_{k+r}}\cong M(g)\] as \(\mathbb{C}\)-vector spaces, just by noticing that \[M_{\mathrm{rel}}(G)\otimes\frac{\mathscr{O}_{k+r}}{\mathfrak{m}_{k+r}}\cong\frac{M_{\mathrm{rel}}(G)}{\mathfrak{m}_{k+r}\cdot M_{\mathrm{rel}}(G)}=\frac{M_{\mathrm{rel}}(G)}{(\mathfrak{m}_{k+r}\cdot\mathscr{O}_{n+1+k+r})\cdot M_{\mathrm{rel}}(G)}\cong M_{\mathrm{rel}}(G)\otimes\frac{\mathscr{O}_{n+1+k+r}}{\mathfrak{m}_{k+r}\cdot\mathscr{O}_{n+1+k+r}}\cong M(g).\] In a nutshell, the specialisation process can be performed either by first restricting \(M_{\mathrm{rel}}(G)\) to be an \(\mathscr{O}_{k+r}\)-module, or by keeping its whole \(\mathscr{O}_{n+1+k+r}\)-module structure. The obtained modules are, respectively, \[M_{\mathrm{rel}}(G)\otimes(\mathscr{O}_{k+r}/\mathfrak{m}_{k+r})\text{ and }M_{\mathrm{rel}}(G)\otimes(\mathscr{O}_{n+1+k+r}/\mathfrak{m}_{k+r}\cdot\mathscr{O}_{n+1+k+r}).\] In both scenarios one recovers \(M(g)\) with some structure: in the former case, the result is a \(\mathbb{C}\)-vector space isomorphic to \(M(g)\), and, in the latter, an \(\mathscr{O}_{n+1}\)-module isomorphic to \(M(g)\). Hence, the specialisation process that one should perform depends on whether one needs to keep the module structure or not.
In most cases, the only information that needs to be kept is the complex dimension as a vector space, and hence both methods are valid. Lastly, an important result regarding the form of the module \(M_{\mathrm{rel}}(G)\) when \(F\) is stable is the following:

**Proposition 4.5**.: _Let \(F\) be a stable unfolding of \(\hat{f}\) and \(G\) an equation such that \(G\in J(G)\). Then,_ \[M_{\mathrm{rel}}(G)=\frac{J(G)}{J_{y}(G)}.\]

Proof.: Since \(F\) is stable and \(G\in J(G)\), the smooth version of the module \(M(G)\) from [4] satisfies that \(M(G)=0\). In this case, the formula of Proposition 5.1 of that article yields that \[M(G)=\frac{(F^{*})^{-1}(J(G)\cdot\mathscr{O}_{n+k+r})}{J(G)},\] and hence \((F^{*})^{-1}(J(G)\cdot\mathscr{O}_{n+k+r})=J(G)\). Furthermore, in the smooth case, it follows that \(J(G)\cdot\mathscr{O}_{n+k+r}=J_{y,z}(G)\cdot\mathscr{O}_{n+k+r}\). Hence, Proposition 4.1 yields \[M_{\mathrm{rel}}(G)=\frac{(F^{*})^{-1}(J_{y,z}(G)\cdot\mathscr{O}_{n+k+r})}{J_{y}(G)}=\frac{J(G)}{J_{y}(G)},\] and the claim follows.

Note that, since \(\hat{f}\) is a finite mapping, the existence of a stable unfolding \(F\) of \(\hat{f}\) is guaranteed. Furthermore, as is noted in [4], there is always a stable unfolding \(F\) which admits an equation \(G\) with the property that \(G\in J(G)\). Such an equation is referred to as a _good defining equation_. Indeed, if \(F(u,x)=(u,f_{u}(x))\) is a stable unfolding of \(\hat{f}\), then let \(F^{\prime}\) be the \(1\)-parameter stable unfolding of \(F\) given by \(F^{\prime}(t,u,x)=(t,u,f_{u}(x))\). Let \(G=0\) be a reduced equation of the image of \(F\) and consider \(G^{\prime}(t,u,y)=e^{t}G(u,y)\). It then follows that \(G^{\prime}=0\) is a reduced equation defining the image of \(F^{\prime}\) with the property that \(G^{\prime}=\partial_{t}G^{\prime}\in J(G^{\prime})\).

## 5. Relation between \(\dim_{\mathbb{C}}M(g)\) and \(\operatorname{codim}_{\mathscr{A}_{e}}(X,f)\)

Let \(F\) be a stable unfolding of \(\hat{f}\), and consider an equation \(G\) of the image of \(F\), namely \((Z,0)\), that satisfies \(G\in J(G)\). Then, the last result of the previous section showed that \(M_{\operatorname{rel}}(G)=J(G)/J_{y}(G)\). Let us relate this with the codimension of \((X,f)\), which can be determined through the formula \[\operatorname{codim}_{\mathscr{A}_{e}}(X,f)=\dim_{\mathbb{C}}\frac{\theta(i)}{ti(\theta_{n+1})+i^{*}(\operatorname{Derlog}Z)},\] where \(i:(\mathbb{C}^{n+1},0)\to(\mathbb{C}^{n+1+k+r},0)\) denotes the natural immersion (see [10] for more details). Recall that \(\operatorname{Derlog}Z=\{\xi\in\theta_{n+1+k+r}:\xi(G)=\lambda G\text{ for some }\lambda\in\mathscr{O}_{n+1+k+r}\}\), and \(\operatorname{Derlog}G=\{\xi\in\theta_{n+1+k+r}:\xi(G)=0\}\). Notice that \(G\in J(G)\), so that \(G=\sum_{s}a_{s}\partial_{s}G\), where \(\partial_{s}G\) denotes the partial derivative with respect to the \(s\)-th of the variables \((y,z,u)\in\mathbb{C}^{n+1+k+r}\), with \(s\in\{1,\ldots,n+1+k+r\}\). Hence, the vector field \(\epsilon=\sum_{s}a_{s}\partial_{s}\), where \(\partial_{s}\) denotes the coordinate vector field associated with the \(s\)-th coordinate, satisfies that \(\epsilon(G)=G\). Furthermore, \(\operatorname{Derlog}Z=\operatorname{Derlog}G\oplus\langle\epsilon\rangle\).
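This splitting of \(\operatorname{Derlog}Z\) is standard; for completeness, here is a short verification (our own routine check, using only that \(\mathscr{O}_{n+1+k+r}\) is an integral domain and \(G\neq 0\), rather than part of the original argument). If \(\xi\in\operatorname{Derlog}Z\), then \(\xi(G)=\lambda G\) for some \(\lambda\in\mathscr{O}_{n+1+k+r}\), and therefore \[(\xi-\lambda\epsilon)(G)=\lambda G-\lambda\,\epsilon(G)=\lambda G-\lambda G=0,\] so \(\xi=(\xi-\lambda\epsilon)+\lambda\epsilon\in\operatorname{Derlog}G+\langle\epsilon\rangle\). Conversely, both \(\operatorname{Derlog}G\) and \(\langle\epsilon\rangle\) are contained in \(\operatorname{Derlog}Z\). Finally, the sum is direct: if \(a\epsilon\in\operatorname{Derlog}G\) for some \(a\in\mathscr{O}_{n+1+k+r}\), then \(aG=a\,\epsilon(G)=0\), which forces \(a=0\).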
We therefore have that \[\operatorname{codim}_{\mathscr{A}_{e}}(X,f)=\dim_{\mathbb{C}}\frac{\theta(i)}{ti(\theta_{n+1})+i^{*}(\operatorname{Derlog}G)+i^{*}(\epsilon)}.\] Notice that the evaluation mapping \(\operatorname{ev}:\theta_{n+1+k+r}\to J(G)\) given by \(\xi\mapsto\xi(G)\) is a surjective mapping with kernel \(\operatorname{Derlog}G\). Hence, it induces an isomorphism \[\frac{\theta_{n+1+k+r}}{\operatorname{Derlog}G}\cong J(G).\] Thus, \[\frac{\theta_{n+1+k+r}}{\langle\frac{\partial}{\partial y_{1}},\ldots,\frac{\partial}{\partial y_{n+1}}\rangle+\operatorname{Derlog}G}\cong\frac{J(G)}{J_{y}(G)}=M_{\operatorname{rel}}(G).\] Tensoring with \(\mathscr{O}_{k+r}/\mathfrak{m}_{k+r}\) yields that \[\frac{\theta(i)}{ti(\theta_{n+1})+i^{*}(\operatorname{Derlog}G)}\cong M(g).\] Now, notice that the evaluation map acting on \(\epsilon\) gives \(\epsilon(G)=G\), and hence \(i^{*}(\operatorname{ev}(\epsilon))=i^{*}(G)=g\). Therefore, if \(K(g)=(J(g)+(g))/J(g)\), then \[0\to K(g)\to\frac{\theta(i)}{ti(\theta_{n+1})+i^{*}(\operatorname{Derlog}G)}\to\frac{\theta(i)}{ti(\theta_{n+1})+i^{*}(\operatorname{Derlog}G)+i^{*}(\epsilon)}\to 0\] is a short exact sequence. Indeed, the evaluation map satisfies that \(\operatorname{ev}(ti(\theta_{n+1}))=J(g)\) and that \(\operatorname{ev}(i^{*}(\operatorname{Derlog}G))=0\). Hence, the evaluation map yields an isomorphism \[\frac{ti(\theta_{n+1})+i^{*}(\operatorname{Derlog}G)+i^{*}(\epsilon)}{ti(\theta_{n+1})+i^{*}(\operatorname{Derlog}G)}\cong\frac{J(g)+(g)}{J(g)}=K(g).\] After taking lengths in the exact sequence, and taking into account the previous assertions, we obtain the following result:

**Theorem 5.1**.: _In the above conditions,_ \[\dim_{\mathbb{C}}M(g)=\dim_{\mathbb{C}}K(g)+\operatorname{codim}_{\mathscr{A}_{e}}(X,f).\] _In particular, \(\dim_{\mathbb{C}}M(g)\geq\operatorname{codim}_{\mathscr{A}_{e}}(X,f)\), with equality in case that \(g\) is weighted homogeneous._

Furthermore, this formula shows that \(\dim_{\mathbb{C}}M(g)\) only depends on the isomorphism class of \(g\), and depends neither on the mapping \(f\) nor on the chosen extension \(\hat{f}\). When either \((n,n+1)\) are nice dimensions or the map-germ has corank one, all stable singularities are weighted homogeneous (see [11]). This gives the following direct consequence of Theorem 5.1:

**Corollary 5.2**.: _Assume that either \((n,n+1)\) are nice dimensions or \((X,f)\) has corank one. Then, \(M(g)=0\) if and only if \((X,f)\) is stable._

## 6. The Jacobian module and the Mond conjecture

This section is the centerpiece of this paper. We present one of the main results we aim to establish, namely, a formula for the image Milnor number expressed in terms of the Samuel multiplicity of the module \(M_{\mathrm{rel}}(G)\). Additionally, we prove that the generalised Mond conjecture holds provided the module \(M_{\mathrm{rel}}(G)\) is Cohen-Macaulay. The analogous formula is the key ingredient in the proof of the main result of [4]; it is stated here as Theorem 6.1, and the proof given in [4] can be easily adapted to this new setting:

**Theorem 6.1**.: _Assume that either \((n,n+1)\) are nice dimensions or \((X,f)\) has corank one. Let \(F\) be a stable unfolding of \(\hat{f}\). With the notations of Section 3,_ \[\mu_{I}(X,f)=e(\mathfrak{m}_{k+r},M_{\mathrm{rel}}(G)),\] _that is, \(\mu_{I}(X,f)\) equals the Samuel multiplicity of the \(\mathscr{O}_{k+r}\)-module \(M_{\mathrm{rel}}(G)\) with respect to \(\mathfrak{m}_{k+r}\)._

Proof.: Take a representative of \(F\) and let \(w\in\mathbb{C}^{k}\times\mathbb{C}^{r}\) be a generic value.
The conservation of multiplicity implies that \[e\Big{(}\mathfrak{m}_{k+r},M_{\mathrm{rel}}(G)\Big{)}=\sum_{p\in B_{\epsilon}}e\Big{(}\mathfrak{m}_{k+r,w},M_{\mathrm{rel}}(G)_{(p,w)}\Big{)}.\] In order to compute the previous multiplicity, let us first consider the points \(p\in Y_{w}\cap B_{\epsilon}\), where \(Y_{w}\) is the image of \(f_{w}:X_{w}\to\mathbb{C}^{n+1}\). Since the module \(M_{\mathrm{rel}}(G)\) specialises to \(M(g)\), it follows that \[M_{\mathrm{rel}}(G)_{(p,w)}\otimes\frac{\mathscr{O}_{k+r,w}}{\mathfrak{m}_{k+r,w}}\cong M(g_{w})_{p}\] as \(\mathbb{C}\)-vector spaces by virtue of Remark 4.4. Since \(F\) is a stable unfolding, it follows that, provided \(w\) is generic, \(f_{w}\) is a stable mapping, and hence \(M(g_{w})_{p}=0\) for each \(p\in Y_{w}\), by Corollary 5.2. Hence, the points \(p\in Y_{w}\cap B_{\epsilon}\) do not contribute to the term \(e(\mathfrak{m}_{k+r},M_{\mathrm{rel}}(G))\). On the other hand, if \(p\in B_{\epsilon}\setminus Y_{w}\), then \[M_{\mathrm{rel}}(G)_{(p,w)}=\frac{\mathscr{O}_{\mathbb{C}^{k+r}\times B_{\epsilon},(p,w)}}{J_{y}(G)}\] is a module of dimension \(\geq k+r\). Indeed, this follows from the exact sequence that defines \(M_{\mathrm{rel}}(G)\), since the localisation of \(C(F)/(J_{y,z}(G)\cdot\mathscr{O}_{n+k+r})\) is zero if \(p\notin Y_{w}\), and \(\mathscr{F}_{1}(F)\) localises to \(\mathscr{O}_{\mathbb{C}^{k+r}\times B_{\epsilon},(p,w)}\). Moreover, \[M_{\mathrm{rel}}(G)_{(p,w)}\otimes\frac{\mathscr{O}_{k+r,w}}{\mathfrak{m}_{k+r,w}}\cong\frac{\mathscr{O}_{B_{\epsilon},p}}{J(g_{w})}\] is a module of dimension \(0\), since it has finite length due to the fact that \(g_{w}\) has an isolated singularity. This implies that the dimension of \(M_{\mathrm{rel}}(G)_{(p,w)}\) is \(\leq k+r\). Hence, \(M_{\mathrm{rel}}(G)_{(p,w)}\) is a complete intersection ring, and in particular a Cohen-Macaulay \(\mathscr{O}_{k+r}\)-module of dimension \(k+r\). Hence, \[e\Big{(}\mathfrak{m}_{k+r,w},M_{\mathrm{rel}}(G)_{(p,w)}\Big{)}=\dim_{\mathbb{C}}\left(M_{\mathrm{rel}}(G)_{(p,w)}\otimes\frac{\mathscr{O}_{k+r,w}}{\mathfrak{m}_{k+r,w}}\right)=\dim_{\mathbb{C}}\frac{\mathscr{O}_{B_{\epsilon},p}}{J(g_{w})}=\mu(g_{w},p).\] By Siersma's Theorem 2.8, it follows that \(\sum_{p\in B_{\epsilon}\setminus Y_{w}}\mu(g_{w},p)=\mu_{I}(X,f)\).

Once this result has been proven, the following theorem is now an immediate consequence:

**Theorem 6.2**.: _In the above conditions, the following statements are equivalent and imply the generalised Mond conjecture for \((X,f)\):_
1. _\(\dim_{\mathbb{C}}M(g)=\mu_{I}(X,f)\),_
2. _\(M_{\mathrm{rel}}(G)\) is a Cohen-Macaulay \(\mathscr{O}_{k+r}\)-module of dimension \(k+r\)._

_Furthermore, if \(g\) is weighted homogeneous and the generalised Mond conjecture holds for \((X,f)\), then the above assertions hold._

Proof.: Recall that the Samuel multiplicity of an \(R\)-module \(M\) is at most the length of \(M/(\mathfrak{m}\cdot M)\), with equality if and only if \(M\) is a Cohen-Macaulay module with the same dimension as \(R\).
Hence, \(M_{\operatorname{rel}}(G)\) is Cohen-Macaulay of dimension \(k+r\) if and only if \[\mu_{I}(X,f)=e\Big{(}\mathfrak{m}_{k+r},M_{\operatorname{rel}}(G)\Big{)}=\dim_{\mathbb{C}}\frac{M_{\operatorname{rel}}(G)}{\mathfrak{m}_{k+r}M_{\operatorname{rel}}(G)}=\dim_{\mathbb{C}}M_{\operatorname{rel}}(G)\otimes\frac{\mathscr{O}_{k+r}}{\mathfrak{m}_{k+r}}=\dim_{\mathbb{C}}M(g).\] Therefore, if one of the items holds, it then follows that \[\mu_{I}(X,f)=\dim_{\mathbb{C}}M(g)=\dim_{\mathbb{C}}K(g)+\operatorname{codim}_{\mathscr{A}_{e}}(X,f)\geq\operatorname{codim}_{\mathscr{A}_{e}}(X,f),\] and hence, the generalised Mond conjecture holds for \((X,f)\). Moreover, if \(g\) is weighted homogeneous, then \(K(g)=0\). Thus, if the generalised Mond conjecture holds for \((X,f)\), then \[\mu_{I}(X,f)=\operatorname{codim}_{\mathscr{A}_{e}}(X,f)=\dim_{\mathbb{C}}M(g)-\dim_{\mathbb{C}}K(g)=\dim_{\mathbb{C}}M(g).\] Hence, the above assertions hold.

_Remark 6.3_.: Recent work by Nuño-Ballesteros and Giménez Conejero [7] has shown that the requirement for \(M_{\operatorname{rel}}(G)\) to be \((k+r)\)-dimensional can be eliminated from the second condition. Thus, we have that \(\dim_{\mathbb{C}}M(g)=\mu_{I}(X,f)\) if and only if \(M_{\operatorname{rel}}(G)\) is a Cohen-Macaulay module. Indeed, if the dimension of \(M_{\operatorname{rel}}(G)\) is strictly less than \(k+r\), then \(\mu_{I}(X,f)\) is zero, which, according to [7], implies that \((X,f)\) is a stable map-germ, and hence \(\operatorname{codim}_{\mathscr{A}_{e}}(X,f)=0\).

This result shows that, in order to establish the validity of the Mond conjecture for \(f\), it suffices to check the Cohen-Macaulay property of the module \(M_{\operatorname{rel}}(G)\). While it has been observed that this module is Cohen-Macaulay in every computed example, it remains an open question to provide a rigorous proof of this statement.

## 7. Proof of the generalised Mond conjecture for \(n=2\)

In this section, we achieve the main objective of the article, which is to show the validity of the generalised Mond conjecture for \(n=2\) by employing the results from the previous section. Building on the main result of the previous section, to establish the generalised Mond conjecture for mappings \(f:(X,0)\to(\mathbb{C}^{3},0)\), where \((X,0)\) is a \(2\)-dimensional ICIS, it suffices to check that \(M_{\operatorname{rel}}(G)\) is a Cohen-Macaulay module of dimension \(k+r\). Notice first that the dimension of \(M_{\operatorname{rel}}(G)\) is \(\leq k+r\), due to the fact that \[\dim_{\mathbb{C}}M_{\operatorname{rel}}(G)\otimes\frac{\mathscr{O}_{n+1+k+r}}{\mathfrak{m}_{k+r}\cdot\mathscr{O}_{n+1+k+r}}=\dim_{\mathbb{C}}M(g)<+\infty.\] Therefore, it is enough to show that \(\operatorname{depth}M_{\operatorname{rel}}(G)\geq k+r\). To achieve this, we apply the depth lemma to the exact sequence \[0\to M_{\operatorname{rel}}(G)\to\frac{\mathscr{F}_{1}(F)}{J_{y}(G)}\to\frac{C(F)}{J_{y,z}(G)\cdot\mathscr{O}_{n+k+r}}\to 0\] to obtain that \[\operatorname{depth}M_{\operatorname{rel}}(G)\geq\min\left\{\operatorname{depth}\frac{\mathscr{F}_{1}(F)}{J_{y}(G)},\operatorname{depth}\frac{C(F)}{J_{y,z}(G)\cdot\mathscr{O}_{n+k+r}}+1\right\}.\] Hence, both terms of the minimum have to be shown to be greater than or equal to \(k+r\).
For the module \(C(F)/J_{y,z}(G)\cdot\mathscr{O}_{n+k+r}\), it has been checked in Remark 3.9 of [4] that this module is Cohen-Macaulay of dimension \(n+k+r-2\) provided \(n\geq 2\), due to the fact that it is isomorphic to the determinantal ring \(\mathscr{O}_{n+k+r}/R(F)\), where \(R(F)\) denotes the ramification ideal of \(F\). In particular, it follows that \[\operatorname{depth}\frac{C(F)}{J_{y,z}(G)\cdot\mathscr{O}_{n+k+r}}+1=n+k+r-1\geq k+r.\] Therefore, it is enough to verify that \(\operatorname{depth}\mathscr{F}_{1}(F)/J_{y}(G)\geq k+r\). Notice that, up to this point, the assumption \(n=2\) has not been used yet. In general, the module \(\mathscr{F}_{1}(F)/J_{y}(G)\) has dimension \[\dim\frac{\mathscr{F}_{1}(F)}{J_{y}(G)}=\max\left\{\dim M_{\operatorname{rel}}(G),\dim\frac{C(F)}{J_{y,z}(G)\cdot\mathscr{O}_{n+k+r}}\right\}=n+k+r-2,\] due to the fact that \(\dim M_{\operatorname{rel}}(G)\leq k+r\leq n+k+r-2\) provided \(n\geq 2\). Therefore, it is not expected that \(\mathscr{F}_{1}(F)/J_{y}(G)\) will be Cohen-Macaulay for every \(n\geq 2\). The only case in which we can expect this is when \(n=2\), since, in this case, its dimension is precisely \(k+r\). To verify this claim, we make use of Pellikaan's Theorem:

**Theorem 7.1** (Pellikaan, Section 3 of [13]).: _If \(J\subset F\subset R\) are ideals of \(R\) where \(J\) is generated by \(m\) elements, \(\operatorname{grade}\left(F/J\right)\geq m\) and \(\operatorname{pd}\left(R/F\right)=2\), then \(F/J\) is a perfect module and \(\operatorname{grade}\left(F/J\right)=m\)._

This result plays a crucial role in showing that the module is indeed Cohen-Macaulay, as is proven in the following result:

**Theorem 7.2**.: _If \(n=2\), then \(\mathscr{F}_{1}(F)/J_{y}(G)\) is a Cohen-Macaulay module._

Proof.: We follow the notation of the previous result, taking \(R=\mathscr{O}_{n+1+k+r}\), \(F=\mathscr{F}_{1}(F)\) and \(J=J_{y}(G)\). It follows from the proof of Theorem 3.4 of [12] that \(R/F=\mathscr{O}_{n+1+k+r}/\mathscr{F}_{1}(F)\) is a determinantal ring of dimension \(n+k+r-1\), and hence Cohen-Macaulay. Hence, by the Auslander-Buchsbaum formula, \[\operatorname{pd}\left(R/F\right)=\operatorname{depth}R-\operatorname{depth}R/F=\dim R-\dim R/F=n+1+k+r-(n+k+r-1)=2.\] Moreover, \(J=J_{y}(G)\) is clearly generated by \(n+1=3\) elements, namely the partial derivatives of \(G\) with respect to the variables \(y_{1},\ldots,y_{n+1}\). Furthermore, \[\operatorname{grade}\left(F/J\right)=\operatorname{depth}\left(\operatorname{Ann}\left(F/J\right),\mathscr{O}_{n+1+k+r}\right)=\operatorname{ht}\operatorname{Ann}\left(F/J\right)=\dim\mathscr{O}_{n+1+k+r}-\dim F/J=n+1+k+r-(n+k+r-2)=3.\] Thus, Pellikaan's Theorem states that \(F/J\) is a perfect \(\mathscr{O}_{n+1+k+r}\)-module. Furthermore, since \(\mathscr{O}_{n+1+k+r}\) is a local Cohen-Macaulay ring, \(F/J\) is Cohen-Macaulay.

Now the main result of this article follows easily as an application of the results presented in the previous section.

Proof of Theorem 1.2.: Theorem 7.2, combined with the depth lemma argument above and the Cohen-Macaulayness of \(C(F)/J_{y,z}(G)\cdot\mathscr{O}_{n+k+r}\), implies that \(M_{\operatorname{rel}}(G)\) is a Cohen-Macaulay module. Hence, Theorem 6.2 implies that the generalised Mond conjecture holds for \((X,f)\). In this setting, the image Milnor number therefore satisfies that \(\mu_{I}(X,f)=\dim_{\mathbb{C}}M(g)\).
This gives an effective method to compute this number, as the following example shows:

**Example 7.3**.: Let \((X,0)\subset(\mathbb{C}^{3},0)\) be the hypersurface defined by \(x^{3}+y^{3}-z^{2}=0\) and let \(f:(X,0)\to(\mathbb{C}^{3},0)\) be the \(\mathscr{A}\)-finite mapping \(f(x,y,z)=(x,y,z^{3}+xz+y^{2})\). In this case, \(\hat{f}(x,y,z)=(x,y,z^{3}+xz+y^{2},x^{3}+y^{3}-z^{2})\) turns out to be a stable mapping. With Singular [3], one easily obtains that \(\operatorname{codim}_{\mathscr{A}_{e}}(X,f)=6\) and \(\mu_{I}(X,f)=\dim_{\mathbb{C}}M(g)=6\).
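To give an idea of how such a computation can be organised, the following Singular fragment is a minimal sketch of one possible first step: obtaining a defining equation \(\hat{g}\) of the image of \(\hat{f}\) by elimination. It is not the script used for the example above; the target coordinate names \(a,b,c,d\) and the elimination-based approach are our own choices.

```
// Minimal sketch (assumed setup, not the authors' script): compute a defining
// equation ghat of the image of fhat(x,y,z) = (x, y, z^3+x*z+y^2, x^3+y^3-z^2).
ring R = 0, (x,y,z,a,b,c,d), dp;       // source coords x,y,z; target coords a,b,c,d
ideal graphI = a - x,
               b - y,
               c - (z^3 + x*z + y^2),
               d - (x^3 + y^3 - z^2);   // ideal of the graph of fhat
ideal imI = eliminate(graphI, x*y*z);   // eliminate the source variables
imI;                                    // expected output: one generator, ghat
```

From such an equation one could, in principle, continue towards \(\dim_{\mathbb{C}}M(g)\) via the formula of Proposition 3.5, although we do not reproduce that part of the computation here.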